Hacker News | TomasBM's comments

Your criticism of this study is roughly on point, IMO. It's not badly designed by any means, but it's an early look. There are already similar studies on the (cognitive) effects of LLMs on learning, but I suspect this one gets the attention because it's associated with the MIT brand.

That said, these kinds of studies are important, because they reveal that some cognitive changes are evidently happening. Like you said, it's up to us to determine if they're positive or negative, but as is probably obvious to many, it's difficult to argue for the status quo.

If it's a negative change, teachers have to go back to paper-and-pen essay writing, which I was personally never good at. Or they need to figure out stable ways to prevent students from using LLMs, if they are to learn anything about writing.

If it's a positive change, i.e., we now have more time to do "better" things (or do things better), then teachers need to figure out substitutes. Suddenly, a common way of testing is now outdated and irrelevant, but there's no clear thing to do instead. So, what do they do?


I've also noticed this expectation. Where does it come from?

FOSS means that the code is free and open-source, not the schedule or the direction of its developer(s).


I dunno, I think at one point there was a merge similar to what happened with "git" and "github", where "open source the licensing" somehow became the same as "open source the collaborative and open software development process", and nowadays people get kind of confused when you say you're doing open source yet you don't accept pull/merge requests.

I propose the FOOSSNO license ("fuck off, it's open source, no obligation") for communication purposes. ;-)

Maybe WTFPL can send the message across? Could maybe make a V3 and add as a second point to it: "1. And don't tell me/ask me about it, just DO WHAT THE FUCK YOU WANT TO"

My 2 cents:

- You now likely have the money/time to pursue passions you didn't know you had, or that you would have developed if you hadn't pursued software development so intensely.

- Even if you had/have passion for computers, being paid to do something you wouldn't do otherwise can quickly drain that passion.

- We're built for sunlight and exercise, not LED light and sitting, so you may have felt increasing physical discomfort that only the former can alleviate.

- Woodworking and farming were never lucrative enough (or as lucrative as computer work) to convince you to make the switch for money.


> Even if you had/have passion for computers, being paid to do something you wouldn't do otherwise can quickly drain that passion.

Do you think this is true if you have your own apps/products or similar?


Depends on whether you find that fun enough to counterbalance the bad sides of work.

There's a common trope that monetizing your hobby is a good way to start hating that hobby. But it doesn't have to be that way; a lot of people love what they do.


Although I agree with your overall point, there is a middle ground here: (commercially) non-free but open source software.

I believe that's where the biggest disagreement ITT lies. There are currently good ways to do FOSS, proprietary closed-source, and free closed-source software development. But if the OSS is worth charging commercial users for, devs are left with asking for donations, SaaS, or "pay me to work on this issue/feature".

There arguably should be better mechanisms to reward OSS development, even if the largest part of an OSS dev's motivation is intrinsic.


I'm sorry, but in my mind, open source and commercial don't mix. What would a license for that even look like?

I'm not saying going commercial is bad, go for it. I'm just saying that when we are talking open source, we are not talking money, that's all. Money is just one of many things in life, sure it's important, but not by itself.


In addition to other great suggestions (a good night's sleep being #1), two things come to mind:

- Write down thoughts that pop up and seem potentially useful, and then forget about them. It's easier said than done, but you have to balance "this may be useful later" with "I've got better things to do now". For me, knowing I have the idea recorded somewhere puts my mind at ease.

- Feel free to just get rid of accumulated browser tabs, random to-do's or even mental notes. If you always have multiple things open, do a hard reset every now and then. It's difficult to let go at first, but you may realize that if an idea was really worth entertaining, it will come back to you. No great contribution is the result of one single thought, IMO.

Note: I lean closer to scattered attention than AD(H)D, because it doesn't affect my normal functioning. Sure, it gets progressively more annoying when deadlines are looming, but it also provides a great source of creativity.


I don't do research that requires fieldwork, but even in office and industrial settings, I notice that there's less need for, and less interest in, visits.

Of course, in-person exchanges still happen, but there's something of a default to do most things remotely because it's more efficient (and honestly, easier for all parties involved). The result is that you don't get to see cool or unusual machines/setups that often, and some of the flair of doing research is lost.

I can imagine that that's especially painful for new ecologists, because fieldwork is also a way to experience things that you otherwise wouldn't. Hopefully, we can bring some of it back with edge devices and models.


Somewhat tangential to the article, but why is SQL considered a programming language?

I understand that's the convention according to the IEEE and Wikipedia [1], but the name itself - Structured Query Language - reveals that its purpose is limited by design. It's a computer language [2] for sure, but why programming?

[1] https://en.wikipedia.org/wiki/List_of_programming_languages

[2] https://en.wikipedia.org/wiki/Computer_language


"structured query language" is actually a backronym, SEQUEL is indeed a programming language and the only mainstream 4GL. consider the output of the compiler (query planner) is a program with specific behavior, just that your sql code is not the only source - the other inputs are the schema and its constraints and statistics. it's an elegant way to factor the sourcecode for a program, I wonder if Raymond Boyce didn't die young what kind of amazing technology we might have today.


With support for Common Table Expressions (CTE), SQL becomes a Turing complete language. To be honest, it makes me somewhat uncomfortable that a query sent to a DB server could be non-terminating and cause a server thread to enter an infinite loop. On the other hand, the practical difference between a query that contains an infinite loop and one that runs for days is probably negligible.
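For illustration, here's a minimal sketch of such a non-terminating query (standard WITH RECURSIVE syntax; whether it actually spins forever or gets cut off depends on the server's statement timeout or recursion limits):

    -- The recursive member always produces a new row, so this never
    -- terminates on its own; the server just keeps counting upward.
    WITH RECURSIVE counter(n) AS (
        SELECT 1
        UNION ALL
        SELECT n + 1 FROM counter
    )
    SELECT n FROM counter;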


To be honest, I'd like to chip in that it's technically possible to implement brainf*ck in SQL. It's an esoteric programming language, but a programming language nonetheless.

https://www.reddit.com/r/SQL/comments/81barp/i_implemented_a...

Btw, this runs in SQLite; you can try it yourself if you're interested.

Source: I was once thinking of creating a programming-language paradigm, something like SQLite/Smalltalk, where resumed-execution / CRIU-like possibilities were built in. Let me know if you know of something like this. I kinda gave up on the project, but I knew there was one language that supported this paradigm; it was very complicated to understand and had a lot of other first-time paradigms, like the code itself / the AST being sort of a database itself. But so goes the tangent.


Because stored procedures do exist, and there isn't a single production-quality RDBMS that doesn't go beyond DDL and DML by adding structured programming extensions.

Also, even within the standard itself, it allows for declarative programming.


What is your definition of 'programming language'?


It should have arrays, loops, and conditionals.


Slightly simplistic: table rows cover arrays, recursive CTEs cover loops, and JOIN/WHERE cover conditionals.
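For example, a minimal sketch (assuming a dialect with recursive CTEs and the % operator, e.g. PostgreSQL or SQLite) where the CTE plays the loop, the WHERE in the recursive member is the loop condition, and the final filter acts as the conditional over the accumulated "array" of rows:

    -- 'Loop' from 1 to 10, then keep only the even values.
    WITH RECURSIVE seq(n) AS (
        SELECT 1
        UNION ALL
        SELECT n + 1 FROM seq WHERE n < 10   -- loop condition
    )
    SELECT n FROM seq WHERE n % 2 = 0;       -- conditional filter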


OK. My definition is that it should be able to add two integers together and give you a result somehow. So in SQL:

    select 1+1; -- Result: 2
In HTML: not possible.

That's the key difference.


Because "programming language" is an adjective or a descriptive term. Whatever looks like a programming language, can be called a programming language.


Also, SQL is not Turing complete. I see it more as a descriptive language; e.g., HTML is a language but not a programming language.


This is completely wrong. The SQL spec isn't Turing complete, but multiple DBs provide Turing-complete extensions (like PL/pgSQL) that make it so. Also, even without the extensions, it is still very much a programming language by any definition of the term. Like most takes on SQL, it is more about your understanding (or lack thereof) of SQL.
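For instance, a minimal sketch of an explicit loop in PL/pgSQL (PostgreSQL's procedural extension; the function name here is just for illustration, and other DBs have their own equivalents such as PL/SQL or T-SQL):

    -- Sums the integers 1..n with a plain FOR loop.
    CREATE FUNCTION sum_to(n integer) RETURNS integer AS $$
    DECLARE
        total integer := 0;
    BEGIN
        FOR i IN 1..n LOOP
            total := total + i;
        END LOOP;
        RETURN total;
    END;
    $$ LANGUAGE plpgsql;

    SELECT sum_to(10);  -- 55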


I was under the impression that recursive CTEs make the SQL spec Turing complete. Not that it makes any practical difference; it's still very difficult to use for "general purpose" programming, and still very practical for data processing / management.

Last year I read about some database researcher who implemented AoC in pretty standard SQL.


If the spec isn't Turing complete, only individual extensions to the spec, I think it's correct to say "SQL isn't Turing complete".


It can do loops and recursion. It can use as much memory as it is allowed. It can do general programming via functions and stored procedures.


It can't do loops. Unless you're talking about extensions to SQL such as PL/SQL and T-SQL.


It can do loops. Recursive CTEs have been in the standard since SQL:1999.

https://en.wikipedia.org/wiki/SQL:1999
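A minimal sketch of what that looks like in practice (assuming a hypothetical employees(id, manager_id, name) table; syntax per SQL:1999 / PostgreSQL):

    -- 'Loop' up the management chain, starting from employee 42.
    WITH RECURSIVE chain AS (
        SELECT id, manager_id, name FROM employees WHERE id = 42
        UNION ALL
        SELECT e.id, e.manager_id, e.name
        FROM employees e
        JOIN chain c ON e.id = c.manager_id
    )
    SELECT name FROM chain;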


While I agree that we need a word for this type of behavior, hallucinate is the wrong choice IMO.

Hallucinations are already associated with a type of behavior, which is (roughly defined) "subjectively seeing/hearing things that aren't there". This is an input-level error, so it's not the right umbrella term for the majority of errors happening with LLMs, many of which are at the output level.

I don't know what would be a better term, but we should distinguish between different semantic errors, such as:

- confabulating, i.e., recalling distorted or misinterpreted memories;

- lying, i.e., intentionally misrepresenting an event or memory;

- bullshitting, i.e., presenting a version without regard for the truth or provenance; etc.

I'm sure someone already made a better taxonomy, and hallucination is OK for normal public discussions, but I'm not sure why the distinctions aren't made in supposedly more serious works.


I mean, I think you're right that confabulation is probably a more correct technical term, but we all use hallucinate now, so it doesn't really matter. It might have been useful to argue about it 4 or 5 years ago, but that ship has long since sailed. [1]

And I think we already distinguish between types of errors -- LLMs effectively don't lie, AFAIK, unless you're asking them to engage in role-play or something. They mostly either hallucinate/confabulate in the sense of inventing knowledge they don't have, or they just make "mistakes", e.g. in arithmetic, or in attempting to copy large amounts of code verbatim.

And when you're interested in mistakes, you're generally interested in a specific category of mistakes, like arithmetic, or logic, or copying mistakes, and we refer to them as such -- arithmetic errors, logic errors, etc.

So I don't think hallucination is taking away from any kind of specificity. To the contrary, it is providing specificity, because we don't call arithmetic errors hallucinations. And we use the word hallucination precisely to distinguish it from these run-of-the-mill mistakes.

[1] https://trends.google.com/explore?q=hallucination&date=all&g...


Your experience sounds rough.

My experience: I often thought that I didn't have the time to learn (hard) things, only to find out sooner or later that I actually did, and still do.

At work, this usually meant that I was giving myself tighter deadlines than necessary, or that I was putting too much effort into tasks nobody cared that much about. Over time, I learned that it's OK not to put 100% of my energy into the assigned task. Sometimes, it's even encouraged to use that extra energy to learn.

Arguably, I did have the privilege of starting out in salaried European office jobs, where there are more robust boundaries and opportunities. It's obvious how precarious physical work discourages this kind of learning. And reading comments like yours, it's clear how lucky I was to have managers and environments that didn't exploit my eagerness to put pressure on myself.

But if you do have an opportunity to make adjustments, I'd suggest putting less pressure on performing like an athlete, and channeling that energy into learning opportunities instead. Rarely will anyone carve out time for your learning, but they may be responsive to your request or boundaries.


Small correction to your point: they perhaps provide a reason for peer review to happen, but it's scientists themselves who coordinate and provide the actual peer review.

That is, unless ACM and Nature have a different approach to organizing peer review, in which case my correction is wrong. But I believe my point stands for many conferences and journals.

