Hacker News | CrispinS's comments

I suppose it's fitting that an article concerned largely with AI was written largely by AI. (I noticed a lot of GPT-isms.)

I mean, it is mostly solid advice, e.g. asking AI to cite sources (and checking them!) and asking about the assumptions it's making.

And on the subject of automating things or making things more efficient, I'd extend that to a general reminder that just because things are the way they are, doesn't mean they have to be that way.

Which sounds obvious, but it's so easy to get used to a situation in your life that you don't like, but it's not so horrendous that you're motivated to do something about it. And then it just becomes background and you forget that there's the possibility of a better reality.

Speaking from many personal experiences here...


Did you use an AI to tweak/refine your comment? It's:

* Written more formally than the typical HN comment

* Uses uncommon language like "jocular asides" and "whimsical similes"

* Fails to recognize that those mentioned phrases are cliches that people have been using for ages, long before LLMs

In short, recalibrate your AI radar, it's malfunctioning.


Heh, I guess so. It's just an uneasy feeling I can't get rid of. Maybe I'm just being paranoid. Then again, I still wonder whether said greentexts are AI-generated. At the very least, the contents are likely fake.


AI isn't this funny.


Not everything you dislike is AI. I don't see any signs at all of AI authorship.


I actually didn't dislike the premise of the article at all, and agree with some/many of the points (I've even favourited it). It showed a perspective I hadn't explicitly thought of before.

The sentence structures I mentioned in my earlier comment are the ones often associated with AI. Once you start noticing them, you'll find them all over online content. Let me know if you want to learn more; there's a YouTube video on identifying AI comments. I had independently identified many of the same tells myself, which would be very unlikely if they weren't genuinely AI language traits.


I agree with your last sentence, but on the subject of positive portrayals of US armed forces, the studios actually have an incentive to play nice. The DoD will let film productions use real equipment and personnel, but only after vetting the script and making changes as it sees fit.

For example, the Transformers movies: https://www.wired.com/2008/12/pentagon-holl-1/

The general concept: https://en.m.wikipedia.org/wiki/Military-entertainment_compl...


I can't believe a software developer is using an operating system/PDF viewer that isn't patched for security vulnerabilities as major as an RCE.

Unless this was a zero-day, but I would have assumed the article would mention that fact...


I really wish we had details here too, but someone made a good point:

"Hey, you need a PDF viewer with scripts enabled for the digital signing.. can you install Adobe XXX?" would be a good line to get the mark to use a less-than-secure PDF viewer.

But also, since it was the North Korea hacking group, I'm not ruling out a 0-day... hopefully more details will come at some point.


Emojis? For the current weather, a single emoji could convey quite a lot; e.g. a snowflake for sub-60 °F weather (I have a low tolerance for cold), a sun for 60-80, a fire emoji for 80+...

Now, I don't know if anyone truly needs the weather in their terminal prompt, but it is doable.
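For what it's worth, the bucketing described above is only a few lines; here's a rough sketch (the thresholds and emoji choices are just the ones from my example, obviously a matter of taste):

```python
def weather_emoji(temp_f: float) -> str:
    """Map a Fahrenheit temperature to a single prompt emoji.

    Buckets follow the (admittedly cold-averse) scheme above:
    below 60 is a snowflake, 60-80 is a sun, above 80 is fire.
    """
    if temp_f < 60:
        return "\N{SNOWFLAKE}"            # ❄
    if temp_f <= 80:
        return "\N{BLACK SUN WITH RAYS}"  # ☀
    return "\N{FIRE}"                     # 🔥

print(weather_emoji(45))  # ❄
```

Feeding that the current temperature from whatever weather API you like is left as an exercise.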


Microcharts or sparklines are another option. I've seen a few implementations along these lines for shell prompts / shell use.

This might be useful for temperature, humidity, wind, precipitation, and similar measures, either as quantities or timelines.

https://en.wikipedia.org/wiki/Sparkline

https://github.com/deeplook/sparklines

Similar:

https://www.linux-magazine.com/Issues/2016/183/Calc-Conditio...
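A minimal sparkline renderer is only a few lines of Python; this is a from-scratch sketch using the Unicode block-element characters, not the linked library's API:

```python
BARS = "▁▂▃▄▅▆▇█"  # Unicode block elements, lowest to highest

def sparkline(values):
    """Render a list of numbers as a one-line microchart."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1  # avoid division by zero on flat data
    return "".join(
        BARS[int((v - lo) / span * (len(BARS) - 1))] for v in values
    )

# e.g. hourly temperatures
print(sparkline([58, 60, 65, 72, 78, 80, 74, 66]))
```

Piping something like that into a prompt segment gives you a tiny temperature timeline in one character cell per hour.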


Perhaps the RPROMPT would be better; I usually use it to show the time with %T.


curl 'wttr.in/?format=%c'

see the readme here for all options:

https://github.com/chubin/wttr.in


> The second link returned on him was from ADL. No way that's an organic result.

It might be, actually. I understand why you'd think that, but look at the results for other search engines.

Kagi: ADL in 2nd place

Bing: ADL in 3rd place

Yandex: ADL not on the first page, but SPLC[1] is the 6th result

[1]: https://www.splcenter.org/fighting-hate/extremist-files/indi...


This logic kind of fails quickly. I bet you wouldn't use it to show that Tiananmen Square didn't happen by pointing out that all Chinese search engines are in apparent agreement that it didn't.


Well, no, which is why I threw in Kagi and Yandex as well. I can imagine Google and Microsoft altering rankings for certain results for political reasons, but Kagi seems too small to care about that, and Yandex isn't operating from the same political playbook as western corporations.

Now, in defense of your theory, I did double check Kagi and found out that they use Bing and Google for some queries, so the only truly "untainted" one is Yandex, which doesn't have ADL on the first page, or the next five that I checked.

That said, as I mentioned they do surface SPLC, which is similar in tone and content.

Limited sample size, but I think it's still plausible that ADL is an organic result.

I also checked Yahoo, and it has ADL as the third result.

I checked Baidu and Naver, and didn't see ADL, but I assume they're prioritizing regional content.


Does it often happen to you that you talk about AI and, three minutes later, find yourself arguing with every search engine on the planet that it's impossible that someone would say nasty things about your favorite fascist?


Guess it depends on the "algorithm", but if we were still in the PageRank era there's no way in hell ADL or SPLC would be anywhere near the top results for "Alex Jones", considering how many other news stories, blogs, comments, etc. about him exist.


The PageRank era ended almost immediately. Google has had a large editorial team for a long, long time (probably before they were profitable).

It turns out PageRank always kind of sucked. However, it was competing with sites that did “pay for placement” for the first page or two, so it only had to be better than “maliciously bad”.


There's a PEP for adding support for a __pypackages__ directory, similar to node_modules.

https://peps.python.org/pep-0582/

Unfortunately the PEP is from 2018 and is still being discussed. The last post in the comment thread is from February.

https://discuss.python.org/t/pep-582-python-local-packages-d...

That's unfortunate; until I went looking for citations I thought it was further along and actually scheduled for the next Python release.

Looks like there is a pip-alternative that implements the PEP, but I haven't played around with it yet.

https://pdm.fming.dev/
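In the meantime, the lookup the PEP describes can be approximated by hand. This is just an illustrative sketch, not part of the PEP or any tool's actual implementation; it prepends a local __pypackages__ directory to the import path the way PEP 582 proposes the interpreter would:

```python
import sys
import sysconfig
from pathlib import Path

def enable_pypackages(base: Path = Path.cwd()) -> Path:
    """Prepend ./__pypackages__/X.Y/lib to sys.path, PEP 582 style.

    This only mimics the resolution the PEP describes; a real
    implementation would live in the interpreter itself.
    """
    ver = sysconfig.get_python_version()  # e.g. "3.12"
    libdir = base / "__pypackages__" / ver / "lib"
    sys.path.insert(0, str(libdir))
    return libdir

libdir = enable_pypackages()
```

After that, packages installed into `__pypackages__/3.x/lib` (e.g. by PDM) are importable without activating a virtualenv.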


I see comments like this all the time on these sorts of articles, and I have two criticisms:

First, although "AI exhibits racial bias due to biased training data" is far more accurate, I think it's perfectly acceptable to condense that to "AI is racist". Especially in the headline of an article that goes on to explain the issue in detail.

Second, I would say that even racist humans are racist because of bad training data, so if we're fine calling people racist, why not AI?

