
OTOH, Posit funds a lot of development of important packages in the tidyverse and does a lot of community work etc.

So if maintaining RStudio is so much of a burden that it impedes the rest of their work, I don't think it's a bad idea to reduce the amount of work spent trying to compete with VSCode when that's an increasingly tough sell.

I'm not a fan of VSCode personally, but would probably be happy with a tmux setup with a console for R and some minimal output viewer, so people like me should be able to cobble something together that's a workable alternative to Posit.


> so people like me should be able to cobble something together that's a workable alternative to Posit.

There is RKWard [1], available from the repositories of many Linux distributions.

I discovered it recently because I'm currently learning R; few Linux distributions outside the Mandriva family offer RStudio straight from the repositories, and I'm too lazy to download and do a manual install every time.

[1] https://rkward.kde.org/


For many years, Emacs plus ESS was the standard workflow for R users. Vim users have their own plugin. On Windows, Tinn-R was impressive, and last I checked it was still going. Of course there is also Jupyter, PyCharm, and Eclipse, plus every text editor has support for R. If something were to happen to RStudio, there would be many fine alternatives for R.


It always has been (or at least for decades).


Short answer: no

Tape is really complicated and physically challenging, and there are no incentives for people investing insane amounts of time for something that has almost no fan base. See the blog post from some time ago about why you don't want tape.

Edit: https://blog.benjojo.co.uk/post/lto-tape-backups-for-linux-n...


> there are no incentives for people investing insane amounts of time for something that has almost no fan base

Like that has stopped anyone before? :p It probably explains why we haven't seen anything FOSS in that ecosystem yet, though.


Profiles have always been great, but it's kind of unfortunate that this feature seems to be locked behind a sign-in (the link in the article describes the UI as being in the profile menu).

I mean, I've been using about:profiles for ages, but it would definitely be nice to have a bit more polish (e.g. every now and again I forget that a newly created profile is automatically promoted to default).

[edit] Well, it seems I have to eat my words: there's a switch in about:config named "browser.profiles.enabled" that toggles a profiles menu item with some UI that has apparently existed for years. Nice!


Nice to know about that. Odd that it doesn't list any of my existing profiles, though.


You're right. It seems both the UI and the old about:profiles page use the same underlying implementation, but the UI does not pick up any profiles added through the about: page. If you create a new profile from the UI, that will show up in about:profiles (after a restart).


It blows my mind.

I can't have profiles without having a "Sign In" button in my toolbar. Mozilla… please. How is it possible to ship a feature and do that…


I got that suggestion recently talking to a colleague from a prestigious university.

Her suggestion was simple: kick out all non-Ivy League and most international researchers. Then you have a working reputation system.

Make of that what you will ...


Ahh, your colleague wants a higher concentration of "that comet might be an interstellar spacecraft" articles.


If your goal is exclusively reducing the strain on overloaded editors, then that's just a side effect you might tolerate :)


Keep in mind the fabulous mathematical research of people like Perelman [1], and one might even count Grothendieck [2].

[1] https://en.wikipedia.org/wiki/Grigori_Perelman

[2] https://www.ams.org/notices/200808/tx080800930p.pdf


All non-Ivy League researchers? That seems a little harsh IMO. I've read some amazing papers from T50 or even T100 universities.


Maybe there should be some kind of strike rule: say, 3 bad articles from any institution and they get a 10-year ban, whatever their prestige or monetary value. If you let people release bad articles under your name, you are out for a while.

Treat everyone equally. After 10 years of only quality work you get a chance to come back. Before that, tough luck.


I'm not sure everyone got my hint that the proposal is obviously very bad,

(1) because the Ivy League also produces a lot of work that's not so great (i.e. wrong (looking at you, Ariely) or un-ambitious) and

(2) because from time to time, some really important work comes out of surprising places.

I don't think we have a good verdict on the Ortega hypothesis yet, but I'm not a professional metascientist.

That said, your proposal seems like a really good idea, I like it! Except I'd apply it to individuals and/or labs.


You will effectively want a 48GB card or more for quantized versions; otherwise you won't have meaningful space left for the KV cache. Blackwell and above is generally a good idea to get faster hardware support for 4-bit formats (some recent models took a while to ship for older architectures, gpt-oss IIRC).
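Back-of-the-envelope, the KV cache grows linearly with context length, which is why it eats so much of the card. A rough sketch, where the layer/head counts are hypothetical and not those of any particular model:

```python
def kv_cache_bytes(n_layers, n_kv_heads, head_dim, seq_len, bytes_per_elem=2):
    """Approximate KV cache size for a dense-attention transformer.

    Both K and V store n_kv_heads * head_dim values per layer per token;
    bytes_per_elem=2 assumes fp16/bf16 cache entries.
    """
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * bytes_per_elem

# Hypothetical model: 48 layers, 8 KV heads of dim 128, 128k-token context
print(kv_cache_bytes(48, 8, 128, 131072) / 1e9)  # ~25.8 GB for the cache alone
```

At those (made-up but plausible) dimensions, a full-length context takes roughly half of a 48GB card before you've loaded a single model weight.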


This is a Mixture of Experts model with only 3B activated parameters. But I agree that for the intended usage scenario VRAM for the KV cache is the real limitation.


It is immensely frustrating to me that to this day, after a solid century of advances in science and mathematical literacy, we are still implicitly stuck in the mental model of averages.

Reality is fucking far away from averages and we know it. "The economy is doing great/terrible" is an almost worthless indicator unless the person you're talking to actually has business relations reaching into every corner.

Yes, there are interdependencies, but they do not justify pretending that numbers are so expensive we can only print two of them (mean, sd) at a time. Let's finally stop drinking information through a 2-mile straw and instead show high-resolution 2D data at least.

[edit] This is of course not a criticism of the parent or OP; it's a systemic problem that we are all guilty of.


I agree with all of this. Many's the time I've had to tell developers I work with: "don't just look at the mean/median, look at a graph of the full distribution!... then slice your distribution a lot of different ways by all the tags/facets you have and look again at the slices." Often you find that a shift in the mean or median was driven by one particular class of data points that skewed the whole thing. (Looking at you, NVDA.) This is usually a little lecture I give in the context of performance engineering, where it's API response times or whatever, but it applies everywhere.

At the same time - and I think you agree with this and it's probably implicit in your comment - we have to beware of anecdata as well. "Two of my friends asked me for money" means very little, except that your friend group is having a rough time. The meso-scale, your "high resolution 2d data", is where to look if you want a textured picture of what's really going on while at the same time avoiding observer bias. Unfortunately, that kind of data is not always easy to get, or to interpret.
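The "slice before you trust the aggregate" point can be shown with a toy example (the endpoints and timings below are made up for illustration):

```python
from statistics import mean

# Hypothetical API response times in ms, tagged by endpoint
samples = [("/health", t) for t in [20, 22, 21, 19, 23] * 20] \
        + [("/search", t) for t in [30, 500, 480, 35, 510] * 20]

# The aggregate mean suggests everything is uniformly slow
print(f"overall mean: {mean(t for _, t in samples):.1f} ms")

# Slicing by the tag shows one class of requests drives the whole shift
for tag in ("/health", "/search"):
    vals = [t for e, t in samples if e == tag]
    print(f"{tag}: mean {mean(vals):.1f} ms")
```

The overall mean lands around 166 ms, which describes neither endpoint: one hovers at ~21 ms while the other has a bimodal distribution peaking near 500 ms.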


Since it seems unclear what they do and why it matters:

CLOCKSS seems to be an organization designed to make sure scientific content does not disappear, Library of Alexandria-style.

The most important task here is being legally safe, which is why they emphasize Ivy League credentials, their distributed nature, audits, and so on. Technically it's not really difficult (except perhaps for dealing with publisher captchas, heh).

They are legally safe because of this mechanism:

> Digital content is stored in the CLOCKSS archive with no user access unless a “trigger” event occurs.

All in all I think it's absolutely necessary.


And that relates to whole journals, not single papers, apparently.

https://clockss.org/triggered-content/


An interesting read.

This kind of problem is exactly what statistics is designed for, and it makes me a bit sad that we are left with a bit of a shoulder shrug. It's absolutely possible to do a much better job of disentangling possible causes here with something as simple as a multilevel regression. (Although, OK, proper causal inference would be more work.)
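For illustration, the core idea behind a multilevel model, partial pooling of group-level estimates toward the grand mean, can be sketched in a few lines. This is a toy shrinkage estimator on made-up data, not the full regression machinery:

```python
from statistics import mean, variance

def partial_pool(groups):
    """Shrink each group's mean toward the grand mean (random-intercept idea).

    Groups with little data or high within-group noise get pulled harder
    toward the overall mean; large, precise groups mostly keep their own.
    """
    grand = mean(v for g in groups.values() for v in g)
    within = mean(variance(g) for g in groups.values() if len(g) > 1)
    between = variance([mean(g) for g in groups.values()])
    pooled = {}
    for name, g in groups.items():
        n = len(g)
        w = (n / within) / (n / within + 1 / between)  # precision weighting
        pooled[name] = w * mean(g) + (1 - w) * grand
    return pooled

groups = {"lab_a": [8, 10, 12], "lab_b": [18, 20, 22]}
print(partial_pool(groups))  # each estimate nudged toward the grand mean of 15
```

A real multilevel regression (e.g. a random-intercept model) does this same precision-weighted compromise between "every group is identical" and "every group stands alone", while also handling covariates.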

