Hacker News | rdubz's comments

78.0.1 doesn't look yanked to me https://pypi.org/project/setuptools/78.0.1/

afaict they just released 78.0.2 with a revert of the offending bits


The graphs a couple folds down seem to justify the "alarmist" tone.


I learned about parquet2json from HN recently, and would use it for this:

```bash
parquet2json cat <file or url> | grep ...
```

https://github.com/jupiter/parquet2json


Interesting; I've had a much easier time pairing (and felt benefits like you describe) over Zoom. Pairing by actually having two people at one desk never felt very comfortable, but screen-sharing with each person on their own monitor etc. works great, IME. I'm curious what you've tried / why it hasn't worked...


Back in the day, about 15 years ago now, when I was pair-programming every day and trying to teach new people how to do it, everyone was awkward at first. People who were Vim experts suddenly started stumbling over basic tasks. All of them who persisted eventually got over the awkwardness. Not all of them enjoyed the pair programming, but the ones who I talked to later found that it wasn't as awkward after a bit.


I had saved the chart here; a Google reverse image search led me to this article https://streets.mn/2014/05/22/chart-of-the-day-travel-effici... which says it's from Scientific American.

There were a few other hits as well. This one mentions the Steve Jobs quote and corroborates "Scientific American, 1973" https://www.smestrategy.net/blog/using-the-6-thinking-for-st...


> This is a pure panic run on the bank though, irrational and counter-productive. The bank was not insolvent and could have been fine if everyone didn't withdraw all at once.

This is incorrect. SVB converted deposits to risky paper that lost value. They were insolvent.

The fact that the risky paper would return its promised 1%/yr, if everyone just waited 10 years, is a canard. SVB's depositors could get 4% elsewhere, today. Asking them to sit tight at 1% is the same haircut as liquidating that paper at a loss today (which is what happened).
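To put a rough number on that haircut, here's a sketch with made-up but representative figures (a 10-year, 1%-coupon bond repriced at a 4% market rate; this is not SVB's actual portfolio):

```python
# Rough illustration of the "same haircut" point: sell a 1% bond today at
# market rates vs. hold it to maturity while alternatives pay 4%.
# All figures are illustrative assumptions.
coupon_rate = 0.01
market_rate = 0.04
years = 10
face = 100

# What the 1%-coupon bond is worth today, discounted at the 4% market rate.
pv = sum(face * coupon_rate / (1 + market_rate) ** t for t in range(1, years + 1))
pv += face / (1 + market_rate) ** years
print(f"market value: {pv:.1f} per 100 face, a {1 - pv / face:.0%} loss if sold today")

# The alternative "haircut": sit tight earning 1% while deposits could earn 4%.
ending_ratio = (1 + coupon_rate) ** years / (1 + market_rate) ** years
print(f"hold to maturity instead: end up {1 - ending_ratio:.0%} behind the 4% option")
```

Either way it comes out to roughly a quarter of the value, which is the sense in which "just wait 10 years" is the same haircut.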


Is this accounting for inflation?

4% inflation per year (the most common estimate I see in the US) means 3x over 30 years doesn't even break even (1.04**30 ≈ 3.24).
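In real terms, under those same assumptions, the "gain" is actually slightly negative; a quick check:

```python
# Real (inflation-adjusted) return implied by a 3x nominal gain over 30 years,
# assuming 4% annual inflation as above.
nominal_multiple = 3.0
inflation = 0.04
years = 30

nominal_cagr = nominal_multiple ** (1 / years) - 1           # ~3.73%/yr
real_cagr = (1 + nominal_cagr) / (1 + inflation) - 1         # ~-0.26%/yr
real_multiple = nominal_multiple / (1 + inflation) ** years  # ~0.93x

print(f"nominal {nominal_cagr:.2%}/yr, real {real_cagr:.2%}/yr, "
      f"purchasing power after {years} years: {real_multiple:.2f}x")
```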

I often wonder how much people obsessed with home prices rising (in the US at least) take this into account. How much housing mania is fueled by people getting excited about gains that aren't as real as they think?


I did some very rough spreadsheet work, collecting average UK house prices [0] and rates of inflation [1], and tried to figure this out. Starting with January 2007's average price of GBP 176,758 and repeatedly applying the annual inflation rates (sketched below), I end up with:

* Jan 2007 price (inflation adjusted to 2022): GBP 243,199

* Jan 2022 price (actual): GBP 273,762
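The compounding itself is just a running product; a sketch of what the spreadsheet does (the rates below are placeholders, not the actual figures from [1]):

```python
# Compound the January 2007 average price forward by each year's inflation
# rate. The rates here are placeholders; the real calculation used the
# annual UK figures from [1].
price_2007 = 176_758
annual_inflation = [0.024, 0.036, 0.022]  # one entry per year, 2007-2021

adjusted = price_2007
for rate in annual_inflation:
    adjusted *= 1 + rate

print(f"GBP {adjusted:,.0f}")  # with the real 2007-2021 rates this lands around 243,199
```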

So it seems that in the UK at least the prices do seem to grow ahead of inflation. And the starting price we're talking about here is after a long, sustained housing bubble and was already quite unaffordable for many. Further still I think many people's wages have kept pace with inflation.

So looking at the raw price changes doesn't tell the whole story, but the whole story is still quite grim.

[0] = https://www.statista.com/statistics/751605/average-house-pri...

[1] = https://www.worlddata.info/europe/united-kingdom/inflation-r...


> Further still I think many people's wages have kept pace with inflation.

Inflation: 243,000 / 177,000 - 1 ≈ 37%

House: 274,000 / 177,000 - 1 ≈ 55%

In 2007 the yearly wage for a 22-29 year old was 20,000 pounds; in 2021 it was 26,000:

https://www.statista.com/statistics/802196/full-time-annual-...

26,000 / 20,000 - 1 = 30%

If you try with a higher income bracket, say 30-39 (26,000 and 33,000 respectively):

33,000 / 26,000 - 1 ≈ 27%
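Putting those growth rates side by side (same rounded figures as above):

```python
# House-price growth vs. wage growth, 2007 -> 2021/22, using the rounded
# figures above (GBP thousands).
house_growth = 274 / 177 - 1        # ~0.55
wages_22_29 = 26 / 20 - 1           # 0.30
wages_30_39 = 33 / 26 - 1           # ~0.27

print(f"houses {house_growth:.0%}, "
      f"{house_growth / wages_22_29:.1f}x the 22-29 wage growth, "
      f"{house_growth / wages_30_39:.1f}x the 30-39 wage growth")
```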

I think we can say that average houses have appreciated almost twice as much as average wages over these 15 years, which is the essence of the craziness about houses not being affordable to most. And, as you said, it's not as if houses were cheap in 2007; data for a longer period shows even more clearly how young people could actually buy a house then, whereas now it has become impossible:

https://landregistry.data.gov.uk/app/ukhpi/browse?from=1990-...


Ah damn, I messed up part of my comment; it should've read:

> Further still I don't think many people's wages have kept pace with inflation

But yeah either way - houses were expensive, and have only gotten more unaffordable as inflation outpaced wage growth and house prices outpaced both. Wild.


This is a great point that I often make. Not many people take it into consideration. Now, inflation has been lower than 4% for the last 10-20 years, but the point stands.


"Each year, 1.35 million people are killed on roadways around the world."

https://www.cdc.gov/injury/features/global-road-safety/index...

Seems to me that traffic deaths are "caused by" humans... I'm not totally surprised they decided those don't count, but I feel that's wrong.


Well... cars are the dominant species on the planet, so it makes sense not to count them as human-caused deaths.


s/on the planet/in the US/


The US is 6th, so not far off, but it has fewer cars per person than the rampant car culture of... Iceland and New Zealand?

https://en.wikipedia.org/wiki/List_of_countries_by_vehicles_...


Even in Europe, cars are quite common for everyone. Maybe less so for the middle class in big cities, but the traffic in London and Paris is quite terrible.


The US is a pretty disparate place. Here you go:

https://en.wikipedia.org/wiki/List_of_U.S._states_by_vehicle...

If you compare to Iceland and New Zealand, you have to compare with rural US states. But still, not as bad as I thought TBH.


In many of those countries there are so many cars because they’re needed during peak tourist season

And guess where those tourists are from


Probably excluded because they're deemed unintentional. Of course the mosquitoes don't kill us intentionally either.


If mosquitoes are being credited with all the deaths by illness that they cause, it seems fair to credit humans for unintentional deaths also.


By that logic pretty much every accidental death that's a result of the modern world would be "caused by humans": house fires, heavy equipment mishaps, cancer from exposure to exotic man-made substances, overdoses, icy staircases; it's a long, long list.

Seems to me that adding all that crap in would make the category so broad as to defeat the point of categorizing. IDK if that was your point or if you just wanted your pet issue in the category.


We have many ways to lower car accidents and deaths in a country like the US, yet little effort goes into public transportation. Not counting those deaths wouldn't make sense unless real effort had been put in to lower the numbers. An accidental house fire isn't the same thing at all: nobody made major decisions to purposefully allow sprawl and withhold funding from the things that would prevent house fires.


... old age.


Interesting case, but some of these are more like suicides (intentional or otherwise), and I think we were talking about one organism killing another. But still, you're correct: some significant portion of these must be one human killed by the actions of another.


Back of envelope:

The 10MB estimated size came from [100 bytes per row] * [100k rows].

50 of the bytes per row were "description", which should compress well (2-3x, I'd guess).

40 bytes per row were the IPFS ID/hash, IIUC. I assumed this is like a Git hash, 40 hex chars, which is really just 20 bytes of entropy.

He also estimated 14 bytes for the size (stored as a string representation of a decimal integer, up to 1e15 - 1, or 1PB?). That's about 50 bits or 6-7 bytes, as a binary integer. Sizes wouldn't be uniformly distributed though so it would compress to even fewer bytes.

So if SQLite was smart (or one gzips the whole db file, like you did), it makes sense that a factor of 2 or so is reclaimable.
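A quick sketch of that arithmetic (the per-field sizes are the estimates above; the 2.5x text-compression ratio is my guess):

```python
# Back-of-envelope check of the estimates above. All numbers are rough
# guesses from the comment, not measurements.
rows = 100_000

# As-stored estimate: ~100 bytes per row.
naive_row = 50 + 40 + 14          # description + hex hash + size-as-string
naive_total = rows * naive_row    # ~10 MB

# What a smarter encoding (or gzipping the whole file) might reclaim:
desc = 50 / 2.5                   # text compresses maybe 2-3x
ipfs_hash = 40 / 2                # 40 hex chars are only 20 bytes of entropy
size_field = 7                    # ~50 bits as a binary integer
compact_row = desc + ipfs_hash + size_field
compact_total = rows * compact_row

print(f"naive:   {naive_total / 1e6:.1f} MB")    # ~10.4 MB
print(f"compact: {compact_total / 1e6:.1f} MB")  # ~4.7 MB, roughly half
```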


if you are "cutting and pasting from the notebook into a .py file" you should look at `jupyter nbconvert` on the CLI.

I think there are ways to feed it a template that basically metaprograms what you want the output .py file to look like (e.g. render markdown cells as comments vs. just removing them), but I've never quite figured that out.
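The same basic conversion is also available from Python; a minimal sketch (the notebook filename is made up, and this skips the template customization mentioned above):

```python
# Convert a notebook to a .py script via nbconvert's Python API.
# "analysis.ipynb" is just an example filename.
from nbconvert import PythonExporter

exporter = PythonExporter()
source, _resources = exporter.from_filename("analysis.ipynb")

with open("analysis.py", "w") as f:
    f.write(source)
```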

