fzingle's comments | Hacker News

So is insulin. But at least in the US, it is expensive.


Well, not so much anymore. It's capped at $35/mo if you're on Medicare, an ACA plan, or most other sources of coverage.

https://investor.lilly.com/news-releases/news-release-detail...

I'd also mention that most of the insulin these days is not extracted from animals as it once was. One can make arguments for/against this, but for safety reasons almost all of it is now made with biopharmaceutical (bacterial recombinant) processes.


It is my understanding that the expensive insulin is actually more advanced than the insulin we had a couple of decades ago, which makes it easier to dose correctly and improves therapeutic outcomes.


That was due to cartel behavior (and now should be limited by the recent drug capping initiative) - elsewhere in the world insulin is extremely affordable.

Insulin is an interesting case because the insulin you get today is manufactured using a different process than the original batches, which has let manufacturers skirt both generic alternatives and the original patent.


Off patent it's cheap, but the on-patent stuff is way better, so it's what everyone wants.


I believe it's expensive for the delivery system, not the insulin itself. (Could be wrong).


Our economic model encourages this kind of race-to-the-bottom enshittification of everything. Unfortunately there are no high-tech solutions to this problem. The technology we need to improve is our political/economic system.

Perhaps with wealthy-country populations projected to fall dramatically we will finally be forced to find a way other than "growth" to value human endeavour. That would be the most likely path to a solution, though I fear it will be rather painful.


Our economic model (is supposed to) boil down to producing our goods and services using the least amount of resources. Sure, that yields planned obsolescence and enshittification, but also cheap multi-GHz laptops and widespread Internet availability.


> Does it make sense to degrade the performance of good software teams because bad software teams exist?

Consider the classic statistic "most drivers think they are above average".

I posit that the same is true of software teams: almost every team will self-assess as above average, i.e. good. Those teams will then imagine that, being good, they build quality into the process, and so very little verification or QA needs to be done.

I have worked as a software consultant for 15 years now. I've worked with at least 40 separate software teams in that time. Every single team manager would pep talk with "this is the best team I've ever seen". Some of this is obviously blowing smoke to get people to work harder and feel good. But over the years I've had candid conversations with managers and realized that most of the time they genuinely think their team is really good, truly top 10-20%.

Here's the rub. Being a consultant, I'm almost always brought in by higher level management because something is going horribly wrong. The team can't deliver quickly. The software they deliver is bug ridden. They routinely deliver the wrong software (i.e. incorrect interpretation of requirements.)

Oftentimes these problems are not only the fault of the development team; management has issues too. But in every single case, the development team is in dire straits. They have continuous integration, sure, and unit tests, and nightly builds, and lots of green check marks. But the unit tests test that the test works. The stress tests have no realistic basis for the expected load. The continuous integration system builds software, but it can't be deployed in that form for x, y & z reasons, so production has a special build system, etc...
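
To make the "unit tests test that the test works" point concrete, here is a minimal sketch (hypothetical Python, not from any real client) of the kind of vacuous test I mean: it runs the code, passes, and inflates coverage while verifying nothing.

    def apply_discount(price, percent):
        # Bug: returns the discount amount, not the discounted price.
        return price * percent / 100

    def test_apply_discount():
        result = apply_discount(100, 10)  # returns 10; should be 90
        # Executes every line (100% coverage) but can never fail,
        # so the bug ships behind a green check mark.
        assert result is not None

A black-box tester who simply tried a $100 item with a 10% discount would catch this on the first attempt.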

In 15 years I have never once encountered a team that would not benefit from a QA team doing boring, old-school, black-box manual testing. And the teams that most adamantly refuse to accept that reality are precisely those that think they are really top tier because they have 90+% unit test coverage, use agile and do nightly builds.

So, my question is, do you (I don't mean the specific "you" here, rather everyone should ask themselves this, all the time) think that most bad software teams know they are bad? Including the one you are part of? Would it really hurt to have some ye olde QA, just in case, you know, you are actually just average? :)


I'm curious: in your many years of being a consultant to these bad teams, where the manager really thought they were top 20%, did you get a chance to talk to the rank-and-file team members, and did they paint a very different picture of the team health and software quality than their manager?

Also, did you run across any orgs where they basically refused to use a process like Agile, and instead just did ad-hoc coding, insisting that this was the best way since it worked just fine for them back when they were a 5-person startup?


Not parent, but in my experience as a consultant working with bad teams, the rank and file were 'doing the job.'

You usually had a few personality archetypes:

- The most technical dev on the team, always with a chip on their shoulder and serious personality issues, who had decided to settle for this job for (reasons)

- The vastly undertrained dev who was trying to keep up with the rest of the team, but would eventually be found out and tossed, usually to blame for a major issue

- The earnest and surprisingly competent meek dev, who presumably didn't have enough confidence to apply to a better job, but easily could have made it on merit, work ethic, and skill

- The over-confident dev who read a bit of SDLC practice, and could see every tree while missing the forest

The key is that, aside from the undertrained dev, they had all been working there for a while. Consequently, there wasn't good or bad health and quality: there was just "the system" (at that company) and dealing with it.

And none of these folks ever worked at 5-person startups. ;) I think it was definitely more an issue of SDLC "unknown unknowns" (practices they didn't know they should be doing) than willful decisions not to.


> I'm curious: in your many years of being a consultant to these bad teams, where the manager really thought they were top 20%, did you get a chance to talk to the rank-and-file team members, and did they paint a very different picture of the team health and software quality than their manager?

Yes, generally I join teams and work as an engineer or sometimes as a team lead, so I'm talking to all the team members.

Most startup teams are composed of junior developers, often pretty smart people, usually with 5 or fewer years of experience. Many times these are people who have already accomplished stuff they didn't think they could do. So that generally means that, yes, they think pretty highly of themselves. To a degree it is quite justifiable: they tend to be very accomplished, but in a narrow domain. Unfortunately they don't realize that their technical accomplishments in a specific field do not mean that they are experts everywhere. Their managers understand that these are smart people and assume, again, that this is therefore a good team.

Non-startups that I join are usually just plain dysfunctional.

> Also, did you run across any orgs where they basically refused to use a process like Agile, and instead just did ad-hoc coding, insisting that this was the best way since it worked just fine for them back when they were a 5-person startup?

Usually more the opposite. In my experience I come across teams that are sure they must not need any help because they follow all the rules in Scrum and have great code coverage metrics.

It is really common to see this kind of thing. I call it "the proxy endpoint fallacy". It can crop up anywhere something can be measured. In that example, it would be confusing adherence to Scrum with having a working SDLC, or confusing code coverage metrics with the objective of having bug-free releases.

This isn't a software-only fallacy. In politics, GDP is often confused with societal well-being. Always be wary of your metrics, and change them as required to keep them tracking your actual goals.


Depending on the shape of the distribution, most drivers could be above average. Average doesn't imply 50th percentile; that's what the median is for. A minority of tremendously poor drivers could certainly mean that most drivers are in fact better than average, in the same way that my friends, on average, have more friends than I do.
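
A toy illustration in Python (made-up numbers, just to show the mean/median gap):

    # Nine decent drivers and one terrible one: the outlier drags the
    # mean down, so 9 out of 10 drivers really are above average.
    scores = [9] * 9 + [0]
    mean = sum(scores) / len(scores)       # 8.1
    above = sum(s > mean for s in scores)  # 9
    print(f"mean={mean}, above average: {above}/10")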


> Have never seen a company so quickly and completely just throw away all of their public good will.

This behaviour isn't really all that uncommon. An obvious recent example would be Boeing with the 737 Max.

To save a few bucks they've thrown away a quality process that was best-in-class. Only cost a few billion a few years later...

Our implementation of capitalism rewards only short-term thinking; that is why this behaviour persists.


This article about the state of general computer knowledge of university students might shed some light on things: https://www.theverge.com/22684730/students-file-folder-direc...

Not only do students come into university (and sometimes even into CS) not knowing what a file system is, many of them have a total lack of interest in learning what they perceive to be pointless.

I'd argue it is going to be pretty difficult to engage with any of those foundational topics if you aren't willing to engage with the basic metaphor of most operating systems: files and directories.


Maybe they are considering that we shouldn't build and optimize our society solely for the purposes of maximizing revenue.

Would it be better to live in a world where Twitter (for example) existed because it is a useful thing and not because it might make lots of money?


Doesn't it lose a lot of money? And its usefulness is directly correlated with its current massive usage.


It ran about break even until very recently.


You should reduce your consumption as much as possible. In the case of cars, buy a used one when you need one.

Don't give your business to companies that you don't think have earned it.


Does anyone know the technical details of the decompilation? What tools were used, what problems they ran into, etc?


This is the first episode of a documentary series called Funny Business [1].

The fifth episode is also on Youtube [2]. Does anyone have links to the others? I haven't been able to find any...

Edit: found the fourth [3] and sixth [4].

[1] https://en.wikipedia.org/wiki/Funny_Business_(TV_series)

[2] https://www.youtube.com/watch?v=bICuqEHWS8Q

[3] https://www.youtube.com/watch?v=3PNeOifT6kI

[4] https://vimeo.com/438865608



> AGI is inevitable because computation is universal and intelligence is substrate independent.

What the article is asking, albeit obliquely, is: how do we know that to be true?

It is very difficult to prove when definitions for intelligence and consciousness are fuzzy and not widely agreed upon.


I see people assert this all the time. Intelligence is the ability to achieve goals. Consciousness is the contents of what you are aware of.

The definitions are irrelevant. People want a definition to do demarcation. But demarcation is boring. I don't care whether some threshold entity is on this or that side of the line. The existence of some distinct territories is enough for me.

There is no reason to imagine AIs are prohibited from occupying territory on both sides of the line.

AIs aren't climbing up a ladder; non-local exploration is possible.


> Intelligence is the ability to achieve goals

A more precise definition, from François Chollet: "The intelligence of a system is a measure of its skill-acquisition efficiency over a scope of tasks, with respect to priors, experience, and generalization difficulty."

https://arxiv.org/pdf/1911.01547.pdf

So a system is more intelligent if it can solve harder tasks with fewer trials and less prior knowledge. Intelligence is always defined over a scope of tasks; for example, human intelligence only applies to the space of tasks and domains that fit within the human experience. Our intelligence does not have extreme generalization, even though we have broad generalization (defined as adaptation to unknown unknowns across a broad category of related tasks).


I read it more as asking about deep-NN models specifically.

If it really was trying to suggest that computation isn't universal or that our intelligence is non-physical or something, that would be a whole different problem.

