No, it's really, really not. It would be nice if we could have a discussion of acceptable levels of risk versus inefficiency without immediately taking a left turn into "Soylent Green is People!" territory.
You could start by telling us why the statements aren't the same, using examples and facts.
You seem to want to have an abstract, bloodless, clinical discussion of when it is acceptable to kill people in the name of "efficiency" (I will interpret this as "profit").
I can tell you that is a dishonest discussion, if that's what is intended. If you want to have it honestly, you will first have to demonstrate an understanding of exactly, in detail, the most evil things that will happen as a result, and the most devastating harms that will be shielded from proportionate legal recourse under your proposed or imagined regime.
This is a non-statement intended to deflect into the abstract and away from actual consequences.
Policies and regulations aren't made in some Platonic ideal universe; they are made in specific, factual circumstances.
Come back to us when you can explain how a specific "acceptable" level of risk, in a specific policy area, does not amount to enabling evil actors and indemnifying those who do the harm.
Then we'll talk about the specific things you find acceptable, and just whose death and suffering you will trade for profit.
I don't read their statements the same way you do. The way they come across to me is that there needs to be a discussion of risk grounded in the understanding that in many (most?) domains, zero risk is unattainable.
It's like the idea of the FDA setting limits on the amount of insect contaminants allowed in food. At first it seems disturbing, until you realize that if they put a zero-tolerance policy in place, there would be near-endless grounds to sue every food manufacturer out of business.