
    P(B|I saw E, P) = P(I saw E|B,P) * P(B|P) / P(I saw E|P)

    P(B|E was false, I saw E, P) = P(E was false|B,I saw E,P) * P(B|P,I saw E) / P(E was false|P, I saw E)
This is a pretty basic application of Bayes' theorem.
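As a minimal numeric sketch of the two updates above (all probabilities are made-up, purely illustrative values; the variable names mirror the formulas, not any real data):

```python
def bayes(likelihood, prior, evidence_prob):
    """P(B|E) = P(E|B) * P(B) / P(E)."""
    return likelihood * prior / evidence_prob

# Assumed numbers for illustration only:
p_b = 0.3                # P(B|P): prior belief in B given background P
p_sawE_given_b = 0.8     # P(I saw E | B, P)
p_sawE = 0.5             # P(I saw E | P)

# First update: condition on "I saw E".
p_b_given_sawE = bayes(p_sawE_given_b, p_b, p_sawE)   # 0.8 * 0.3 / 0.5 = 0.48

# Second update: later condition on "E was false" as well.
p_false_given_b_sawE = 0.1   # P(E was false | B, I saw E, P)
p_false_given_sawE = 0.4     # P(E was false | P, I saw E)
p_b_final = bayes(p_false_given_b_sawE, p_b_given_sawE, p_false_given_sawE)
# 0.1 * 0.48 / 0.4 = 0.12
```

The point of the two-step form is that the posterior of the first update becomes the prior of the second, so learning "E was false" does not require throwing the first update away.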


Love it: p(I saw E) and p(I didn’t really see E).

Just move the argument one level down: "I saw E" is false, and it turns out so is "E was false". So then? Add "'E was false' was false"?

Turtles all the way down.

At some point something has to be “true” in order to conditionalise on it.


I believe you can condition on a probability of a proposition.

For example, suppose you are in a fairly dark room and you observe, with 90% confidence, a red object. Then you can do (IIRC): P(X | 90% confidence see red object) = 90% * P(X | see red object) + 10% * P(X | do not see red object)

I would think that, in principle, this allows all observations to be fallible without any kind of "infinite regress" problem? You just apply the same kind of process each time.
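A small sketch of that mixing rule (this is Jeffrey-style conditioning; the conditional probabilities 0.7 and 0.2 are assumed values, not from the original comment):

```python
def uncertain_update(p_x_given_e, p_x_given_not_e, confidence):
    """Update P(X) on an uncertain observation of E:
    P(X) = c * P(X|E) + (1 - c) * P(X|not E),
    where c is the confidence that E was actually observed."""
    return confidence * p_x_given_e + (1 - confidence) * p_x_given_not_e

# Assumed: P(X | see red object) = 0.7, P(X | do not see red object) = 0.2,
# and 90% confidence in the observation.
p_x = uncertain_update(0.7, 0.2, 0.9)   # 0.9 * 0.7 + 0.1 * 0.2 = 0.65
```

Because each fallible observation is folded in as a weighted mixture rather than a hard fact, the same function can simply be applied again for the next uncertain observation, which is why no regress of "E was false was false..." propositions is needed.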


Yes, sure. Here are a few truths that never disappointed me:

There is an absolute universal truth.

Absolute universal truth, as a whole, is unreachable even to the most intelligent and resourceful human that will ever exist.



