Moonwood the Hare wrote: If we are dealing with all swans everywhere then I just don't see how we get any probability about them at all.
Well, at this point we have explored just about everywhere. Keeping in mind that an animal can only be so different and still reasonably be called a swan (as opposed to, say, a penguin), we can make reasonable statements about the habitats a swan could live in. This doesn't mean a relative of a swan couldn't adapt to a very different environment, and this line of thought pretends there are clear lines between species, which isn't exactly true... but I digress. The upshot is that we don't have to worry about sampling bias introduced by, for example, underwater caves we haven't explored. Given the way we define a species, a swan on another planet is pure fantasy: a swan-like being that evolved on another planet would not be a swan, given their separate evolutionary histories. So non-earthly swans would require actual swans being transported there, and since we can't do that now, it would have to be done by someone in the future. All of that means the "everywhere" in question is actually quite finite and well sampled. But, clearly, determining the appropriate bounds to place on "everywhere" isn't always easy. For example, we have to realize that we have been talking about naturally occurring swans; genetic modification could create various shades of gray, if not other colors entirely.
Moonwood the Hare wrote: I am aware of Bayesian ideas but I really know nothing about them. I know Chalmers put a chapter on that in the third edition of 'What is this Thing Called Science' and drew very negative conclusions about their usefulness. I'd like to understand more but not sure I would grasp it.
The basic idea is that what you already know about the world matters. You take your assumptions (prior probabilities) and use them to make (hopefully) more accurate probability statements, including updates to your original assumptions. By iterating this process, errors in the original assumptions should get corrected.
I can't think of a good concrete example for this, so please forgive me for staying abstract at first. Imagine you have a true dichotomy: x and y. Classically, if we run a hypothesis test and determine that our observations are improbable given x, we should reject x. Bayes says we should first consider whether y is even possible. You see, improbable things sometimes happen: maybe x is true and we just got unlucky with our sampling. If y is completely impossible, then no matter how improbable our observations are, we should still not reject hypothesis x. Now, that is the extreme case; a y that is possible but less probable than our observations given x would still lead to not rejecting x. We can also consider cases where there isn't a true dichotomy, so maybe we have to factor in the prior probability of z as well. That complicates things, but the calculations can be done.
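To make the dichotomy concrete in code (a minimal sketch; the 0.01 and 0.5 likelihoods below are numbers I invented purely for illustration):

```python
def posterior_x(prior_x, p_data_given_x, p_data_given_y):
    """Bayes' theorem for a true dichotomy x/y: returns P(x | data)."""
    prior_y = 1.0 - prior_x
    evidence = prior_x * p_data_given_x + prior_y * p_data_given_y
    return prior_x * p_data_given_x / evidence

# Classical view: the data are improbable given x (P = 0.01), so reject x?
# If y is impossible (prior for x is 1.0), x survives no matter what:
print(posterior_x(1.0, 0.01, 0.5))    # 1.0

# If y is possible but itself very improbable, we can still favor x:
print(posterior_x(0.999, 0.01, 0.5))  # ~0.95
```

The point the sketch makes is the one above: how improbable the data are given x only matters relative to how probable the alternative was to begin with.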
A simple example of how prior probabilities are used in the real world:
Let's say you are randomly screened for a very rare disease that occurs in 1 person in a million. The test claims to be 99.9% accurate. You test positive. A good doctor isn't overly concerned and tells you that more tests are needed. Why? Aren't the odds 999 to 1 that you have it? Since you were screened at random, your prior probability is 1 in a million, the occurrence in the population. And 99.9% accuracy means that 0.1% of the time, someone who doesn't have the disease will test positive anyway. To make the math a little easier, let's say the test never tells you that you don't have the disease when you do, so it is 100% accurate with regard to that type of error.

Rather than go through the mathematical formulas, it is easier to think about what you would expect to happen if 1 million people were tested. One of them will have the disease and will test positive. 999,999 people (from here on taken as 1 million for simplicity) don't have it, and 0.1% of them, about 1000, will test positive anyway. Now we have 1001 positive tests, only 1 of which is a true result. So, rather than 999-to-1 odds in favor of you having the disease, the odds are roughly 1000 to 1 in favor of you not having it.

Depending on whether the test errors are random or due to some confounding factor particular to certain patients, your doctor will re-run the same test or run a different test that looks for the disease in a different way. Now, in interpreting the next round of results, your prior probability is no longer 1 in a million but roughly 1 in 1000. If the next test comes back positive, it is time to be concerned, because your probability of actually having the disease is now about 0.5: still worth another test, but also time to start talking about treatment options and likely outcomes. Indeed, depending on the side effects, it may be reasonable to go ahead and start treatment without further testing.
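The counting argument above is short enough to write out (a sketch using the same simplifying assumptions: 1-in-a-million prevalence, 0.1% false-positive rate, no false negatives):

```python
# Expected outcomes if 1 million people are screened.
population = 1_000_000
sick = 1                             # expected true cases
healthy = population - sick
true_positives = sick                # assumption: the test never misses a real case
false_positives = healthy * 0.001    # 0.1% of healthy people test positive anyway

# After one positive test:
p_sick = true_positives / (true_positives + false_positives)
print(p_sick)       # ~0.001, i.e. roughly 1000 to 1 that you're healthy

# Second, independent positive test: the prior is now ~1/1001, not 1/1,000,000.
posterior2 = p_sick * 1.0 / (p_sick * 1.0 + (1 - p_sick) * 0.001)
print(posterior2)   # ~0.5
```

Note that nothing about the test changed between the two rounds; only the prior did, and that is what moves the answer from 0.1% to 50%.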
To tie that into the earlier example: this case is a true dichotomy, but having the disease isn't impossible, only improbable. Hence, observing a positive test result under the null assumption that you are healthy isn't probable, but it is still more probable than your actually having the disease.
A lot of people don't like this approach because you are not just admitting that you bring a biased assumption with you, but deliberately using it in your calculations. The general approach in science is to try not to take your biases into the lab. These criticisms have merit, but I don't see how anyone could dismiss the approach as useless. It is important to be aware of its limitations, and the best practice is to run the numbers with an array of starting probabilities to make sure your results aren't driven by the starting values.
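Here is what I mean by trying an array of starting probabilities, as a quick sketch (the 0.8 and 0.1 likelihoods are invented for illustration):

```python
def posterior(prior, p_data_given_h, p_data_given_not_h):
    """P(hypothesis | data) for a given starting prior."""
    evidence = prior * p_data_given_h + (1 - prior) * p_data_given_not_h
    return prior * p_data_given_h / evidence

# Sweep the prior and see whether the conclusion is robust to it.
for prior in (0.01, 0.1, 0.5, 0.9, 0.99):
    print(prior, round(posterior(prior, 0.8, 0.1), 3))
```

If the posterior points the same way across a wide range of priors, the conclusion isn't an artifact of the starting value; if it flips around, you know the prior is doing most of the work.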