Friday, July 18, 2008

Oooooooooklahoma

This one you just have to see to believe.
Oklahoma County, Oklahoma Commissioner Brent Rinehart is facing a tough reelection campaign. He's been accused of abusing his office for personal gain, and will go on trial in the fall on felony campaign finance charges. But apparently, this is all a conspiracy of homosexuals, liberal do-gooders, and good ol' boys to force Rinehart out of office. Rinehart lays out his case in a comic book he's sending out to voters, which—you may be surprised to learn—he wrote himself.

I went ahead and compiled every page of the comic into one very big image, available beneath the fold.


Read more.

Wednesday, July 16, 2008

And now for something depressing

The NYT asks readers what we should do to stimulate the economy. Most of the responses are predictably short-sighted and ignorant; the very first one takes the cake:
Triple the minimum wage.

That would bring it more in line with increases in efficiency and rates in the late 70s. People make more, they spend more. All the money is just tied up in investments now, like bonds in Fannie and Freddie.
Read more.

Tuesday, July 15, 2008

I'll link pretty much anything pertaining to Slimes

Read more.

It's called a road

Read more.

What it means to be a "skeptic"

Why all the posts on Bayes? As I said last time, a statement has to concentrate probability mass to be meaningful. The other day I got into separate discussions with someone about global warming and the theory of evolution. Both times, he claimed to be a "skeptic." But I don't think that word means what he thinks it means. In common parlance, it's often used to mean, "I disbelieve whatever the mainstream view is." So you'll see very silly people commenting on blogs and YouTube videos, reassuring the reader, "I'm the most skeptical person in the world, but. . . you should really look into the 9/11 truth movement." And so on.

But I caught myself producing a meaningless thought--or at least it was meaningless until I worked out exactly how to delineate the two: "That's not skepticism, that's denialism."

So, how to differentiate them? 9/11 truthers who claim to be valid skeptics, it seems to me, are using the word to mean that they are open to any and all points of view. I use "skeptic" to mean that one is open to any and all points of view, but assigns much higher probability to those hypotheses with more evidence. Saying "I'm uncertain" isn't enough. If you're uncertain about a hypothesis that the evidence shows to be orders of magnitude more reasonable than any alternative, then you're no longer a skeptic, you're a denialist. Though the vast majority of priors are wrong, most can be corrected through proper Bayesian updating. But lo, priors of 0 and 1 screw things up royally. Welcome to the human race.
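To see why those two priors are so poisonous, here's a quick sketch in Python (the numbers are my own toy illustration, nothing more): under Bayes' Theorem, a prior of exactly 0 or 1 is immune to any amount of evidence.

def update(prior, p_e_given_h, p_e_given_not_h):
    # P(H|E) via Bayes' Theorem, for hypothesis H and evidence E.
    p_e = prior * p_e_given_h + (1 - prior) * p_e_given_not_h
    return prior * p_e_given_h / p_e

# Evidence 1000x more likely if H is true than if it's false.
for prior in (0.5, 0.01, 1e-6, 0.0):
    print(prior, "->", update(prior, 0.999, 0.000999))
# 0.5 -> ~0.999; 0.01 -> ~0.91; 1e-6 -> ~0.001; 0.0 -> 0.0, forever.

Every nonzero prior moves toward the evidence; the prior of 0 never budges. Read more.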

The secret to science

I ran across this entry the other day, in which Gene Callahan takes issue with Bayesian inference as described in the last post:
the basic idea is that you "start out" by assigning some "prior probabilities" to various theories about some phenomenon, or outcomes of some event, and then multiply those "priors" by a factor based on how much more or less likely new evidence makes the prior.

For instance, you are a late 19th century physicist, and you are evaluating how likely it is the Newtonian mechanics is the true description of matter in motion. At that time, there would have been physicists who would assign p=1 to it being true, and p=0 to it being false. At the very least, many physicists would have assigned p=0 to something as weird as quantum mechanics being true!

Now, as the years pass, you are presented with startling new evidence about black body radiation, the photoelectric effect, and so on, and with a startling new theory in addition. According to the theory of Bayesian updating, the "rational" response is just to think you must be delusional in believing you have heard this new data! You had assigned an alternative theory a prior of 0, and now no factor the new evidence recommends multiplying that prior by can ever change that initial assignment of p=0.

Of course, that is not what real scientists did at all. Instead, they assigned whole new "priors" -- they thought, "Mon Dieu, I had never considered the possibility of this theory or this evidence, and therefore I was in a state of 'radical uncertainty,' and ought to re-think everything." But allowing that maneuver thwarts the whole motive for Bayesian updating, which is to turn rational choice between theories into a formal, mechanical procedure.

I have my own response to this, which I'll get to in a bit, but I noticed Callahan also leaves this message in the comments for someone who disagrees with him:
Oh, and anonymous, it's really best you leave this sort of thing to the experts, OK?

It was a response to something the anonymous poster said to the same effect, but in a far more arrogant tone. So, yes, let's do exactly that. Let's find out what E.T. Jaynes, the expert on Bayesian probability, had to say, discussing the case of scientific experiments appearing to validate ESP:
[An ESP researcher] will then react with anger and dismay when, in spite of what he considers this overwhelming evidence, we persist in not believing in ESP. Why are we, as he sees it, so perversely illogical and unscientific?

The trouble is that the above calculations represent a very naïve application of probability theory, in that they consider only Hp and Hf, and no other hypotheses. If we really knew that Hp and Hf were the only possible ways the data (or more precisely, the observable report of the experiment and data) could be generated, then the conclusions that follow from [the above equations] would be perfectly all right. But in the real world, our intuition is taking into account some additional possibilities that they ignore.

Probability theory gives us the results of consistent plausible reasoning from the information that was actually used in our calculation. It can lead us wildly astray. . . if we fail to use all the information that our common sense tells us is relevant to the question we are asking. When we are dealing with some extremely implausible hypothesis, recognition of a seemingly trivial alternative possibility can make orders of magnitude difference in the conclusions. . .

There are, perhaps, 100 different deception hypotheses that we could think of and are not too far-fetched to consider, although a single one would suffice to make our point. . .

Introduction of the deception hypotheses has changed the calculation greatly. . . each of the hypotheses is, in my judgment, more likely than Hf, so there is not the remotest possibility that the inequality could ever be satisfied.

Therefore, this kind of experiment can never convince me of the reality of Mrs. Stewart's ESP: not because I assert Pf = 0 dogmatically at the start, but because the verifiable facts can be accounted for by many alternative hypotheses, every one of which I consider inherently more plausible than Hf, and none of which is ruled out by the information available to me.

You can read the chapter for the probability calculations. The point is, many inequalities can arise in applications of Bayes' Theorem that look like unreasonable dogma. Sometimes they are and sometimes they are not. Nineteenth-century physicists didn't dogmatically believe their theories with 100% probability. They simply took the evidence currently available to them and applied it. The fact that they were open to new evidence at all means that their priors were not really 1 or 0! If you don't believe it, it's pretty easy to think of groups who do have priors of 1 or 0 for their beliefs. Young Earth Creationists can be presented with all the evidence in the world for an old earth, but most of them will never be convinced. 9/11 truthers can examine the evidence for their grand questions all day, and never arrive at the right answer.
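To make Jaynes' point concrete, here is a toy version of his calculation (the numbers are purely illustrative--mine, not his): once even one modestly plausible deception hypothesis is on the table, a striking hit rate moves probability mass to deception, not to ESP.

# Hp: subject guesses by chance; Hf: ESP is real; Hd: deception or error.
priors      = {"Hp": 1 - 1e-4 - 1e-12, "Hf": 1e-12, "Hd": 1e-4}
# Probability of the reported hit rate under each hypothesis:
likelihoods = {"Hp": 1e-20, "Hf": 0.5, "Hd": 0.5}

p_data = sum(priors[h] * likelihoods[h] for h in priors)
for h in priors:
    print(h, priors[h] * likelihoods[h] / p_data)
# Virtually all the posterior mass lands on Hd; Hf ends up around 1e-8.

The "trivial alternative possibility" changes the answer by orders of magnitude, exactly as Jaynes says, with nobody ever setting a prior to 0.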

The secret to science is this: you never assign a probability of 0 or 1 to a proposition. Even if you think you're doing so, as long as you're willing to believe something else, your brain is revising that probability you think you have slightly downward. What follows is that human argument, likewise, is about concentrating probability mass. This is where so many logical fallacies come from, it's why politicians are miserable to listen to, and it's why Smith, from my modified example in the last entry, is a very foolish person for assigning an exactly equal probability to all alternate, non-cigarette hypotheses. If the reverse can't be true, it's not an argument. If a statement doesn't give us something to plug into a Bayesian framework, even if only as a rough approximation of the actual math, it's effectively meaningless. Read more.

A quick intro to Bayesian inference

This, oh readers of my blog, is Bayes' Theorem, which you should learn well:

P(A|B) = P(B|A) * P(A) / P(B)
In English, it says, "The probability of A given B is equal to the probability of B given A times the probability of A, all divided by the probability of B."

But that's still not quite English, so let me put it another way. P(A|B) and P(B|A) are called "conditional probabilities." P(A) and P(B) are what we call "prior probabilities"--what we believe about A and B before we know anything about the other. So, when you're solving for P(A|B), you're asking, "What's the probability of one thing, given that some other, related thing has happened?"

People incorrectly apply probabilities all the time without considering Bayes' Theorem. You might even catch yourself doing it. Try out a probability problem and see: 10% of the population uses marijuana regularly, and a given drug test is 90% accurate--that is, it returns the correct result 90% of the time, whether or not the person uses. What's the probability that a randomly selected person who tests positive for marijuana use actually uses marijuana?

In general, people tend to ignore the ever-important prior information that 90% of the population doesn't use marijuana. They tend to think that the probability that our randomly selected person does pot is 90%. It's actually 50%. If you want the math, it works out like this:

Let A be "uses marijuana" and B be "tests positive." Then:

P(B|A) = 0.9
P(A) = 0.1
P(B) = P(B|A)*P(A) + P(B|A')*P(A') = 0.9*0.1 + 0.1*0.9 = 0.18

In this case, P(B|A)*P(A) is the probability of a true positive and P(B|A')*P(A') is the probability of a false positive. Adding these two cases gives us the probability of B independent of A, or the odds that the test is positive regardless of whether the person uses marijuana. Now we have

P(A|B) = P(B|A)*P(A)/P(B) = (0.9*0.1)/0.18 = 0.5

So, as you can see, prior information is essential when calculating conditional probabilities. We can shift the probability that this given person smokes pot from 10% to 50%, and no more. This is counterintuitive to a lot of people, but it's strictly true. If this test comes back positive, it can only give you 50% confidence that a person smokes pot.
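If you'd rather watch it run, here's the same arithmetic as a few lines of Python (just a sketch of the example above, with the numbers as stated):

# A = "uses marijuana", B = "tests positive"
p_a = 0.1               # P(A): 10% of the population uses
p_b_given_a = 0.9       # P(B|A): true positive rate
p_b_given_not_a = 0.1   # P(B|A'): false positive rate

# Total probability of a positive test, P(B):
p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)

print(p_b)                        # 0.18 (up to float rounding)
print(p_b_given_a * p_a / p_b)    # 0.5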

Now for something a little more controversial--and I'll write a post about the controversy later. Bayes' Theorem can be used to update our beliefs.

Well, maybe you don't think it's that controversial, when you think about the above example. You shift your belief from P=.1 to P=.5, no problem, right? Ah, but what if you don't have a nice, handy prior given to you from the annals of science? What if you don't know that 10% of the population smokes pot? What if you think it's significantly higher? Lower? What if you believe 0% of the population smokes marijuana? Then, according to Bayes' Theorem, even a 100% accurate marijuana test can't convince you otherwise--you simply won't believe the test can measure marijuana use.

But we'll come back to that. For now, a quick example, so you know what I'm talking about in the next entry. I'm going to steal this bit from Against the Modern World:
A stubborn, but rational, man, Smith, thinks it is extremely unlikely that cigarette smoking causes lung cancer. For Smith, say, P(cigs cause cancer) = 0.2. Instead, he licenses only one alternative hypothesis: that severe allergies cause cancer. Since these hypotheses are exhaustive, on pain of inconsistency, Smith must believe P(allergies cause cancer) = 0.8.

Now, suppose Smith's Aunt Liz dies of lung cancer. Furthermore, suppose Aunt Liz has been a heavy smoker her entire life, then P(Liz gets cancer | cigs cause cancer) = 1 (certainty). Suppose, also, that Liz has had minor allergies for most of her life; since these allergies are only minor, let's say the probability she gets cancer under the hypothesis that severe allergies cause cancer is only 0.5.

Briefly, how should we calculate P(E) here? We sum over the weighted possibilities:

P(E) = P(H1)P(E|H1) + P(H2)P(E|H2) = 0.2(1) + 0.8(0.5) = 0.6

So, now we can use Bayes' Rule to calculate Smith's (only consistent) subjective degree of belief in the hypothesis that cigarettes cause cancer given the evidence that Aunt Liz has died of cancer.

P(H=cigs cause cancer) = 0.2
P(E=Liz gets cancer | H=cigs cause cancer) = 1
P(E=Liz gets cancer) = 0.6

Plugging these values into Bayes' Rule we get:

P(H|E) = P(H)P(E|H)/P(E) = 0.2(1) / 0.6 = 1/3


So, in light of this evidence, Smith's belief in the hypothesis that cigarettes cause cancer has increased from 1/5 to 1/3.

Here's something to note: what if Smith hadn't assigned the remainder of his cancer belief to allergies, but to a catch-all hypothesis--absolutely anything else causes cancer? Under that catch-all, Liz's cancer is certain no matter what, so P(E|H2) = 1.

The denominator would then be

P(E) = 0.2*1 + 0.8*1 = 1

and his inference would become (0.2*1)/1 = 0.2. He wouldn't have changed his belief at all.
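And here is the whole Smith example in a few lines of Python (a sketch, using the numbers from the quoted post):

# H1 = "cigs cause cancer"; H2 is the only alternative Smith allows.
def posterior(p_h1, p_e_given_h1, p_e_given_h2):
    # P(H1|E) for two exhaustive hypotheses.
    p_e = p_h1 * p_e_given_h1 + (1 - p_h1) * p_e_given_h2
    return p_h1 * p_e_given_h1 / p_e

print(posterior(0.2, 1.0, 0.5))   # allergies alternative: 1/3
print(posterior(0.2, 1.0, 1.0))   # catch-all alternative: 0.2 -- no update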

This, my friends of blogland, is the secret to science. Read more.

Tuesday, July 08, 2008

Vermont and naturopathy

There is a lot wrong with the American health care system. Laws like this are not going to make things any better. Read more.

Wednesday, July 02, 2008

Mike Tyson's Jungle Beat



Someone please record a video of Mike Tyson being knocked out with bongos. Please? Read more.

As if these disciplines don't get compared enough.

Pharyngula writes, on Chris Turney's Ice, Mud and Blood:
This is the [book] you'll be able to hand to climate change denialists, and it's a winner.

Its virtues are the same as those of his previous book: the careful documentation of exactly how we know what we know, and less dictation of the conclusions. This is useful, because as we all know, climate is a phenomenon that shows a lot of variability, exhibits patterns in its history, and also has large degrees of uncertainty, phenomena that denialists can seize upon to magnify that uncertainty into a basis for an unwarranted rejection of well-supported hypotheses.

That last sentence works pretty well for economics, too. If there were an easy-to-read but technical how-we-know book for economics, maybe he wouldn't be so confused as to think of Marx and Smith as merely inflamers of political debate. Read more.