Tuesday, February 18, 2014

Ken Ham and Evidence

A couple weeks ago Bill Nye and Ken Ham had a debate about creationism. Of course, nearly every point Ham made was factually inaccurate. But beyond having the simple facts wrong, Ham also had a fundamental misunderstanding about the nature of evidence.

This was highlighted most clearly in Ham's answer to the question "What could change your mind?". His answer was, summarized, "Nothing". Now, this answer in and of itself is a serious strike against Ham's rationality. The point of changing your mind is to make your beliefs more accurate. If you don't even admit the possibility of changing your mind, you're saying that your beliefs cannot possibly be wrong, which is, well, arrogant to say the least.

But it's worse than just that. Ham also said that the Bible makes testable predictions. Alone, that's not a bad thing, quite the opposite. But when combined with his statement that nothing could change his mind, it shows that he doesn't understand the point of predictions. The point of predictions is to provide evidence, but the thing is, that evidence can go either way.

If you make a prediction and perform a test, and the results of the test match the prediction, then that is evidence that supports your beliefs. On the other hand, if you make a prediction, and perform a test, and the results don't match the prediction, then that is evidence that opposes your beliefs. If there is no such outcome that would go against your prediction, then it's not a prediction at all.

For example, suppose I held a rock, and based on my beliefs about the material it's made of and the laws of physics, I predicted that when I let it go, it would fall down. Then, if I dropped it and it fell down, that would be evidence in support of my beliefs. But if I dropped it and it didn't fall down, whether it fell up, hovered in place, or anything else, that would be evidence opposing my beliefs. If I had instead predicted that when I let it go, either it would fall down or it wouldn't, then I wouldn't actually have made a prediction at all. I wouldn't have specified, in any way, what the result of a test would be, which means no outcome could oppose my beliefs, but no outcome could support them either.

If Ham thinks the Bible really does make predictions, then those predictions failing to come true should change his mind. If nothing could change his mind, then his predictions can't actually provide evidence.

Monday, January 20, 2014

Folk Morality

Folk science is pre-scientific ideas about how the world works. For example, a common idea in folk physics is that an object in motion requires a constant force to stay in motion, and if the force stops being applied, the object will soon come to a halt.

Now, folk science is not always wrong. It actually tends to be very good at predicting what happens in everyday circumstances. If you're pushing a cart, the cart stops moving when you stop pushing. It's when you leave everyday circumstances that folk science fails.

I think the same idea applies to morality. Folk morality is what people generally use when making moral decisions. It doesn't have any kind of rigor or theory behind it, but in everyday circumstances, it works alright. Don't lie, don't steal, don't kill.

The biggest weakness in this analogy is that folk science is grounded in things that can be directly observed. Folk morality doesn't seem to be. As a result, it's much more prone to differ between cultures and eras. For example, two hundred years ago, slavery was common and accepted, but it isn't today.

Friday, December 13, 2013

The Spherical Cows of Economics

Some conservatives oppose all regulations by appealing to the free market. They say the free market is the best way of distributing goods, and any regulation makes it less free. Therefore regulations make the market less efficient, which is bad.

However, the free market is only efficient under certain conditions. One is that everyone involved has perfect information about the costs and benefits of their decisions. Another is that there are no externalities; that is, all the costs and benefits of a decision are borne by the people making it, not by third-party bystanders.

The thing is, these conditions rarely, if ever, actually hold. They're the spherical cows of economics. And the free market is inefficient to the extent that these conditions are false.

But government regulations can help bring these conditions closer to holding, and thus make the market more efficient. For example, requiring drug manufacturers to disclose their drugs' side effects lessens the information imbalance, allowing people to make more rational decisions. Likewise, taxes and subsidies can internalize the costs and benefits of externalities.
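To make the externality point concrete, here's a toy Python sketch with entirely made-up numbers (the $100 benefit and $30 pollution cost are illustrative, not real data). It shows how a tax equal to the external cost aligns a producer's private incentive with the true social value:

```python
# Hypothetical numbers: a factory earns $100 per unit, but each unit
# imposes $30 of pollution costs on bystanders (an externality).
private_benefit = 100
external_cost = 30

# Without regulation, the factory sees only its private benefit.
perceived_net = private_benefit                 # 100
true_net = private_benefit - external_cost      # 70: society's actual gain

# A tax equal to the external cost "internalizes" it: the factory's
# perceived payoff now matches the true social payoff.
tax = external_cost
perceived_net_with_tax = private_benefit - tax

assert perceived_net_with_tax == true_net
print("Perceived payoff with tax:", perceived_net_with_tax)
```

This is the basic logic of a Pigouvian tax: the market decision becomes efficient again because the decision-maker faces the full cost.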

So regulations do not make the free market less free. In fact, the free market needs regulations to be free.

Friday, December 6, 2013

De Morgan's Law and Duality

Before, I gave the challenge of replicating the boolean OR function using only NANDs (and replicating AND using only NORs). I'm going to show you the solution, using truth tables. A truth table is a table where you list out every possible combination of values of the inputs, and the corresponding value of the output. Here's a truth table showing the AND, OR, NAND and NOR functions.

X | Y | X∧Y | X∨Y | X NAND Y | X NOR Y
F | F |  F  |  F  |    T     |    T
F | T |  F  |  T  |    T     |    F
T | F |  F  |  T  |    T     |    F
T | T |  T  |  T  |    F     |    F

You can show that two functions are equivalent by showing that they have the same output as each other for every possible combination of inputs. Here's a simple example to show that ¬X = X NAND X.

X | X NAND X | ¬X
F |    T     | T
T |    F     | F


If you're not sure what one of the outputs should be, plug in the input values and evaluate it. F NAND F = T, and T NAND T = F, which you can see from the first truth table.
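This kind of exhaustive check is easy to automate. Here's a minimal Python sketch (the `nand` helper is my own naming) that confirms ¬X = X NAND X by trying every possible input:

```python
def nand(x, y):
    # NAND is the negation of AND
    return not (x and y)

# Check that NOT X equals X NAND X for every possible input.
for x in (False, True):
    assert (not x) == nand(x, x)
print("¬X = X NAND X holds for all inputs")
```

With only two values per variable, "every possible input" is a very short list, which is what makes truth-table proofs feasible.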

So, here's the solution to the first part of the challenge: X∨Y = (X NAND X) NAND (Y NAND Y). Here's the truth table.

X | Y | X NAND X | Y NAND Y | (X NAND X) NAND (Y NAND Y) | X∨Y
F | F |    T     |    T     |             F              |  F
F | T |    T     |    F     |             T              |  T
T | F |    F     |    T     |             T              |  T
T | T |    F     |    F     |             T              |  T


Try to figure out the second part of the challenge (that is, replicate the AND function using only NORs) using a truth table now. I'll wait.









The solution to the second part is X∧Y = (X NOR X) NOR (Y NOR Y). Interestingly, it's exactly the same as the first part, except with ∨ replaced by ∧ and NAND replaced by NOR. I'll talk about that more in a bit, but first notice that since (X NAND X) = ¬X, the solution to the first part can be rewritten as X∨Y = ¬X NAND ¬Y. Also, since (X NAND Y) = ¬(X∧Y), it can be further changed to X∨Y = ¬(¬X∧¬Y). The same way, the solution to the second part can be rewritten as X∧Y = ¬(¬X∨¬Y).
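Both solutions can also be verified mechanically. Here's a quick Python sketch (the helper names are mine) that checks each identity for all four input combinations:

```python
def nand(x, y):
    return not (x and y)

def nor(x, y):
    return not (x or y)

for x in (False, True):
    for y in (False, True):
        # X∨Y = (X NAND X) NAND (Y NAND Y)
        assert (x or y) == nand(nand(x, x), nand(y, y))
        # X∧Y = (X NOR X) NOR (Y NOR Y)
        assert (x and y) == nor(nor(x, x), nor(y, y))
print("Both identities hold for all inputs")
```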

This is an example of De Morgan's Law, which says that ¬(X∧Y) = ¬X∨¬Y and ¬(X∨Y) = ¬X∧¬Y. De Morgan's Law is a very important thing to remember when doing Boolean algebra. It's a useful way of simplifying expressions, and it's vital for programmers to know.
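Since De Morgan's Law only involves two variables, it too can be checked exhaustively. A short Python sketch:

```python
# Verify both forms of De Morgan's Law over all input combinations.
for x in (False, True):
    for y in (False, True):
        # ¬(X∧Y) = ¬X∨¬Y
        assert (not (x and y)) == ((not x) or (not y))
        # ¬(X∨Y) = ¬X∧¬Y
        assert (not (x or y)) == ((not x) and (not y))
print("De Morgan's Law holds for all inputs")
```

In practice, programmers use this to rewrite conditions like `not (a and b)` into the often clearer `(not a) or (not b)`.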

De Morgan's Law also ties back into the other point I made. Notice that the two forms of De Morgan's Law are the same, except with ∧ and ∨ swapped. In fact, you can take any true Boolean equation, swap ∧ with ∨ and swap T with F, and the resulting equation will also be true.* For example, X∧T=X, swapped, becomes X∨F=X. Also, X∧F=F, swapped, becomes X∨T=T. This property is called duality.

Why does this happen? Keep in mind that the labels we use are arbitrary. It doesn't matter if we use T and F or 1 and 0, or if we use ∧ and ∨ or AND and OR. What matters is the relationships that hold between the symbols. And the way we've defined them, the relationships between T and ∧ are exactly the same as the relationships between F and ∨. That is, X AND Y is true if and only if both X and Y are true. X OR Y is false if and only if both X and Y are false. Those say exactly the same things, just with the names changed.

*If the expression contains XOR, or other functions besides ∧, ∨ and ¬, those need to be rewritten to use only ∧, ∨ and ¬. Or they can be swapped with their own duals; for example, the dual of XOR is XNOR. ¬ is its own dual.

Monday, December 2, 2013

What Goals Should You Have?

Last time, I ended with the question, "What goals should we have?". Before, I said that "should" only makes sense in reference to goals, so how can this question be answered?

Obviously, using a goal to justify itself is circular reasoning. But you could justify a goal using other goals. If a goal helps you achieve your other goals, you should have it, in the same way you should do anything else that furthers your goals. Conversely, if a goal hinders your other goals, you shouldn't have it, in the same way you shouldn't do anything else that hinders your goals.

But then, what about the other goals? How do you determine whether or not you should have them? In the same way, referring to each other. This will form an infinite regress of self-reference, but that's not necessarily insurmountable. It could probably be represented similarly to Google's PageRank algorithm, which determines the importance of a website based on the number and importance of websites that link to it.
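To make the PageRank analogy concrete, here's a toy sketch in Python. Everything in it is hypothetical, the goals, the support structure, and the numbers; it just illustrates the idea of scoring mutually supporting goals the way PageRank scores linked pages, with each goal's importance flowing from the goals that support it:

```python
# Toy "goal rank": supports[g] lists the goals that goal g supports.
# The goal names and structure are purely illustrative.
supports = {
    "health":    ["career", "happiness"],
    "career":    ["happiness"],
    "happiness": ["health"],
}

goals = list(supports)
rank = {g: 1.0 / len(goals) for g in goals}
damping = 0.85  # same damping idea PageRank uses

# Power iteration: repeatedly redistribute each goal's importance
# among the goals it supports, until the scores settle.
for _ in range(50):
    new = {g: (1 - damping) / len(goals) for g in goals}
    for g, supported in supports.items():
        for s in supported:
            new[s] += damping * rank[g] / len(supported)
    rank = new

print(sorted(rank, key=rank.get, reverse=True))
```

The circularity doesn't prevent the scores from converging, which is the point of the analogy: mutual reference can still yield a stable answer.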

But doesn't that end up being just as circular as before? Well... Yes. And given two or more sets of goals which support each other equally well, and the unlimited ability to modify your goals, I don't know how you could determine which set of goals you "should" adopt. For that matter, I don't know what "should" means in that context.

But as it happens, I don't think we do have the unlimited ability to modify our goals. I think our goals are at least partially constrained by our biological nature. We need to eat. You can deliberately refrain from doing so, but I think that's more a matter of acting on a conflicting goal than of not having the goal to eat.

And if that is the case, then the goals you can't change give you a starting point to base the goals you can change around.

Thursday, November 21, 2013

Boolean Algebra

You may have heard that computers only use the numbers 0 and 1. It's true. Everything computers do is done using electronic switches, and those switches can only be either on or off. (That's not strictly necessary. It's possible to make an electronic switch that has in-between states, but on/off is the simplest and easiest way to make a switch.) But how can computers calculate larger numbers or do all the cool things computers do, if they can only use 0 and 1? Well, one of the fundamental ideas behind how computers work is Boolean algebra.

Boolean algebra is a type of math that only uses two values. These two values are usually called true and false, or 1 and 0. There are three basic operations of Boolean algebra.

NOT X, also written as ¬X, is simply the opposite of X. If X is true, ¬X is false. If X is false, ¬X is true. Unsurprisingly, ¬¬X=X.

X AND Y, also written as X∧Y, is true if both X and Y are true. Otherwise, it's false. The AND operation is also sometimes written as multiplication, because it works the same way: 0*0=0, 0*1=0 and 1*1=1. And just as X*0=0 and X*1=X no matter what X is, X∧F=F and X∧T=X.

X OR Y, also written as X∨Y, is true if X is true, Y is true, or both are true. The OR function is sometimes written as addition. Similar to normal addition, 0+0=0, 0+1=1 and 1+1=... Well, there's no 2, so 1+1=1. This has the properties that X∨F=X, just like X+0=X, and X∨T=T, unlike normal addition.

You can put these operations together to get more complicated operations. For example, X XOR Y, also written as X⊕Y, is true if X is true or Y is true, but not both. It can be built out of the other operations like this: (X∨Y)∧¬(X∧Y). Similarly, NAND, NOR and XNOR are made by putting a NOT in front of an AND, OR and XOR, respectively.
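Here's that XOR construction as a quick Python check (the `xor` helper name is mine); it compares the built-up version against XOR's defining behavior for all inputs:

```python
def xor(x, y):
    # (X∨Y)∧¬(X∧Y): true when X or Y is true, but not both
    return (x or y) and not (x and y)

for x in (False, True):
    for y in (False, True):
        # On booleans, Python's != is exactly XOR.
        assert xor(x, y) == (x != y)
print("XOR construction verified")
```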

Although I said there are three basic operations of Boolean algebra, there really only needs to be one. You can make every single Boolean operation out of just NAND or just NOR. For example, ¬X = X NAND X. X∧Y = ¬(X NAND Y) = (X NAND Y) NAND (X NAND Y).
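Those two constructions can be checked the same way. A small Python sketch (trailing underscores in the names just avoid clashing with Python's own `not` and `and` keywords):

```python
def nand(x, y):
    return not (x and y)

def not_(x):
    # ¬X = X NAND X
    return nand(x, x)

def and_(x, y):
    # X∧Y = (X NAND Y) NAND (X NAND Y)
    return nand(nand(x, y), nand(x, y))

for x in (False, True):
    assert not_(x) == (not x)
    for y in (False, True):
        assert and_(x, y) == (x and y)
print("NOT and AND built from NAND alone")
```

This universality is why NAND gates are such a common building block in real hardware: one gate type suffices to build everything else.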

Here's a challenge for you: how can you make the OR operation using only NAND? Or, complementarily, how can you make AND using only NOR?

Wednesday, November 13, 2013

The Fundamental Question of Morality

The more I think about my last post, the more I think that "What should we do?" is the most fundamental question of morality. Should you kill one person to save five? Should you ever lie under any circumstance? It seems all moral dilemmas boil down to questions of action and decision.

But this alone doesn't really clarify matters much. It just pushes the ambiguity onto the word "should". What does it mean to say that you "should" do something?

Well, I came up with a partial answer here. As far as I can tell, for a "should" question to make sense, a goal is required. And I think in practice, whenever you make a decision, you do so for reasons, which can be described as goals.

But morality isn't just about how to achieve your goals. I think most people would say it's immoral for a sociopath to kill, even if that's his goal. In fact, I think most people would say it's immoral for someone to have such a goal.

So, maybe the most fundamental question of morality isn't "What should we do?" but rather, "What goals should we have?". But I just said "should" requires goals. How can you say what goals you should have without referring to goals? Is it even possible? If not, how can you answer the question?