Saturday, December 25, 2010
Capitalism requires scarcity to work. If you have unlimited amounts of something, its supply is infinite, and its value is 0. No one would pay for something they can just effortlessly pick off of a tree. And because you can't make a profit off of it, capitalism will never attempt to end scarcity, and will oppose anyone who does.
And it has. Even though we don't have the technology to end physical scarcity (Well, not completely anyway. If it weren't for stupid political situations, we could easily feed every human on the planet.), there's no reason we can't end digital scarcity. Any kind of digital information is just a string of 1s and 0s and can be copied infinitely, for free. But capitalist forces such as the RIAA oppose that kind of thing tooth and nail.
The only reason someone who wants a digital copy shouldn't be able to get it for free is that the person who made it wouldn't get paid. I'm not saying that that person shouldn't get compensated for their effort, but the capitalist model clearly isn't the most efficient way. And sometimes, people aren't looking for compensation. Look at the open source movement. People make programs and then give them away for free. Hell, that's what I did with AlDraw.
Although the end of physical scarcity seems like science fiction now, I don't see why it would be impossible, and technology like RepRap is gradually taking us in that direction. But if Star Trek style replicators were invented tomorrow, they'd encounter exactly the same resistance that digital copying does today.
Tuesday, December 21, 2010
Earth's tilt physically causes the seasons, including, of course, winter. When the northern (or southern) hemisphere is pointed toward the sun, the longer days and more direct sunlight make it warmer. When it points away from the sun, the shorter days and less direct sunlight make it colder. Hence the seasons. The solstices are when the Earth is pointing most directly towards or away from the sun, and they mark the transition between days getting shorter and days getting longer.
Earth's tilt is also the cause of the holiday season. Wikipedia lists no fewer than 36 celebrations related to the winter solstice, from cultures all around the world. And it's hardly surprising when you think about it. Even before the discovery/invention of the calendar, I'm sure it was clear to people that longer days were warmer days. The sun being up also determined when you could go hunting and get things done. So the reversal of the shortening days must have been extremely important.
Among that list from Wikipedia are some holidays you might recognize. Saturnalia, Yule, Hanukkah, Christmas. Christmas isn't really any different from the others, and most of our celebrations of it don't have much of anything to do with Christianity. The Christmas tree, for example, has roots in a pagan celebration.
Also, the date of Christmas has nothing to do with the story of Jesus's birth in the bible. The bible never gives a date, and it says that the shepherds are in the fields with their flocks, which they would only be during a warmer time of year. Most likely, the early church selected the date of Christmas to match a Roman holiday.
Jesus isn't the reason for the season, Earth's 23.4° axial tilt is.
Wednesday, December 8, 2010
What's the difference?
Chance is merely unpredictability. Things happening for little or no reason. Bad things happening to good people. Winning the lottery, or getting cancer.
Luck is chance that takes sides. Chance that can be swayed by a charm, or a ritual or that's just attracted to some people over others.
It's more complicated in that people can get lucky, but they can't be lucky. Getting lucky just means that, by chance, something fortunate happened to you. You pulled the lever and got the jackpot. But being lucky would mean that you would actually be more likely to get the jackpot than other people who are not lucky.
And now, it's on SourceForge. https://sourceforge.net/projects/aldraw/
AlDraw-src.jar is, as you might have guessed, all the source files. AlDraw.zip contains an executable jar, among other things. All you need to try it out is Java installed on your computer.
I'm still calling it a beta version. I've made significant progress, but there's still a ways to go. Particularly on the new features front. I've added hardly any of the features I've been wanting to. Most of my efforts have gone to the code design and usability goals. And fixing the things I broke while refactoring.
But it's in a workable, usable state now. Go try it out! And if you know Java, take a look at the code and tell me if it still looks like a total mess. It's released under GNU GPLv3 so you can copy, modify and redistribute it nearly however you like.
Monday, November 1, 2010
This also applies to philosophy. It's hard to have a deep philosophical discussion when you use a word to mean one thing and someone else uses the same word to mean something different. So, here's my small contribution. My take on what the word "belief" should mean in a philosophical context.
A belief is something that is held to be true.
Not something that is held to be true without evidence. That's faith. It's common in ordinary speech to use the two interchangeably. But I feel when precision matters, these definitions are the best readily available.
The biggest reason for that is that frequently, it doesn't matter why something is believed, only that it is believed. In such a situation, a more general definition is better. As far as I know, there's no better word for "something held to be true" than belief. Faith, trust, knowledge, etc. are similar but all more specific. Belief is good for its generality.
Another reason is that it's not always so cut and dried if a belief is faith, trust, etc. What if the belief is held for a reason, but a bad one? What if there is evidence supporting the belief, but more evidence against it? Is it faith, is it trust? Well, either way, it's still a belief.
Friday, October 22, 2010
Either time had a beginning, or time stretches back into infinity. Neither one makes sense to me.
There's an argument that time can't be infinite, which I disagree with. I don't think it's a logical impossibility, but intuitively, it makes no sense to me. I mean, it can't just keep going back forever. At the same time, time having a beginning makes even less intuitive sense. What came before the beginning of time? It may not be a valid question, but intuitively, it doesn't seem to make sense that it's an invalid question.
I think this is a situation where our savannah-evolved, monkey intuition fails us. It works very well with situations we deal with commonly, like where an object will hit if we throw it just so. But outside that realm, it simply doesn't work. Intuitively, time doesn't slow down, and length doesn't contract for an object moving quickly, because it's not perceptible at the speeds we move at.
I think the beginning of time is the same kind of thing. It doesn't seem intuitive because it's completely out of our experience. We can't rely on our intuition or our gut instinct, because they can't handle this situation. Instead, we have to rely on science and math.
Thursday, September 30, 2010
Doubtlessly, you believe at least one of these things. Which is why I support Blasphemy Day. Because something you believe is blasphemy to someone. The price you pay for being able to express it is that others can express something that you may disagree with or find personally offensive. To oppose that is to oppose the very concept of free speech.
And so here's another piece of blasphemy: Faith is a bad thing.
In this context, by faith, I mean belief without reason. The word is sometimes used to mean something more like trust. A common arguing technique by some believers is to deliberately confuse the two separate meanings. ("You have faith that your brakes will work, which is the same as my faith in god.") Let me be clear, I am talking about belief without reason. If you have a reason for believing something, it's not faith.
Why is faith a bad thing? Because it's fundamentally irrational. You can't draw a map without first looking at the territory. Something you believe by faith might be, by pure chance, correct. But given all the different mutually exclusive things you could believe by faith, that chance isn't very good. If you're actually interested in having your beliefs be correct, faith is a very bad approach to take.
Now, evidence can be misleading. People believe things that are incorrect, but with backing evidence all the time. For example, scientists at the end of the nineteenth century believed in luminiferous aether and with good reason. Light's a wave, it has to travel through something, right? But by always evaluating new evidence, you can correct your mistakes and make your beliefs closer to the truth. With faith, there is no such recourse.
Faith is a bad thing, but what's even worse than faith is the belief that faith is a good thing. Human irrationality is a bad thing, but it's an inevitable thing. We are not rational beings by nature, and a certain amount of irrationality is to be expected. But it is not to be desired. We must strive to be as rational as possible, because that is the only way we can progress. It is the only way to find our mistakes and fix them.
Promoting faith as a paramount good is perhaps one of the most evil things religion has ever done (well, ok, crusades, pogroms, fatwas, etc. are pretty bad too).
Wednesday, September 29, 2010
I don't like war. In fact, I hate it. I rarely declare war on others, and when others force me, I'm usually only trying to make them stop. That was what I was thinking when you took over Rostov. I would take it back, and give you whatever gold or resources you wanted to stop the invasion.
But you didn't stop there. No, you couldn't. You burned Rostov to the fucking ground, killing every last man, woman and child. Not only that, you conquered Brussels, my longest and closest ally. Russia protects her smaller allies, and for these atrocities, we shall not rest. There shall be no peace as long as we both exist.
You may have legions far larger than mine. But I am generations ahead of your technology. Your piddly little janissaries will scream as they are crushed under the treads of my tanks, and I am, as we speak, researching atomic theory. I am not afraid of nuclear winter, and you will be able to offer no counter-attack, or mutually assured destruction.
Mark my words: I will not rest until each and every one of your cities is a ruined, radioactive crater, marring the face of this planet.
Monday, September 20, 2010
And this weekend, I came up with the perfect solution. A Markov chain random text generator. A Markov chain is a random process in which the next state is dependent only on the current state. In the case of a Markov chain random text generator, the current state is the last n characters (or words) generated. My program works with characters, but the concept applies exactly the same to words.
A Markov chain text generator first reads some sample text. From this it builds a kind of probability table of which letters follow which other letters, and how frequently. Then it generates letters one at a time (considering only the last n letters generated), weighting them so the more common letters and letter combinations occur more frequently.
My particular generator is slightly different from most others because it knows how to begin and end. Most other Markov text generators are used to make large blocks of text based off of entire books. Mine is meant for short names. The beginning and end matter more, and it's good to have a name that ends in a logical way, such as "ton" or "ville", instead of just being cut off in the middle. So I treat the end of the name as a character like any other. It goes into the table the same as any other character, and it gets generated the same as any other, except that when it's generated, the name is done and the generator stops.
Here's an example of how it works, using a name it actually generated: San Francinattle. For early testing, I used a list of the 100 largest cities in the US, including San Francisco, Cincinatti and Seattle. I'll only consider those three for simplicity. Using more names affects the probabilities involved, but not the principles. In this example, I'm using order = 3.
For the first letter, two names have "S" and one has "C", so it has a 66.7% chance of generating "S". With just "S" it has a 50-50 shot of producing "e" or "a" next. With our limited dataset, once "Sa" is generated, the only next option is "n". That continues until we've generated "San Franci". The order is 3, so it only considers "nci" when generating the next letter. That letter combination was in Cincinatti, in addition to San Francisco. It doesn't care about what came before this, so it has a 50-50 chance of generating an "s" or an "n", and in this case it went with "n". Now with this combination of letters, Cincinatti is the only game in town, so it has to stick with it for the next four letters, but when the last generated letters are "att" then it matches Seattle. At which point, it finishes up the city name.
And so, a brand new city name, inspired by other city names, and following English spelling convention.
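The scheme described above can be sketched in a few lines of Python. This is a minimal reconstruction, not the actual program; ORDER, the END sentinel and the function names are my own choices:

```python
import random
from collections import defaultdict

ORDER = 3    # how many trailing characters the next choice depends on
END = "\0"   # sentinel "character" marking the end of a name

def build_table(names, order=ORDER):
    """Map each context (up to `order` preceding characters) to the
    characters that followed it in the sample names. Repeats in the
    lists are what make common combinations more likely."""
    table = defaultdict(list)
    for name in names:
        padded = name + END
        for i in range(len(padded)):
            context = padded[max(0, i - order):i]
            table[context].append(padded[i])
    return table

def generate(table, order=ORDER):
    """Generate one name, character by character, looking only at the
    last `order` characters, until the end sentinel comes up."""
    name = ""
    while True:
        next_char = random.choice(table[name[-order:]])
        if next_char == END:
            return name
        name += next_char

table = build_table(["San Francisco", "Cincinatti", "Seattle"])
print(generate(table))
```

Because the end sentinel sits in the table like any other character, names stop at plausible endings instead of being cut off mid-word.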
After some successful test runs, I compiled a list of over 19,000 US city names (information provided by your friendly neighborhood census bureau (and a little bit of programming to strip out unnecessary parts)).
Here are some of the names generated with order = 1:
Orrdak Lanson

Order = 2:
Mior Juengsperes Beleckbumonty
War Hilahe Crgeyrtosn
St. Cimandsonisond

Order = 3:
Tonkeen

Order = 4:
South Burleve Porth
Hess Hillage Placket
Washton

It's amazing how few orders there are between unpronounceable gobbledygook and reproducing sample names exactly. Orders 2 and 3 seem pretty good to me, though. They produce names that are easy to imagine could be real, but probably aren't.
I also got a list of about 2,000 German cities. Here are some names generated using those (order = 3):
Tuesday, September 14, 2010
Well, it's pretty obvious that redundancy isn't always a bad thing. Leaving the world of computer science, and looking at engineering, redundancy is frequently a good thing. To take an extreme case, life support systems on spacecraft are multiply redundant. Which is good, because life is awfully fragile in low-Earth orbit. Even here on Earth, redundancy in engineering tends to be a good thing. It has the downside of costing more, but has the advantage of preventing one thing going wrong from destroying everything.
But redundancy in engineering has little (if anything) to do with redundancy in programming. So, let's look at something a little more information based -- linguistics. Language is chock-full of redundancy. Here's a simple example: "I run", "He runs". What's with that extra s? We know who the sentence is talking about from the pronoun. Why bother with noun-verb agreement at all? Because the world is a noisy place. There's always some amount of background noise around. Frequently, people talk to each other in the middle of a crowd, where everyone else is talking too, which is a pretty incredible feat if you think about it. And in a noisy environment, some amount of spoken information is going to be lost in the background. And a little bit of redundancy can help you make sure you actually heard what you thought you heard. This form of redundancy still has its cost: it takes longer to get a complete message across.
And after this cross-discipline trek, I'll finally step back into programming, from human languages, to programming languages. I've been learning some Groovy for work. Groovy is a language that's built on top of Java. It does everything Java does and adds in some cool features of its own. One of the things it does, and part of its core philosophy, is to remove Java's unnecessary and redundant fluff. For example, Java requires a semicolon at the end of every statement. But most of the time, a single statement is on a single line. So, Groovy lets you use a new line to end the statement. You can still use the semicolon if you want, but it's not necessary. It's redundant. Stuff like that is all over the place. It makes the code shorter, but it also makes it (at least for someone new to Groovy) more difficult to understand. Finding a method's return type, for example, is no longer necessarily as simple as looking at the method's declaration. It's still unambiguous, but it's harder to find.
And not only is it harder for a human, it also makes mistakes more difficult for the compiler to catch. If you have information stored in two places, and you change one (intentionally or accidentally), the compiler can alert you to the inconsistency. If you changed it intentionally, you'll be reminded to change the other. If you changed it accidentally, you'll be reminded to fix it. If the information is stored in only one place, the compiler has no way of checking if you really meant the change. Here's an example with Python (since I haven't been using Groovy enough to encounter a good example of this yet). In Python, functions can be treated just like any other variable. I was writing a program in which I wanted to get the result of one function (which took no arguments) and then pass that to another function. Simple code like this: x = funcA; funcB(x); See the problem? It should have looked like this: x = funcA(); funcB(x); Those parentheses after funcA make a big difference. Without them, funcA itself is passed to funcB instead of the result of funcA. If you consider a theoretically ideal language which has absolutely no redundancy (brainfuck comes close), then any arbitrary string would compile and run. Which means if you make a single typo, it will still work, it just won't do what you want it to do.
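Here's that bug as runnable code (funcA and funcB are my stand-ins, returning arbitrary values):

```python
def funcA():
    return 42   # takes no arguments, returns a value

def funcB(x):
    return x * 2

# The typo: without parentheses, funcA is the function object itself,
# so funcB receives the function, not its result. Nothing flags it
# when you write it; it only blows up at runtime, when x * 2 hits a
# TypeError.
x = funcA          # oops -- x is now <function funcA>, not 42
try:
    funcB(x)
except TypeError:
    pass           # the mistake only surfaces when the code runs

# What was intended:
x = funcA()        # call funcA; x is 42
result = funcB(x)  # doubles the result
```

And if funcB had happened to accept a function object without erroring, the program would have run to completion and just quietly done the wrong thing.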
I was gonna talk more about other forms of redundancy in programming. Higher level stuff in the overall design of the program rather than the nuts and bolts of the language. But this post is plenty long enough as it is, so I'll wrap up with conclusions now. Is redundancy a bad thing? Not necessarily. The advantages can outweigh the disadvantages. But there are always disadvantages. In most of my examples, the costs were pretty small compared to the benefits. But, especially in the higher level, more abstract stuff, the costs can be significant. So, I'll bring up another coding principle: Code by intention. If you're going to do something redundant, do it for a good reason. Do it intentionally.
Wednesday, September 1, 2010
There are those who say that solstices and equinoxes mark the beginning of each season. In which case we're still three weeks away from autumn. Others say that solstices and equinoxes mark the midpoint of each season.
Nonsense, I say! Solstices and equinoxes have to do with the alignment of Earth's tilt to the Sun. Seasons have to do with the weather and the temperature. The two are related, but not the same.
If you say solstices mark the beginning of seasons, then a week before Christmas isn't winter yet, which seems kind of ridiculous. If you say equinoxes mark the midpoint, then Valentine's Day is already spring, which is even more ridiculous.
And more importantly, there is no first day of any season. As my brother and I were discussing on Facebook, everything is continuous, and seasons are a prime example of that. It's not like there is one single day where the temperature drops from 30°C to 20°C and all the trees change color. It's a gradual transition, like red turning to orange in a rainbow. Any line of demarcation is going to have to be arbitrary.
And since it has to be arbitrary, it might as well line up with another, well established arbitrary date-point. In this case, the first of the month.
And so, I decree: The first day of autumn is September 1st. By extension, the first day of winter is December 1st, the first day of spring is March 1st and the first day of summer is June 1st. And it lines up much better with the weather that way.
Tuesday, August 17, 2010
I've been working on improving it. I've decided I want to do a major overhaul, and make it version 2.0.
There are three big things I want to focus on:
- Improving code design. -- When I first wrote it, I didn't really give any thought or attention to making it good code and easily maintainable and extensible. As I gradually expanded it, I rewrote the worst parts to make it more flexible. But for the scale of changes I want, I don't think that's going to be enough. Also, I want to be able to show the code to another programmer and not have them think "What the hell were you doing?"
- Improving usability. -- This is probably the worst thing about the program right now, because I'm the only one who uses it. There are a lot of things about the program that make perfect sense to me, but wouldn't to anyone who just started using it, because they didn't write it, or get used to its shortcomings as they worked to fix them. If anyone is willing to help test it, let me know.
- Add features. -- Copy and paste to make repeated designs like tiling easier. Make zooming and panning more fluid. Add colors. Pretty straightforward.
Sunday, August 15, 2010
I'll let a Greek philosopher (probably not actually Epicurus, though usually attributed to him) make the connection.
Is God willing to prevent evil, but not able?
Then he is not omnipotent.
Is he able, but not willing?
Then he is malevolent.
Is he both able and willing?
Then whence cometh evil?
Is he neither able nor willing?
Then why call him God?
One of the most common solutions to the problem of evil is to say God has "mysterious ways" and these acts serve some greater purpose. But God is supposed to be omnipotent. Not just really, really powerful - all-powerful. If he were all-powerful, he could achieve the same purposes without the killing and destruction.
Some people say that God can't be morally judged the same as humans, but I don't see why not. Whether something is moral or not isn't determined by the actor's power or knowledge. (Note to self: write a post going into more detail about this.)
One of the most... interesting... solutions to this problem that I've seen is that natural disasters are caused by human evil and sinfulness and for God to prevent them would interfere with our free will. Of course, there was no explanation of how humans cause natural disasters, or how God would be interfering with free will by eliminating them.
And then of course, there is the solution that there is no such entity that is omnipotent, omniscient and omnibenevolent. Personally, I find that to be the most parsimonious solution.
Monday, August 9, 2010
I know you won't believe me, but the highest form of Human Excellence is to question oneself and others.
In my previous post I talked about why truth is a good thing. Continuing that, questioning is also good. Because questioning is one of, and perhaps the single most important tool in finding the truth.
Not everything you believe is true. Not everything I believe is true. In order to discard our false beliefs, we must first find them. And the only way to do that is to question our beliefs.
Consider the modern fable of the five monkeys. The monkeys continued believing that it was a bad thing to try to get the bananas, even though conditions had changed. Just because something was true doesn't mean it still is.
This is why I so strongly support the freedom of speech and more generally the freedom of belief. The only way for the truth to prevail is for it to be critically examined.
Wednesday, July 28, 2010
It may sound obvious. That's what we've been taught since childhood. Lying is bad; honesty is good. But our parents and teachers are not infallible. They could have been wrong. Lying and bullshitting certainly seems quite prevalent in the behavior of those at the top.
Also, a major component of philosophy is to check your assumptions. Wrong assumptions lead to wrong conclusions, which is bad philosophy. So, are we sure that the truth is a good thing?
My answer is yes. First for practicality. If you're pursuing another goal, the truth will only help you get there. If you're trying to make people happy, you need to know what will actually make them happy. If you do something that you believe will make people happy, but are mistaken, you will achieve the exact opposite of your goal. And this is true of any goal. Even if your goal is to dissemble and mislead, you'll be able to do it better if you know the truth.
But beyond that, I feel that truth is good in and of itself. I can't really articulate why. It's a non-rational preference, the same way preferring pleasure to pain or happiness to unhappiness is non-rational. And I feel it's a very important preference, of the magnitude of pleasure or happiness.
Wednesday, June 30, 2010
Any good programmer practices this rule, to a limited extent. Calling a function instead of rewriting the same block of code over and over again follows the DRY Principle. But Hunt and Thomas suggest taking it even further, further than I would have thought practical.
Most people take DRY to mean you shouldn't duplicate code. That's not its intention. The idea behind DRY is far grander than that. DRY says that every piece of system knowledge should have one authoritative, unambiguous representation. Every piece of knowledge in the development of something should have a single representation. A system's knowledge is far broader than just its code. It refers to database schemas, test plans, the build system, even documentation. They say to achieve that by using code generators, automated scripts and other such tools.
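A tiny Python illustration of the difference (the tax-rate example is mine, not from the book):

```python
# Duplicated knowledge: the 7% rate lives in two places. Change one
# and forget the other, and nothing warns you that they now disagree.
def price_with_tax_dup(price):
    return price * 1.07

def tax_amount_dup(price):
    return price * 0.07

# DRY: the rate has one authoritative, unambiguous representation,
# and everything else derives from it.
TAX_RATE = 0.07

def price_with_tax(price):
    return price * (1 + TAX_RATE)

def tax_amount(price):
    return price * TAX_RATE
```

The same idea scales up: a database schema generated from one definition file, documentation extracted from the code itself, and so on.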
But I think this principle can be good outside of computer contexts. I think that it could be applied to the government, at least in some situations, to make it more efficient.
What brings this to mind is changing my address. You shouldn't have to change your address twice (once for the Post Office, once for the BMV). You should be able to change your address once, in one place and have that communicated to other relevant agencies. This would make the system somewhat more complex, but also more efficient and more consistent.
I'm sure there are other bureaucracies and other real life situations this would apply to.
Saturday, June 12, 2010
But it's not an unreasonable question. Why would you spend so much time talking about something that you don't think exists? I can't speak for anyone else, but here are my reasons.
- It's interesting. Just because something doesn't exist doesn't mean it's not interesting to think about. I also like talking about elves and alternate histories.
- I want to know the truth. If a god does exist, I want to know about that, and I won't find out by sticking my fingers in my ears.
- Finally, and most importantly, because people use their belief in god to support teaching creationism, banning abortion, bashing gays, mandatory public school prayers, and many other flagrant infringements on rights.
Wednesday, May 5, 2010
Anyway, a few years ago, I came across existentialism, and it resonated with me. I'm sure I had heard of it before then, but for whatever reason, it didn't stick. But this time, I looked into it, decided that I rather liked it, and so started considering myself an existentialist. I particularly like the idea that we create our own meanings. Meaning doesn't exist independent of us, waiting for us to find it, rather we create it.
Anyway, I needed to take an elective course, and Intro to Existentialism fit, so I took it. We went over four philosophers - Kierkegaard, Nietzsche, Heidegger and Sartre, and read The Grand Inquisitor by Dostoyevsky, The Metamorphosis by Kafka and saw the movie District 9.
After about a month into the class, I knew less about existentialism than I did before. Being slightly wiser, realizing how little I knew, I decided I should stop considering myself an existentialist until I knew more about it, and so could make an informed decision.
So, being done with the class, I gave it some thought. The first thing is that I disagree with most of what was said by most of the philosophers we studied. Interestingly, I disagreed with each philosopher a little bit less than the last. If I don't agree with what most existentialists said, how can I be an existentialist? Well, first it's important to remember that most existentialists didn't consider themselves existentialists. Hell, Kierkegaard and Nietzsche predated the term.
Further, if there was only one idea that all four philosophers shared (and there were damned few things all four philosophers agreed on), it was focusing on the individual over the group. So, I can disagree with them, and chalk it up to individualism. Seems appropriate. Also, the ideas I found appealing in the first place, I still find appealing, and while they are less central to the philosophy than I thought they were, they're still there.
And as a special bonus, here's a joke about existentialism. Don't forget to read the mouseover text.
Wednesday, April 14, 2010
Sunday, April 11, 2010
Here's my (atheistic) take on it: God is like Batman. I don't mean that he's a superhero who fights crime under the cover of darkness (though that would be pretty cool). What I mean is that the different versions of Batman are like the different versions of God. There's the Batman of the comics, the Batman of the early Batman movies and the Batman of the rebooted Christian Bale Batman movies.
In each version, certain important characteristics stay the same. Batman always dresses up as a bat to fight crime, has a batmobile and a batcave, etc. God is always the omnipotent creator of the universe, who personally cares about humans.
But also, important characteristics change. Who killed Batman's parents and where did he learn to be an expert in hand-to-hand combat? Who was God's most recent prophet and does he care if we eat pork?
So, is God the same God? Well, in the most important aspects, yeah, pretty much. But, that doesn't mean they're interchangeable regardless of context. You can't talk about why Michael Keaton's Batman never mentions Ra's al Ghul because it's not the same continuity. Similarly, you can't ask why the Christian God allows pork when the Jewish God prohibits it.
Of course, this is approaching God as a purely fictional literary character. A real entity can't be three different versions at once. Wait a minute...
(As a sidenote, I find it somewhat interesting that Aphrodite is another god(dess) who has rather different versions of herself.)
Monday, March 29, 2010
is the worst form of government, except all those other forms that have been tried.
Capitalism works well - very well - in certain controlled circumstances. When a resource is scarce, but not too scarce. When there are few barriers to entry. When there is strict regulation preventing monopolies from forming and other predatory business practices. When there's not too much unemployment, but not too little. When there's an increasing population. When income inequality is not too high, but also not too low.
If all these conditions, and others I haven't named, are met, capitalism is quite efficient. But too frequently, these conditions aren't all met. And when that's the case, capitalism kinda blows. Other forms of economy may blow even worse (though I'm sure other forms can outperform capitalism in certain circumstances), but that doesn't make capitalism good.
What brings this up is farmers destroying their crop, because they have too much. From a capitalist point of view, they're making the right decision. Reduce supply to increase price and profit. I can't see how an economic system that rewards destroying a resource (especially one as valuable and necessary as food) more than distributing it could possibly be considered efficient. It seems to me to be an incredible failing of the system of capitalism.
Saturday, March 20, 2010
Suppose you have an idea that is supposed to explain something. If any observation you make can be fit into this idea, then it doesn't have any actual explanatory power.
Now, let me explain. If something has no predictive power, it has no explanatory power, because if it can't tell you what to expect in the future, it can't tell you why something happened in the past. That is, if something can explain an outcome, you should be able to take that explanation and apply it to something whose result you haven't yet seen.
Something that can explain everything has no predictive power. Because if it explains x just as well as not-x, it gives you no reason to expect one over the other.
This is why falsifiability is so important in science. If nothing can show a hypothesis to be wrong, then the hypothesis "explains" everything. If there were anything that it didn't explain, then that potentiality would falsify the hypothesis. An unfalsifiable hypothesis can't explain anything.
Thursday, March 11, 2010
Note: this is not intended to apply to any specific "supernatural" phenomena. This in no way disproves god or ghosts, or whatever. It merely says that if they do exist, they are not supernatural.
*Though, by these definitions, other universes don't exist. They either interact with this universe, in which case they are part of this universe, or they don't, in which case they don't exist. I'm fine with this, since "exists" should imply existing in this universe. It's not really much good to talk about something that exists, but has absolutely no impact on us in any way whatsoever.
Saturday, March 6, 2010
Because everything is blasphemy. Every Christian church blasphemes against Islam and Hinduism and every non-Christian religion (and maybe even some different forms of Christianity too!) every single week. Every religion blasphemes against every other religion, and just about every statement of opinion is blasphemy to someone. Hell, even demonstrable facts can be blasphemous (consider evolution).
Something cannot be considered wrong simply because it is blasphemous, because otherwise, everything would be wrong.
Thursday, February 18, 2010
But it can be taken too far.
Ideas like Platonic Forms are recurrent themes in philosophy: the idea that abstractions are more real than the things they represent. This is, perhaps, the greatest folly of philosophy.
What exists is what exists. Our conceptions and abstractions are merely tools we use to understand and manipulate reality.
Monday, February 15, 2010
Really, it should be up to the student. If he thinks he can learn without going to class, he should be allowed to. Then he can be judged by whether or not he really did learn the material.
It seems like sheer arrogance for professors to require you to listen to them, as if they were the only way for someone to learn.
I will make an exception, though: in any class that involves group discussions, or something of that sort, where not attending can hurt the people who did attend, attendance should be mandatory.
Sunday, February 14, 2010
Thursday, February 11, 2010
Most theists are glad that god does exist. They'd prefer the universe with god to the one without him.
It seems like there are very few people who believe one way, but think the other would be better.
So, is this correlation, or causation, and if it's causation, does the belief cause the attitude, or does the attitude cause the belief? I'd speculate that it's some of all three.
Both elements are probably largely influenced by the same outside sources. For example, most of what people believe about god and religion comes from their parents. So, both the belief about god and the attitude toward god will be picked up.
Humans are terribly prone to magical thinking, so the attitude definitely helps foster the belief. If you think something is good, you're more likely to believe that it's true too, and vice versa.
And humans also prefer what they have to what they could have. If you've already concluded one way or the other, since there's no way to change it, you'll convince yourself that whatever you have is best. The theists will be more likely to ignore the bad things about god, and the atheists will be more likely to ignore the good things.
It's interesting, and I think it serves to both highlight our cognitive biases and demonstrate just how little actual evidence there is either way. People believe things they don't want to all the time when there's good solid evidence for it. I don't like high murder and rape rates, but I believe them.
Tuesday, February 9, 2010
The first problem is: what if you pick the wrong god? Worship Zeus, and Odin could damn you. Oops. Still, you might think it's better to choose some god rather than none. Picking the wrong god gets the same punishment as picking no god, but picking a god gives some chance of getting the infinite reward.
But it goes further than that. There's no reason to think that any god would give you infinite reward for believing in it and infinite punishment for not believing in it. It's possible, sure. But it's just as possible for a god to punish anyone who believes in it, and reward anyone who doesn't believe in it. Or, the god could reward only those who truly believe, and not just play some silly game. Or, god could capriciously reward and punish people regardless of what they did in life. Or, some other scenario I haven't thought of.
Also, there is a cost to believing that's not incumbent on the non-believer: following the tenets of the religion you've chosen.
With all those different possibilities, there's no reason to think trying to believe in god is better than not trying to believe.
Saturday, January 23, 2010
When was the last time you booted up your computer? Ever thought about why it's called that? It comes from a shortening of "bootstrapping", which in turn comes from the phrase "to pull yourself up by your bootstraps". Originally meant as an example of an impossible task, it eventually came to mean to better yourself without outside aid.
The metaphor got picked up by computing, where it became both more widespread and more technical, probably because it's used so damn much. Every time you turn your computer on, it has to bootstrap itself to get a complicated program like an OS going.
The process is probably most easily described using compilers. You need a compiler to translate human-readable code into a machine-readable executable. If you start with a small, incomplete compiler for a language, you can bootstrap the compiler. Using only the features available, write a compiler that's slightly better. Use that compiler to make a slightly better one, and so on, until eventually you have a full-fledged compiler and a language to go along with it.
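As a toy sketch (not a real compiler, and the feature names are just made up for illustration), you can model each stage as the set of language features it supports: every stage's source code is limited to what the previous stage can compile, but the compiler it produces understands more.

```python
def compile_stage(current_compiler, source_features, new_features):
    """'Compile' the next-stage compiler. Its source may only use features
    the current compiler understands, but the binary it produces can
    support additional features."""
    assert source_features <= current_compiler, "source uses unsupported features"
    return current_compiler | new_features

# Stage 0: a tiny hand-written seed compiler.
compiler = {"integers", "assignment", "goto"}

# Each later stage is written in the subset the previous stage can compile,
# while adding new features to the language.
stages = [
    ({"integers", "assignment", "goto"}, {"if/else", "while"}),
    ({"integers", "if/else", "while"}, {"functions"}),
    ({"functions", "while"}, {"closures", "garbage collection"}),
]
for source_features, new_features in stages:
    compiler = compile_stage(compiler, source_features, new_features)

print(sorted(compiler))  # the seed features plus everything added along the way
```

The key constraint is the `assert`: no stage can be written in a language richer than what already exists, which is exactly the "pull yourself up" part.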
But computing isn't the only thing this principle applies to. The concept is well described in Sid Meier's Alpha Centauri.
Technological advance is an inherently iterative process. One does not simply take sand from the beach and produce a Dataprobe. We use crude tools to fashion better tools, and then our better tools to fashion more precise tools, and so on. Each minor refinement is a step in the process, and all of the steps must be taken.
-Chairman Sheng-ji Yang, "Looking God in the Eye"
That's pretty much how all technological innovation works. Moreover, that's pretty much how all innovation works, technological or not. Technological, linguistic, philosophical, cultural. They all work by making slight improvements on earlier working models. And that's what civilization is. Our ancestors used their civilization to create a slightly better one for us, and we try to use that to create a slightly better one for our descendants.
Friday, January 15, 2010
I'll start with some terminology.
P(A) is the probability that statement A is true. For example, let's say I roll a fair die and A is "I roll an even number". Since the die is fair, all outcomes are equally likely, so P(A) = 3/6 = 1/2, since 2, 4, and 6 are even.
~A is "not A". ~A, then, is "I roll an odd number". Also, P(~A) = 1 - P(A) for any A. In this case, P(~A) = 1 - 1/2 = 1/2.
P(A|B) is read "the probability of A given B". Let's say B is "I roll a number greater than or equal to 4". So, if I roll the die and see that the number is greater than or equal to 4, then P(A|B) = 2/3, because two of the three possibilities (4 and 6) are even.
Bayes' Theorem states that P(A|B) = P(B|A)*P(A) / ( P(B|A)*P(A) + P(B|~A)*P(~A)). So, if we want to know how likely A is after making some observation, all we have to know is how likely the observation is if A is true, how likely the observation is if A is false, and how likely A was before we made the observation.
If we're not interested in the exact value of P(A|B), but just whether P(A|B) is higher or lower than P(A), then all we need to know is whether P(B|A) is higher or lower than P(B|~A). If P(B|A) > P(B|~A) then P(A|B) > P(A). If P(B|A) < P(B|~A) then P(A|B) < P(A). And, if P(B|A) = P(B|~A) then P(A|B) = P(A).
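As a quick sanity check, here's the die example run through Bayes' Theorem, using exact fractions so no rounding sneaks in. The numbers are the ones derived above; the helper function is just the formula restated.

```python
from fractions import Fraction

def bayes(p_b_given_a, p_a, p_b_given_not_a):
    """P(A|B) = P(B|A)P(A) / (P(B|A)P(A) + P(B|~A)P(~A))."""
    p_not_a = 1 - p_a
    numerator = p_b_given_a * p_a
    return numerator / (numerator + p_b_given_not_a * p_not_a)

# A = "I roll an even number", B = "I roll a number >= 4"
p_a = Fraction(1, 2)              # {2, 4, 6} out of six faces
p_b_given_a = Fraction(2, 3)      # of the evens, {4, 6} are >= 4
p_b_given_not_a = Fraction(1, 3)  # of the odds, only {5} is >= 4

print(bayes(p_b_given_a, p_a, p_b_given_not_a))  # prints 2/3, matching P(A|B)
```

Note that P(B|A) = 2/3 > P(B|~A) = 1/3, so the posterior 2/3 is indeed higher than the prior 1/2, just as the comparison rule says.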
Relating this back to the other post, I said that not observing evidence for a phenomenon makes that phenomenon less likely. Here's an example. Claim: tigers don't exist. Evidence: there are no tigers in my house. Let A be "tigers exist" and B be "there are no tigers in my house". Now, even if tigers do exist, it's very unlikely they'd be in my house: not the right environment, they'd need some way to get in, etc. So P(B|A) = .99999999. But if tigers don't exist, then it is absolutely impossible for tigers to be in my house, so P(B|~A) = 1. Since 1 > .99999999, P(A|B) < P(A). Of course, there's lots of other, stronger evidence that tigers do exist, so P(A|B) is still very high.
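Here's the tiger example worked numerically. The prior of 0.999999 is an assumed number for illustration (the post doesn't fix one); exact fractions are used because the shift is far too small for a casual floating-point comparison.

```python
from fractions import Fraction

# Assumed prior for illustration: we're nearly certain tigers exist.
p_a = Fraction(999999, 1000000)                   # P(A): tigers exist
p_b_given_a = Fraction(99999999, 100000000)       # P(B|A): no tiger in my house anyway
p_b_given_not_a = Fraction(1)                     # P(B|~A): no tigers at all, so certainly none here

# Bayes' Theorem, as stated above.
posterior = (p_b_given_a * p_a) / (p_b_given_a * p_a + p_b_given_not_a * (1 - p_a))

print(posterior < p_a)   # True: the tiger-free house nudges the probability down
print(float(posterior))  # but the posterior is still extremely close to 1
```

So the empty house really is evidence against tigers existing, just evidence so weak it's swamped by everything else we know.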
Wednesday, January 13, 2010
If you should be seeing evidence of a phenomenon and you aren't, then that is evidence that the phenomenon doesn't exist. Consider the Michelson-Morley Experiment. It was designed to measure the speed of the Earth through the luminiferous aether. And it found... absolutely nothing. That, and other experiments which failed to detect the aether, overthrew the theory. The absence of evidence was the evidence of absence.
Even in cases where you wouldn't expect to see evidence, the absence of it is still weak evidence of absence. I say this based on Bayesian reasoning. If a phenomenon exists, but you wouldn't expect to see evidence of it given the circumstances, presumably there's still a non-zero (though small) probability of seeing evidence of it. Whereas if the phenomenon doesn't exist, there is even less probability of seeing evidence for it. This means that not seeing the phenomenon does shift the probability of the phenomenon actually existing down, by however small an amount. Of course, depending on the specifics, it could be very, very weak evidence.
Usually, this sentiment comes up in reference to the existence of god. Just because we don't see evidence of god doesn't mean he doesn't exist. But it does make it less likely. The question then is, how much less likely? If god did exist, what would we expect to see different than if he didn't exist?
Monday, January 11, 2010
Saturday, January 9, 2010
Sometimes, I'm surprised when I realize that a lot of people don't believe in free speech. I guess I shouldn't be, really, since the concept is only a few centuries old, compared to millennia of human history.
The incident that really brought it to my attention was a couple months ago. A professor here at Purdue posted an argument against gay marriage on his personal blog. The argument he made was frankly, pretty stupid. But I won't get into that, since plenty of other bloggers have taken care of that for me, and it's really not pertinent.
This caused a rather large controversy at the time: there were many letters to the editor in the Exponent, other blogs talking about it, and the like. What struck me is that so many of those who disagreed with him wanted him fired from his position, or otherwise disciplined. When others pointed out the whole freedom of speech thing, the response was that the constitution's guarantee of freedom of speech didn't apply in this situation. That's true; the constitution doesn't forbid an employer from firing an employee for saying the wrong thing.
But that's not the point. When I talk about free speech, I'm not referring to the constitution. I'm talking about the principle. And the principle is that you don't want someone punished just for saying something that you disagree with. Even if it's wrong. Even if it's bigoted.
The whole point of freedom of speech is that it's the only way to determine whether something is wrong or bigoted. Just because people think something is wrong, doesn't mean it actually is. The only way for new ideas to be accepted or rejected is for the ideas to be expressed and critically examined. The appropriate response to someone saying something you disagree with, no matter how strongly, is not to call for their punishment, but to explain why you think they're wrong.
That was the biggest incident, but since then I've noticed other similar examples. Even though free speech is one of the things the modern world was founded on, it seems it hasn't really sunk in yet.