Certainly there are cases where this works, but consider a thought experiment: the prisoner's dilemma. The setup goes like this.
Two suspects are arrested by the police. The police have insufficient evidence for a conviction, so, having separated the prisoners, they visit each of them to offer the same deal. If one testifies for the prosecution against the other (defects) and the other remains silent (cooperates), the betrayer goes free and the silent accomplice receives the full 10-year sentence. If both remain silent, both are sentenced to only six months in jail on a minor charge. If each betrays the other, each receives a five-year sentence. Each prisoner must choose either to betray the other or to remain silent, and each is assured that the other will not learn of the betrayal before the end of the investigation. How should the prisoners act?
Here it is in table form. It's written in terms of utility rather than disutility, so higher numbers are better; the numbers don't exactly match the sentences above, but only their relative ordering matters. Rows are your choice, columns are the other prisoner's, and each cell shows your payoff / the other prisoner's payoff.
|               | Cooperate | Defect |
|---------------|-----------|--------|
| **Cooperate** | 9 / 9     | 0 / 10 |
| **Defect**    | 10 / 0    | 1 / 1  |
If each prisoner decides which action to take by rational self-interest, the answer is to defect. This is because defecting dominates cooperating: no matter what the other prisoner does, you are always better off (if only slightly) by defecting. If the other prisoner cooperates, you get 9 utils by cooperating or 10 by defecting; if the other prisoner defects, you get 0 utils by cooperating or 1 by defecting.
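The dominance argument can be checked mechanically. Here is a minimal sketch (the function and variable names are my own, not from any standard library) that encodes the payoff table above and confirms that defecting is the best response to either move:

```python
# payoffs[(my_move, their_move)] = (my_utils, their_utils)
payoffs = {
    ("cooperate", "cooperate"): (9, 9),
    ("cooperate", "defect"):    (0, 10),
    ("defect", "cooperate"):    (10, 0),
    ("defect", "defect"):       (1, 1),
}

def best_response(their_move):
    """Return the move that maximizes my own utility against a fixed opponent move."""
    return max(["cooperate", "defect"],
               key=lambda my_move: payoffs[(my_move, their_move)][0])

# Defect is the best response no matter what the other prisoner does,
# which is exactly what "dominates" means.
print(best_response("cooperate"))  # defect
print(best_response("defect"))     # defect
```

Because the self-interested best response is the same for either opponent move, neither prisoner even needs to predict what the other will do.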
But this leads to an outcome that is clearly suboptimal: both prisoners defect and end up in the worst joint situation. The best outcome would be for both to cooperate, yet neither has any individual incentive not to defect.
This particular situation may seem contrived, but plenty of real-life situations follow the same pattern; Wikipedia lists many examples. This poses a problem for the initial premise that everyone looking after their own best interest produces the best overall outcome.
It seems to me that a better approach is not rational self-interest but rational pan-interest: instead of comparing actions by how their results affect you, compare them by how their results affect everyone together.
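Under the same payoff table, this rule is easy to sketch: rank joint outcomes by combined utility instead of by each player's own payoff. The names here are again hypothetical illustrations, not an established algorithm:

```python
# payoffs[(my_move, their_move)] = (my_utils, their_utils)
payoffs = {
    ("cooperate", "cooperate"): (9, 9),
    ("cooperate", "defect"):    (0, 10),
    ("defect", "cooperate"):    (10, 0),
    ("defect", "defect"):       (1, 1),
}

def pan_interest_choice():
    """Pick the joint outcome with the highest total utility across both players."""
    return max(payoffs, key=lambda moves: sum(payoffs[moves]))

# Mutual cooperation wins with a combined 18 utils, versus 10 for the
# mixed outcomes and only 2 for mutual defection.
print(pan_interest_choice())  # ('cooperate', 'cooperate')
```

Note that this selects a joint outcome, not an individual strategy: it shows what pan-interested reasoners would converge on, assuming both apply the same rule.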
Of course, in the real world this solution has problems: how do you know what is best for everyone else, how do you avoid being deceived or exploited, and so on. But I think ordinary rational self-interest faces most of the same problems in real life.