I have wondered for some time about the problems with purely rational ethical systems, like hedonism and utilitarianism. I have always been very skeptical of them. The ethical theory I like best is one centered on rights and freedoms, along with responsibility too, of course. I don't know if that in itself counts as an ethical system, though. I have heard people say that theories like utilitarianism are useful from a scientific standpoint in crafting laws, for example in asking whether a law will help the greatest number of people in the best way. I actually do have a question about hedonism, and a couple of questions, or at least hypothetical criticisms, of utilitarianism, that I'd like to give here.

First, hedonism. Could someone have a system that is purely hedonistic? I tend to think pain is the sole intrinsic evil. But does that mean pleasure is the sole intrinsic good? I tend to think not, for a couple of reasons. For one thing, as people after Jeremy Bentham noted, intelligent people tend to be more worried and less happy. And as John Stuart Mill pointed out, some of the pleasures intelligent people feel are more complicated and less primitive. I think a healthy, well-rounded life is best. I don't know if "healthy" used in that way has a fixed scientific meaning, but that is what I tend to think.

Also, around 1994 I got a book by J. J. C. Smart. He argued for utilitarianism, but he brings up a couple of good points in the book. Life is full of pain and displeasure. If pleasure were all that mattered, the best thing to do would be to strap someone into a chair against their wishes, shave their head, attach electrodes to their scalp, and then (with a feeding tube, of course) feed them pleasurable sensations all day long. Or why stop there? He points out that humans reason, so they tend to worry more than other animals. Why not just raise field upon field of contented sheep?

He supported utilitarianism, as I said, but he did have some controversial views. He didn't believe we should honor people's last requests, even if we led them to believe we would. I tend to disagree with that. And though I don't think he brings it up in the book, as I've said before, I think things like moral engagement are a good argument against utilitarianism.

Actually, I came up with two example arguments against utilitarianism a while back, and I'd like to share them here.

Example A. In the ideal utopia of tomorrow, a woman loses her husband. She is beside herself with grief and can't move on. The benevolent world state tells her she's being selfish, and they decide to erase her memories of her husband, so she won't even know he ever existed. She says no, I don't want that to happen. And I don't want to move on; I want to be miserable the rest of my life. My question is: that may lead to more pain in the world, but isn't it her choice if she wants to do that?

Example B. Again, in the ideal utopian state of tomorrow, the state decides to suppress all knowledge of the past: the Inquisition, the genocides, the torture, the human rights abuses. They think it might upset people. Would that be all right? I should add that in this scenario there is no chance those things could ever happen again, so it's not a question of learning from past mistakes. But people like to think they have a right to know about those things, don't they? And suppressing that information, even for benevolent reasons, is just wrong.

Anyway, as you can see, I have one question and a couple of hypothetical situations to be critiqued. Thank you in advance for your responses.