Daniel McKay
-
Dim - This is an odd response to me pointing out that you have already made this point and I have already answered it.
-
Dim - yeah, you made that point right at the start, and I responded to it then too.
-
Dim - It would be wrong to offer money in return for something and then, upon receiving the thing requested, not pay the money. If someone solves the problem, I will pay them the money. If someone makes a significant contribution to solving the problem, I will pay them part of the money.
-
I mean, my post sets rules regarding my giving away my money. It isn't the reason moral facts exist in the world, assuming that they do.
-
Nobody "sets" the rules. Things are either right or wrong (edit: or neither). People's opinions, or indeed country's laws don't affect that one way or the other.
-
Dim - I'm not sure I understand the question, but I'll try to answer the spirit of what I think you're asking and you can tell me if I'm off. I would say that the reason we should think that freedom (of the sort I have just described) is of objective moral value is that it can apply to all moral agents in all possible worlds and, indeed, might well be the only coherent measure of value that can do so.
-
Dim - I'm not sure what you mean by your second comment. By "freedom" I mean the ability of free (possessing free will), rational (possessing the capacity for rationality) agents (conscious entities capable of making some choice) to understand and make those choices that belong to them.
-
Dim - I mean, it doesn't answer it in any way, as it appears to be aimed at a different question. I'm looking for a solution to how to weigh freedom over different kinds of things, not just in a pragmatic, "what do you want to do", hypothetical-imperative kind of way, but in a universal, objective, categorical-imperative kind of way. It isn't about how to determine the value of your own freedom to do various things in relation to one another; it's about how you determine how to value the freedom of different people to do different things against one another correctly.
-
I agree that it isn't a problem for morality. I'm not sure what you mean by asking what the problem is; you were the one that raised this as if it were a problem. I suppose it is a problem in the sense that it's a bad thing that someone is doing something wrong, but people doing bad things is not something that causes issues for any moral theory.
-
Assuming that you're imagining the person doing something wrong, that isn't a problem for ethics. People do things that are wrong all the time; those things don't become less wrong because someone did them.
-
Yeah, I understand what you mean. But again, you are assuming that morality is something to do with the moral systems that social creatures come up with in order to live together more harmoniously. I don't think it is. I think moral truths (if there are any) are there whether or not we find out about them, and if the solitary predator ever becomes sufficiently rational to be a moral agent, then it would be bound by the same moral rules as any of us. I tend to agree that personhood is quite hard to develop in solitary predator species, but it seems plausible that it has happened at least a few times somewhere in the universe, either through a great deal of luck or perhaps through the artificial selection of another species trying to make it happen. Lions as they are now seem fairly clearly not to be moral agents. But if we were to breed a lion that was a free, rational agent in the sense that we are (putting aside the pretty substantial practical problems with that), it would be morally wrong for such a lion to maul a human to death (at least, most of the time it would be).
-
I would say that our moral intuitions are a combination of those social factors and evolutionary ones, but that is only an issue if we think our moral intuitions are a good indicator of moral truth. Given where our moral intuitions come from and how inconsistent they are, I think we can safely say that they are not a good indicator of moral facts. Saying "Objective biology based aspects" suggests to me that we might be taking quite different metaethical approaches that may need to be made explicit to avoid talking at cross-purposes with one another. I would say that moral facts, if they exist at all, are objective and universal facts that apply to all free, rational agents across all possible worlds. That perspective isn't really connected with what things happen to harm cooperative species. I suspect you are of a different metaethical view; would that be fair to say?
-
You are incorrect. It is in fact a problem.
-
Dim - The problem that I laid out in the initial post.
-
Not particularly familiar, but I gather he is a biocentrist. I don't agree that things are worthy of moral consideration merely because they are alive, though (or that they are not simply because they are not, though I suspect that, depending on how you define "alive", that may be less relevant).

As to your second point, I may need some clarification on it, but I will do my best to answer it. I certainly didn't mean to imply that we couldn't apply scientific rigour to the study of moral psychology, merely that I think we are taking different approaches to what the goal of morality is. I think I am right in saying that you are suggesting that morality is about our moral intuitions and the things that we value. What I am saying is that it is not; it is about moral truths that would be the same regardless of whether we knew about them, and about what actually has value, regardless of whether we value it. That is certainly not to say that moral psychology is a fruitless field, of course, merely that it is a different field from ethics. Unless you are saying that we can infer morality among moral agents by reference to our moral psychology because ought implies can, in which case I am sympathetic to that aim, but we would need to consider all possible moral agents, not just the ones that we have access to.

That still wouldn't be coercive, though. The hypothetical racist might well have such a gut feeling, but we are capable of rising above those feelings and acting sensibly instead. Their choice isn't taken away from them. They could still, for example, vote for sensible policies that go against their gut feelings. We can recognize that our moral intuitions are wrong and act against them.

As to your last point, two things. First, I don't think a priori reasoning is as difficult or hopeless as you seem to suggest it is. Second, I don't have any issue with taking context into account when determining what the right thing to do is. My theory is a consequentialist one, after all, so the context is going to matter a fair bit. Whether tackling a little old lady to the ground is a good thing or not might depend very much on whether she is about to be shot by the notorious Little Old Lady Murderer. I'm certainly not suggesting that context has no place in moral decision making. Not that you have really said that I am saying such a thing; I just wanted to clarify that I wasn't.