Mirror Field

The role of values in non-optimizable decisions

Why some hard decisions resist every algorithm thrown at them, what makes a decision non-optimizable, and how values clarification — done honestly — actually helps.

A non-optimizable decision is one for which no formula, framework, or amount of additional information will produce the right answer. The options express different goods, and you cannot rank them on a single scale because the scales themselves are different. Most popular decision-making advice is implicitly written for optimizable decisions, which is why it tends to be unhelpful exactly when you most need help.

What makes a decision non-optimizable

Three features.

The options express different kinds of good. The job that pays well versus the job that does work you find meaningful. The relationship that's stable versus the relationship that's exciting. The career that builds public capital versus the one that gives you long uninterrupted blocks of time. These are not different amounts of the same good; they are different goods. There is no exchange rate that converts meaningful into well-paid; anyone making the conversion has to invent the rate.

The criteria you'd use to choose conflict with each other. Most non-optimizable decisions involve at least two values you genuinely hold that point in different directions. "I want to be there for my family" and "I want to do the work that calls me" both belong to the same person, and they sometimes pull against each other. No optimization across the two will produce a clean ranking; the conflict is the situation.

You cannot model who you'll be after. Some decisions change the chooser. The you that takes the new job is not exactly the you who is choosing whether to take it. The you who has the child or doesn't, who moves countries or doesn't, who ends the relationship or doesn't, is partly created by the decision. You cannot ask the post-decision you what they would have chosen, because the post-decision you only exists on one of the branches.

When all three features are present, no algorithm can solve the decision. The dual-process and naturalistic-decision research surveyed in how people actually decide is silent here, because both of those traditions describe how decisions get made under uncertainty about the world, not under uncertainty about what you want. Values clarification is the activity that takes the place of the algorithm, and it is more interesting, and more honest, than the wellness-content version suggests.

What values clarification actually is

The popular version of values clarification is a workbook exercise: rank the items on a list of values, identify your top five, use them as a guide. This produces a kind of output, but it produces it in a way that misses what makes values useful in actual decisions.

Real values clarification has a different shape. It involves looking at how you actually allocate your time, money, attention, and energy now, and noticing what those allocations imply about what you're already valuing — not what you'd say you value if asked. The implied values from how you actually live are usually different in interesting ways from the stated values from how you'd describe yourself.

The gap between stated and implied values is the working surface. Three honest questions:

What am I actually spending my time on, this month, that doesn't appear on my stated values list? Often something is. Whatever it is, you are valuing it, even if you wouldn't have named it. The question is not whether to keep valuing it; the question is whether to name it honestly, so it can be part of the decision.

What is on my stated values list that I'm not actually allocating time to? Often something is. The mismatch is usually one of: the value was inherited rather than chosen, the value was aspirational rather than active, or the conditions of life have shifted in a way that the stated value hasn't yet caught up to. Each of these is useful information for a current non-optimizable decision.

Which of my values, if I had to choose, would I let myself express more fully? This is the question the popular ranking exercise gestures at without quite asking. More fully means: you would let this value be visible in your actual allocations, even at the cost of others. The values you'd be willing to make visible are a different set than the values you'd nominally rank as important.

How this helps the decision

After honest values clarification, the non-optimizable decision is still non-optimizable. No algorithm has appeared. But the decision has a different shape: you can see which values each option would express and which it would suppress. That mapping doesn't tell you what to choose, but it converts the decision from "which option is correct?" to "which suppression am I willing to accept?" This is a more honest framing of the actual choice.

Most people, asked the original question, can't answer it. Most people, asked the reframed question, can.

The reframing also clarifies a category of decisions that look hard but are actually waiting on values clarity, not on more information. If you keep returning to a decision and each return produces no new information about the situation, the work is values, not facts.

A practical exercise

For a current non-optimizable decision in front of you, take ten minutes. Don't write a list of values. Write three sentences:

  • What this option would let me do more fully:
  • What this option would let me suppress or postpone:
  • Whether the suppression is one I am willing to live with:

Do this for each option you're considering. The sentences are usually more revealing than any ranking exercise. They produce a description of the choice as it actually exists, not as the optimization framing pretends it exists.

The decision still has to be made by you. No exercise hands you the answer. What the exercise gives you is a clearer view of what kind of person you would be choosing to be across the available options. Most people who have done this honestly find that the choice is not much closer to obvious, but the discomfort of the choice is now in the right place: it is the discomfort of giving something up rather than the discomfort of not knowing what to do.

If you'd like a structured frame for this kind of values work on a specific decision, a Mirror Field session is built for the kind of looking that doesn't try to optimize what cannot be optimized.
