Thought Lengths

(Note: not all of the images have yet been copied over from the original post on LessWrong. This class doesn't feel like it obviously corresponds to a pattern, rather than to something more like the 'time dimension', but it probably does and I'm just not seeing it.)

If I say “Hi, how are you?” and you live in white middle class America, you’ll almost certainly say something resembling “Pretty good, you?” If I ask something like “What’s happened this week that you’ll remember five years from now?” I’ll get a response that’s a lot less predictable, but it’ll most likely be made out of words that I at least sort of understand.

There’s a lot going on in the space between question and answer, and thanks to the work of generations of psychologists and neuroscientists (and a few unlucky souls with iron rods through their brains and so forth), we’re getting closer and closer to having some clear/workable/reliable causal models.

We don’t have them yet, though, and while we’re waiting, it’s interesting to see what we can accomplish if we don’t even try. Call it a black box, and treat humans as complicated input/output devices with a whole bunch of levers and knobs—a stimulus goes in, some stuff happens under the hood, and a response comes out.

Imagine the stimulus/response pattern as a ray or vector, and your mind as a surface. The external, sensory universe is everything above the surface, and the internal, cognitive universe is everything below. Something—say, a question—sparks a line of thought, and that line of thought leads to something else—like an answer.

If the stimulus/response doesn’t take very long (it’s an easy question, or a familiar motion like catching a tossed ball, or a visceral response like one’s reaction to a strong smell), then in our model the line will be short, as will be the distance between the input and the output.

If, on the other hand, there’s significant processing involved, then we can imagine a much longer line, and a greater distance between input and output:

“Let’s see, 50 x 30 would be 1500, so 47 x 30 would be three thirties less than that, or 1410, and then we need to add a couple of forty-sevens, so … 1410 + 94, which is 1504. I’m like … ninety percent confident, there?”

In the example above, the thought process is fairly straightforward (at least for people who are comfortable with mental math). Once you’ve picked a strategy, it’s mostly just churning away until the calculation is complete.
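As a sanity check on the quoted mental math, the speaker's round-adjust-add strategy can be sketched in code. This is only an illustration of the decomposition (the function name and rounding scheme are my own, not from the text):

```python
def decompose_multiply(a, b):
    """Multiply a * b the way the quoted speaker does mentally:
    round a up to the nearest ten, multiply by the tens of b,
    subtract the overshoot, then add back the leftover ones of b."""
    rounded_a = ((a + 9) // 10) * 10           # 47 -> 50
    tens_of_b = (b // 10) * 10                 # 32 -> 30
    estimate = rounded_a * tens_of_b           # 50 * 30 = 1500
    correction = (rounded_a - a) * tens_of_b   # "three thirties": 3 * 30 = 90
    remainder = a * (b - tens_of_b)            # "a couple of forty-sevens": 47 * 2 = 94
    return estimate - correction + remainder   # 1500 - 90 + 94 = 1504

print(decompose_multiply(47, 32))  # -> 1504
```

The point of the sketch is just that each named intermediate value corresponds to one "step" along the line of thought; a longer line means more such steps before the output surfaces.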

There are plenty of stimuli, though, that don’t cause a straight march from stimulus to response, but instead send us all over our own minds, activating a large number of concepts and processes before finally cashing out to some new conclusion or action:

And furthermore, there isn’t always a single line. Sometimes, the same stimulus can spark multiple threads of thought, each of which will have its own length and path.

It’s also kind of fun to imagine what happens when things get subconscious, such as when we find ourselves making connections or entering emotional states that we can’t fully explain or justify. It’s pretty easy to imagine a second, deeper, opaque-ish surface that represents the limit of what we can “see” with our metacognition, but we’ll hold off on that for now, lest we summon the ogres.

Astute participants may be thinking "Isn't this just System 1 and System 2 again?", and there is certainly a lot of overlap with that model (which is another wrong-but-useful approximation).

However, where S1 and S2 are discrete (or at least discrete-ish), this model instead treats the range of possible thoughts as continuous. There is no single bucket for "short thoughts" that is distinct from a single bucket for "long thoughts." Instead, all thoughts are treated as the same basic sort of thing: some amount of below-the-surface processing, ending in an output.

What makes this model interesting from an applied rationality perspective is that it raises the question of whether a given thought is an appropriate length.

Some thoughts are too short and need to be lengthened, and some CFAR techniques can be thought of as designed to do precisely that. Think of goal factoring and Focusing, for instance, which take flinches and decisions that might otherwise be somewhat knee-jerk, and slow them down, flesh them out, and expand them, allowing for more processing before a final output.

Other thoughts are too long and need to be shortened. CFAR has fewer named techniques in this arena, but TAPs and CoZE both play in this space, as do Resolve Cycles. The whole concept of policy-level decision-making is similarly a thought-shortening frame—the idea being to set a policy so that future instances of a given problem or scenario can be addressed quickly and without a lot of meandering.

It's interesting to ask oneself the question "Where do I go wrong because I put in too little thought, and arrive at my outputs too quickly?" and it's worth asking the mirror question "Where am I spending too much time and attention, and should instead be working to shorten the processing time between stimulus and response?"

Some stimulus/response patterns that tend to be too short for many people:

- Sudden changes in plans, which cause them to grumble and grouse even if the new plan is better
- Unanticipated requests for time or energy, which often lead people to overcommit and make promises they start to regret later
- Rounding-off, in which people halo-effect or horns-effect other people, plans, or activities, losing opportunities to factor or mix and match

CFAR canon has a handful of techniques that are good at increasing the distance between input and output, and once you get “some thoughts are shorter than they ought to be” into your head as an organizing principle, you may find yourself reaching for those techniques more frequently and more appropriately.

Conversely, some stimulus/response patterns that tend to be too long:

…and again, there are techniques that can help. Being able to think “oh, this is a line of reasoning that I should be able to skip to the end of, or at least cache somehow once I finish, so that I can simply call it back up and don’t have to rederive it every time,” has been a big net positive for many people.

Finally (though this is a small benefit), the simple visual metaphor of moving the exit point for a given thought can help with things like non-useful emotional triggering during intense conversation. The above model has helped some CFAR staff recognize certain ... golf holes? Geysers? Lava tubes? ... where their thoughts tend to drift, and given them a clear way to evaluate potential replacements ("Is this new kind of 'answer' far enough from my old habits and reflexes that I won't just slide right back into my previous ingrained behavior?").

It’s neat that this model post-dicts a lot of things that make sense for entirely different reasons (such as slowly counting to ten before speaking, or rehearsing a given mental process until it becomes easy). As far as “tools you could teach a ten-year-old” go, we posit that this one is unusually sensible and versatile.