Where "the Sequences" are Right II
Map & Territory (2/2): Noticing Confusion and Mysterious Answers
Let’s jump straight to what’s right: noticing confusion! Yes, this is a great point, and I think it’s one of the most helpful pieces of advice given so far. This is central to both the first and second parts, because “noticing confusion” is one of the main ways to avoid answering a question with a mysterious answer (an answer which does not reduce confusion). The key, simplified intuition here is that one should not try to fight confusion with more confusion - even if confusion is draped in “mystery” (we might think that a mysterious phenomenon requires an equally mysterious answer).
While we’re still in Yudkowsky’s bread-and-butter territory, we should expect to see mainly the kind of distilled knowledge that is his strong suit, as well as, sadly, the boring part.
Yes, the boring part is where we expect him to be the most right, and to have the most genuinely helpful information. Maybe at another time, in a deeper article, I will write about why I think this aesthetic is correlated with utility (and most likely does not have to be, one hopes).
It is not quite perfect. My minor quibbles remain basically the same as they have been since the very first part. Essentially, Yudkowsky sometimes takes an (IMO) overly negative view of human flaws. This departs somewhat from my intuition about how intelligence works. I think of humans as having evolved in a rather continuous fashion, so distinct “levels” of intelligence, while possible, are not a given. We don’t really think of monkeys as “irrational”, just simpler and less intelligent than we are. In theory, a “well-calibrated” monkey does pretty well in the environment it lives in. It does not conjecture about whether the price of a stock will go up or down, and it doesn’t make “bad bets” on stocks at all, because as long as it stays in the environment it evolved in, it will never encounter anything remotely that complicated.
Human beings, however, put themselves and each other to countless tests. This is shown most saliently in “Positive Bias: Look Into the Dark” and “Lawful Uncertainty.” Both discuss actual psychological experiments that were designed to cause the subjects to screw up. Those screw-ups were then analyzed and given a name. In both cases, the experimenters clearly had expectations about what would be observed. These were not problems naturally pulled from the environment.
Key, nuanced insights about how to actually reason better always score highly with me.
When it comes to how not to reason, I think this can have value if the correct sources of the poor reasoning are identified. But that is actually quite difficult, except when there are clear signs of what we might think of as “motivated bias.” As far as that goes, I believe the topic comes up more in the book after this one.
Contrary to popular opinion, I don’t believe that positive and negative insights work the same way. I’ve had trouble communicating this belief before, especially in rationalist spaces, so it might be worth belaboring a bit.
I think of my view as being less black-and-white than the default view. The default view, as I’ve come to see it, does indeed treat ideas as mostly either correct or incorrect. That includes rationality practice, even though rationality is mostly a meta-thinking enterprise. But meta-thinking is where black-and-white thinking can potentially cause the most downstream problems. Empirically, this shows up in the way such thinking implies that catastrophe is a major risk almost anywhere, which is also how it gets disproven.
Noticing (one’s) confusion and searching for the idea which best reduces it, continually, is not black-and-white thinking, I want to emphasize!
I’m not going to write a conclusion, so if you are satisfied with a summary you can stop reading here. The rest of this will be my notes and scores with the final tally at the end.
Noticing Confusion - (1110/1200)
Focus Your Uncertainty - (95/100)
Still . . . your mind keeps coming back to the idea that anticipation is limited, unlike excusability, but like time to prepare excuses. Maybe anticipation should be treated as a conserved resource, like money. Your first impulse is to try to get more anticipation, but you soon realize that, even if you get more anticipation, you won’t have any more time to prepare your excuses. No, your only course is to allocate your limited supply of anticipation as best you can.
If you want to constrain your future expectations, to narrow the possibilities down to a manageable set, you need to focus your uncertainty. There are natural reasons to want to do this.
What Is Evidence? - (90/100)
This is why rationalists put such a heavy premium on the paradoxical-seeming claim that a belief is only really worthwhile if you could, in principle, be persuaded to believe otherwise.
Evidence is that which allows one to determine which of several possible states the world is actually in.
Scientific Evidence, Legal Evidence, Rational Evidence - (85/100)
As I write this sentence at 8:33 p.m., Pacific time, on August 18th, 2007, I am wearing white socks. As a rationalist, are you licensed to believe the previous statement? Yes. Could I testify to it in court? Yes. Is it a scientific statement? No, because there is no experiment you can perform yourself to verify it. Science is made up of generalizations which apply to many particular instances, so that you can run new real-world experiments which test the generalization, and thereby verify for yourself that the generalization is true, without having to trust anyone’s authority. Science is the publicly reproducible knowledge of humankind.
I don’t think this is wrong, but I’m maybe not that enthusiastic about belaboring these distinctions too much.
How Much Evidence Does It Take? - (100/100)
It is convenient to measure evidence in bits—not like bits on a hard drive, but mathematician’s bits, which are conceptually different. Mathematician’s bits are the logarithms, base 1/2, of probabilities. For example, if there are four possible outcomes A, B, C, and D, whose probabilities are 50%, 25%, 12.5%, and 12.5%, and I tell you the outcome was “D,” then I have transmitted three bits of information to you, because I informed you of an outcome whose probability was 1/8.
Higher score for being more precise and explaining how to do something with concrete calculations.
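To make the arithmetic concrete, here is a minimal sketch (mine, not from the essay) of the same calculation in Python:

```python
import math

def bits_of_evidence(p: float) -> float:
    """Mathematician's bits: log base 1/2 of a probability, i.e. -log2(p)."""
    return -math.log2(p)

# Four outcomes A, B, C, D with probabilities 50%, 25%, 12.5%, 12.5%.
for outcome, p in [("A", 0.5), ("B", 0.25), ("C", 0.125), ("D", 0.125)]:
    print(f"{outcome}: {bits_of_evidence(p):.1f} bits")
# Being told "D" (probability 1/8) transmits 3.0 bits, matching the quote.
```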
Einstein's Arrogance - (100/100)
It seems like a rather foolhardy statement, defying the trope of Traditional Rationality that experiment above all is sovereign. Einstein seems possessed of an arrogance so great that he would refuse to bend his neck and submit to Nature’s answer, as scientists must do. Who can know that the theory is correct, in advance of experimental test?
This is kind of interesting, and the comments (while low quality) are still thought-provoking. This one is somewhat pro-“Rationality,” in the old-school meaning of the word.
Occam's Razor - (100/100)
There is. It’s enormously easier (as it turns out) to write a computer program that simulates Maxwell’s equations, compared to a computer program that simulates an intelligent emotional mind like Thor.
This doesn't explain why Occam's Razor is true, but it does make the razor seem justified, in that the simpler theory is kind of the "nicest" one.
This also helps explain something about the last post, as well - why Einstein was justified in feeling confident about his theory (it was a relatively mathematically simple way of explaining empirical observations).
Your Strength As A Rationalist - (80/100)
We are all weak, from time to time; the sad part is that I could have been stronger. I had all the information I needed to arrive at the correct answer, I even noticed the problem, and then I ignored it. My feeling of confusion was a Clue, and I threw my Clue away.
So instead, by dint of mighty straining, I forced my model of reality to explain an anomaly that never actually happened. And I knew how embarrassing this was. I knew that the usefulness of a model is not what it can explain, but what it can’t. A hypothesis that forbids nothing, permits everything, and thereby fails to constrain anticipation.
Your strength as a rationalist is your ability to be more confused by fiction than by reality. If you are equally good at explaining any outcome, you have zero knowledge.
I think he’s overreacting to the sinfulness of this mistake, but I think he could have related this story anyway. A priori, I think actually predicting the right outcome in a situation isomorphic to this one is kind of difficult to do perfectly.
Absence of Evidence is Evidence of Absence - (100/100)
Your strength as a rationalist is your ability to be more confused by fiction than by reality; if you are equally good at explaining any outcome you have zero knowledge. The strength of a model is not what it can explain, but what it can’t, for only prohibitions constrain anticipation. If you don’t notice when your model makes the evidence unlikely, you might as well have no model, and also you might as well have no evidence; no brain and no eyes.
But in probability theory, absence of evidence is always evidence of absence. If E is a binary event and P(H|E) > P(H), i.e., seeing E increases the probability of H, then P(H|¬E) < P(H), i.e., failure to observe E decreases the probability of H. The probability P(H) is a weighted mix of P(H|E) and P(H|¬E), and necessarily lies between the two.
The second quote there is extremely helpful and basically explains the point.
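Here is a minimal numeric check of that identity (my own sketch; the probabilities are invented for illustration):

```python
# If observing E would raise P(H), then failing to observe E must lower it,
# because P(H) is a weighted mix of P(H|E) and P(H|~E).
p_E = 0.4             # invented prior probability of observing the evidence
p_H_given_E = 0.9     # invented posterior if E is observed
p_H_given_notE = 0.2  # invented posterior if E is absent

p_H = p_E * p_H_given_E + (1 - p_E) * p_H_given_notE  # prior on H = 0.48
assert p_H_given_notE < p_H < p_H_given_E  # the prior lies strictly between
print(p_H)  # 0.48

# This is also Conservation of Expected Evidence (next essay): the prior
# equals the expectation of the posterior, so you cannot expect evidence
# to move you in a predetermined direction.
```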
Conservation of Expected Evidence - (100/100)
If the witch had led an evil and improper life, she was guilty; if she had led a good and proper life, this too was a proof, for witches dissemble and try to appear especially virtuous. After the woman was put in prison: if she was afraid, this proved her guilt; if she was not afraid, this proved her guilt, for witches characteristically pretend innocence and wear a bold front. Or on hearing of a denunciation of witchcraft against her, she might seek flight or remain; if she ran, that proved her guilt; if she remained, the devil had detained her so she could not get away.
So if you claim that “no sabotage” is evidence for the existence of a Japanese-American Fifth Column, you must conversely hold that seeing sabotage would argue against a Fifth Column. If you claim that “a good and proper life” is evidence that a woman is a witch, then an evil and improper life must be evidence that she is not a witch. If you argue that God, to test humanity’s faith, refuses to reveal His existence, then the miracles described in the Bible must argue against the existence of God.
This is a straightforward argument against back-chaining.
Hindsight Devalues Science - (100/100)
This one argues against hindsight bias, as a similar thing to back-chaining.
The measure of your strength as a rationalist is your ability to be more confused by fiction than by reality.
In other words, it's healthier to stay somewhat confused by text you read than to immediately take it as given fact.
This piece also has a good example of a few pretend claims you can be “more confused by” as a test of this.
Illusion of Transparency: Why No One Understands You - (75/100)
June recommends a restaurant to Mark; Mark dines there and discovers (a) unimpressive food and mediocre service or (b) delicious food and impeccable service. Then Mark leaves the following message on June’s answering machine: “June, I just finished dinner at the restaurant you recommended, and I must say, it was marvelous, just marvelous.” Keysar (1994) presented a group of subjects with scenario (a), and 59% thought that Mark’s message was sarcastic and that June would perceive the sarcasm. Among other subjects, told scenario (b), only 3% thought that June would perceive Mark’s message as sarcastic. Keysar and Barr (2002) seem to indicate that an actual voice message was played back to the subjects. Keysar (1998) showed that if subjects were told that the restaurant was horrible but that Mark wanted to conceal his response, they believed June would not perceive sarcasm in the (same) message.
Be not too quick to blame those who misunderstand your perfectly clear sentences, spoken or written. Chances are, your words are more ambiguous than you think.
Ironically, given this chapter’s own point, I would have preferred that Yudkowsky not include these statistics reportedly from social science studies, and instead just present the question by itself to the reader each time.
I think they somewhat exaggerate the reaction I'm supposed to have to being told just how screwed up other people's thinking is, instead of just worrying about mine.
Expecting Short Inferential Distances - (85/100)
Combined with the illusion of transparency and self-anchoring (the tendency to model other minds as though they were slightly modified versions of oneself), I think this explains a lot about the legendary difficulty most scientists have in communicating with a lay audience—or even communicating with scientists from other disciplines. When I observe failures of explanation, I usually see the explainer taking one step back, when they need to take two or more steps back. Or listeners assume that things should be visible in one step, when they take two or more steps to explain. Both sides act as if they expect very short inferential distances from universal knowledge to any new knowledge.
A tad bit of black-and-white thinking detected.
This is primarily relevant to the problem of researchers in separated disciplines trying to communicate their research to each other, or from researchers to the public. The main problem is that most educated people underestimate how much they need to explain something to someone else to get them to understand.
There’s a small subtlety that seems off to me here. This really matters in a “debate-like” atmosphere, as opposed to a purely educational one, and I think this is where Yudkowsky himself (years after writing this) encountered the most friction. So there may be other processes at play here leading up to these observations.
Mysterious Answers - (1295/1600)
Fake Explanations - (70/100)
Consider the student who frantically stammers, “Eh, maybe because of the heat conduction and so?” I ask: Is this answer a proper belief? The words are easily enough professed—said in a loud, emphatic voice. But do the words actually control anticipation?
This is where I would say that "self-help"-optimized pedagogy and "debate"-optimized pedagogy interact and are not entirely compatible.
I’m subtracting points because I think that in a semi-adversarial situation between humans (regardless of the final outcome intended by the adversarial party, in this case the teacher), it is not quite epistemically accurate to blame the students entirely for their mistakes.
Yudkowsky “blames the victim” (including himself) more than once throughout these chapters, and I do have to subtract several points here and there for this.
Guessing The Teacher's Password - (85/100)
This is, however, a good explanation for where people develop the habit discussed in the previous chapter.
There is an instinctive tendency to think that if a physicist says “light is made of waves,” and the teacher says “What is light made of?” and the student says “Waves!”, then the student has made a true statement. That’s only fair, right? We accept “waves” as a correct answer from the physicist; wouldn’t it be unfair to reject it from the student? Surely, the answer “Waves!” is either true or false, right?
I think it's also subtly wrong, unfortunately, but just slightly: "made of waves" can be a hypothesis, but only to the extent that the words map to real things. It's not always said in the context of pure memorization. Another question: how badly do most schools emphasize pure memorization like this? That's a familiar complaint, but better schools probably do this less.
Science As Attire - (75/100)
I encounter people who very definitely believe in evolution, who sneer at the folly of creationists. And yet they have no idea of what the theory of evolutionary biology permits and prohibits. They’ll talk about “the next step in the evolution of humanity,” as if natural selection got here by following a plan. Or even worse, they’ll talk about something completely outside the domain of evolutionary biology, like an improved design for computer chips, or corporations splitting, or humans uploading themselves into computers, and they’ll call that “evolution.” If evolutionary biology could cover that, it could cover anything.
This is “God did it!” recast as “Science did it!” Yes, okay, this probably happens. How much utility does noticing it give the rationality practitioner? Probably above zero, to the extent that it un-gaslights them about the phenomenon. Once again, we have an observation that may point to adversarial politics under the hood.
Things like “God did it!” became more noticeable once Science appeared to be engaged in a battle with Religion. Before then, people may not have disagreed that “God did it” per se, but they may also have been rather agnostic about exactly how God did it.
Fake Causality - (65/100)
Of course, one didn’t use phlogiston theory to predict the outcome of a chemical transformation. You looked at the result first, then you used phlogiston theory to explain it. It’s not that phlogiston theorists predicted a flame would extinguish in a closed container; rather they lit a flame in a container, watched it go out, and then said, “The air must have become saturated with phlogiston.” You couldn’t even use phlogiston theory to say what you ought not to see; it could explain everything.
This was an earlier age of science. For a long time, no one realized there was a problem. Fake explanations don’t feel fake. That’s what makes them dangerous.
Before modern technology, people had to make do with organizing the observations of their five senses, using whatever symbols they had available to communicate them.
Observations therefore consisted mainly of aesthetic textures, shapes, and flavors. To the extent one could mechanically describe how these things interacted, the descriptions would have had to be crude and rudimentary. But not necessarily useless.
By treating the first and the second sentences from the quote above with some moderate degree of skepticism, I feel I am only doing what Yudkowsky has asked me to do throughout this book.
Semantic Stopsigns - (75/100)
Jonathan Wallace suggested that “God!” functions as a semantic stopsign—that it isn’t a propositional assertion, so much as a cognitive traffic signal: do not think past this point. Saying “God!” doesn’t so much resolve the paradox, as put up a cognitive traffic signal to halt the obvious continuation of the question-and-answer chain.
Same score as “Science as Attire” for basically similar reasons.
Mysterious Answers To Mysterious Questions - (85/100)
But the deeper failure is supposing that an answer can be mysterious. If a phenomenon feels mysterious, that is a fact about our state of knowledge, not a fact about the phenomenon itself. The vitalists saw a mysterious gap in their knowledge, and postulated a mysterious stuff that plugged the gap. In doing so, they mixed up the map with the territory. All confusion and bewilderment exist in the mind, not in encapsulated substances.
Okay, here he states somewhat of the thesis of this book: “People tend to plug mysterious gaps in their knowledge with mysterious stuff.”
So, one infers, the aesthetic texture “mysterious” is observed in a question, and, from somewhere, an answer is generated with that same aesthetic.
This is the opposite of “noticing confusion (and reducing it)” so one assumes that if you aren’t doing that, you’d be doing this instead.
But why are you doing this instead?
The Futility of Emergence - (90/100)
Taken literally, that description fits every phenomenon in our universe above the level of individual quarks, which is part of the problem. Imagine pointing to a market crash and saying “It’s not a quark!” Does that feel like an explanation? No? Then neither should saying “It’s an emergent phenomenon!”
An example of a mysterious answer. Apparently, some scientists defend the concept of emergence as a counter-argument to reductionism.
Say Not "Complexity" - (80/100)
I said, “Complexity should never be a goal in itself. You may need to use a particular algorithm that adds some amount of complexity, but complexity for the sake of complexity just makes things harder.” (I was thinking of all the people whom I had heard advocating that the Internet would “wake up” and become an AI when it became “sufficiently complex.”)
I think stating my reasons for why I took a few points here would be somewhat redundant.
However, I did give it less than the previous one, and perhaps I should try to articulate why: I think it’s hard to map this mistake to anything similar in my own mental map. So in order to actually correct this error, I would have to identify a similar structure in my own mind.
My inability to do that could be due to several reasons. However, I would slightly prefer that more effort be made to identify the root causes when analyzing mistakes (rather than only stating better principles), so that the mistake itself can be turned into a better principle.
Positive Bias: Look Into the Dark - (90/100)
The study was called “On the failure to eliminate hypotheses in a conceptual task.” Subjects who attempt the 2-4-6 task usually try to generate positive examples, rather than negative examples—they apply the hypothetical rule to generate a representative instance, and see if it is labeled “Yes.”
This cognitive phenomenon is usually lumped in with “confirmation bias.” However, it seems to me that the phenomenon of trying to test positive rather than negative examples, ought to be distinguished from the phenomenon of trying to preserve the belief you started with. “Positive bias” is sometimes used as a synonym for “confirmation bias,” and fits this particular flaw much better.
This essay and the next are more interesting: this is a concrete task. However, it is also another semi-adversarial task. So I’m deducting a couple of points, though not too many, precisely because the task is concrete.
I ran this problem on ChatGPT, and it got the right answer using three guesses that all were labeled “yes.” It actually did not test incrementing by two at all. So that’s interesting. I would like to run a larger test with ChatGPT if possible someday, to see if there’s any evidence of “positive bias” in it as well.
How you choose to test hypotheses depends on your ability to generate them, as well as on your prior over hypotheses. If you were aware that the experiment might be designed to deceive you, you would presumably be far less inclined to test lazily.
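As a toy illustration of the 2-4-6 task (my own sketch; the rules are the standard ones from the essay), positive-only testing never distinguishes the subject’s hypothesis from the experimenter’s broader rule:

```python
def true_rule(a, b, c):
    """The experimenter's actual rule: any strictly ascending triplet."""
    return a < b < c

def my_hypothesis(a, b, c):
    """The subject's guess: counts up by two each time."""
    return b == a + 2 and c == b + 2

# Positive testing: probes chosen to fit my hypothesis. Every one comes
# back "Yes," which feels like confirmation but discriminates nothing.
for probe in [(2, 4, 6), (8, 10, 12), (100, 102, 104)]:
    print(probe, true_rule(*probe))  # all True

# Negative testing: a probe my hypothesis forbids. I expect "No"...
print((1, 2, 3), true_rule(1, 2, 3))  # True -- a "Yes"!
# ...so "counts up by two" is falsified. Only this kind of probe
# could have falsified it.
```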
Lawful Uncertainty - (80/100)
What subjects tended to do instead, however, was match probabilities—that is, predict the more probable event with the relative frequency with which it occurred. For example, subjects tended to predict 70% of the time that the blue card would occur and 30% of the time that the red card would occur. Such a strategy yields a 58% success rate, because the subjects are correct 70% of the time when the blue card occurs (which happens with probability .70) and 30% of the time when the red card occurs (which happens with probability .30); (.70×.70) + (.30×.30) = .58.
Yudkowsky says there is a "deeper flaw" going on here.
I wouldn’t fault a subject for continuing to invent hypotheses—how could they know the sequence is truly beyond their ability to predict? But I would fault a subject for betting on the guesses, when this wasn’t necessary to gather information, and literally hundreds of earlier guesses had been disconfirmed.
People see a mix of mostly blue cards with some red, and suppose that the optimal betting strategy must be a mix of mostly blue cards with some red.
It is a counterintuitive idea that, given incomplete information, the optimal betting strategy does not resemble a typical sequence of cards.
People can't say "0.7 blue, 0.3 red" as their guess, but one suspects that was basically what they were trying to do. Knowing that you must guess either one or the other each round, the optimal guess is the more frequent one each time.
And so there are not many rationalists, for most who perceive a chaotic world will try to fight chaos with chaos.
Interestingly, a similar problem comes up in machine learning. There is a large variety of scoring rules - “loss functions,” they may also be called - and which one you optimize can change what the model’s best output is.
For example, in this problem, we have the base rates p(blue) = 0.7, p(red) = 0.3. And so, it seems the best model overall should simply just learn and output these numbers.
That would be true for minimizing cross-entropy loss, but not for maximizing predictive accuracy! (L(x) = -0.7 log(x) - 0.3 log(1-x); setting dL/dx = -0.7/x + 0.3/(1-x) = 0 gives x = 0.7.)
So it was possible for these poor suckers to be both righter and wronger. But we judge them nonetheless.
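A quick simulation (my own sketch) makes both points: probability matching lands near 58% accuracy, always guessing the majority color lands near 70%, and yet 0.7 is exactly the probability a model should report under cross-entropy loss:

```python
import random

random.seed(0)
N = 100_000
cards = ["blue" if random.random() < 0.7 else "red" for _ in range(N)]

# Probability matching: guess blue 70% of the time, red 30% of the time.
matching_hits = sum(
    ("blue" if random.random() < 0.7 else "red") == card for card in cards
)

# Majority strategy: always guess blue.
majority_hits = sum(card == "blue" for card in cards)

print(f"probability matching: {matching_hits / N:.3f}")  # ~0.58
print(f"always guess blue:    {majority_hits / N:.3f}")  # ~0.70

# The cross-entropy minimizer is still x = 0.7 (per the derivative above);
# the divergence only appears once each round forces a hard guess.
```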
My Wild and Reckless Youth - (80/100)
As a Traditional Rationalist, the young Eliezer was careful to ensure that his Mysterious Answer made a bold prediction of future experience. Namely, I expected future neurologists to discover that neurons were exploiting quantum gravity, a la Sir Roger Penrose. This required neurons to maintain a certain degree of quantum coherence, which was something you could look for, and find or not find. Either you observe that or you don’t, right?
When I think about how my younger self very carefully followed the rules of Traditional Rationality in the course of getting the answer wrong, it sheds light on the question of why people who call themselves “rationalists” do not rule the world. You need one whole hell of a lot of rationality before it does anything but lead you into new and interesting mistakes.
I guess the question is: Is this hyperbolic? If so, does that matter? I can’t say I’m totally decided on that yet. Otherwise, it makes essentially the same point as the other essays.
I think the interesting question throughout this book is why people are inclined to feel like “quantum” actually provides an answer to the problem of consciousness. Yudkowsky says it simply fills a mysterious gap with something mysterious.
I don’t actually remember a time when I ever assigned much credence to the idea of consciousness having some relationship to quantum theory. Perhaps the furthest it ever got was when I was considering the “many-minds” interpretation (a variant of many-worlds). In some hand-wavy, extremely vague sort of way, perhaps we are each literally a “world,” so to speak; that would be why it seems like we are one observa-verse, and why I don’t experience the whole universe all at once, only from one vantage point. We do know that some physicists, at some point, did believe “observations” were kind of “special” in quantum theory - in terms of systems interacting with other systems, collapsing the wave-function, and that sort of thing.
This sort of thing has never gotten further with me than “weakly tantalizing,” however, because it doesn’t explain much at all! That said, it doesn’t feel tantalizing because it is a mysterious answer. It feels weakly tantalizing because it hints at the possibility of reducing confusion by several “microns” of understanding-units.
Failing To Learn From History - (75/100)
My younger self did not realize that solving a mystery should make it feel less confusing.
This is important and true.
I thought the lesson of history was that astrologers and alchemists and vitalists had an innate character flaw, a tendency toward mysterianism, which led them to come up with mysterious explanations for non-mysterious subjects. But surely, if a phenomenon really was very weird, a weird explanation might be in order?
It was only afterward, when I began to see the mundane structure inside the mystery, that I realized whose shoes I was standing in. Only then did I realize how reasonable vitalism had seemed at the time, how surprising and embarrassing had been the universe’s reply of, “Life is mundane, and does not need a weird explanation.”
I’m not actually sure that his relative certainty about history here is all that justified. Again, this is a subtle point of attitude. I do not know for sure whether astrology, alchemy, and vitalism did absolutely nothing at their “top form.” Our understanding of these subjects today is that they were complete hokum, and did nothing whatsoever of value (possibly even negative value). So looking back at history through this lens would necessarily demand an answer of the form “maybe people just liked being confused.”
We also have another option, which is that people liked to believe in literal “magic” because it made them feel cool and special, like a religion or secret society. This is a different motivation, which needs to be taken into account.
Making History Available - (90/100)
So the next time you doubt the strangeness of the future, remember how you were born in a hunter-gatherer tribe ten thousand years ago, when no one knew of Science at all. Remember how you were shocked, to the depths of your being, when Science explained the great and terrible sacred mysteries that you once revered so highly. Remember how you once believed that you could fly by eating the right mushrooms, and then you accepted with disappointment that you would never fly, and then you flew. Remember how you had always thought that slavery was right and proper, and then you changed your mind. Don’t imagine how you could have predicted the change, for that is amnesia. Remember that, in fact, you did not guess. Remember how, century after century, the world changed in ways you did not guess.
Our civilization probably is more rational than previous ones.
Explain/Worship/Ignore? - (80/100)
When it rains, and you don’t know why, you have several options. First, you could simply not ask why—not follow up on the question, or never think of the question in the first place. This is the Ignore command, which the bearded wise man originally selected. Second, you could try to devise some sort of explanation, the Explain command, as the bearded man did in response to your first question. Third, you could enjoy the sensation of mysteriousness—the Worship command.
“How long has this leftover food been sitting in the fridge?” I ask.
His voice drops to a whisper. “From the before time. From the long long ago.”
Science As Curiosity-Stopper - (85/100)
Look at yourself in the mirror. Do you know what you’re looking at? Do you know what looks out from behind your eyes? Do you know what you are? Some of that answer Science knows, and some of it Science does not. But why should that distinction matter to your curiosity, if you don’t know?
Not much to say about this one, other than it advises one to be curious.
Truly Part of You - (90/100)
As Donald Davidson observes, if you believe that “beavers” live in deserts, are pure white in color, and weigh 300 pounds when adult, then you do not have any beliefs about beavers, true or false. Your belief about “beavers” is not right enough to be wrong. If you don’t have enough experience to regenerate beliefs when they are deleted, then do you have enough experience to connect that belief to anything at all? Wittgenstein: “A wheel that can be turned though nothing else moves with it, is not part of the mechanism.”
Strive to make yourself the source of every thought worth thinking. If the thought originally came from outside, make sure it comes from inside as well. Continually ask yourself: “How would I regenerate the thought if it were deleted?” When you have an answer, imagine that knowledge being deleted as well. And when you find a fountain, see what else it can pour.
You should be able to regenerate any part of your model from the rest of it, if that part were deleted.