So my next thought on the Blogging A to Z Challenge for 2026 was, if I failed at that challenge but succeeded at my National Novel Writing Month challenge to write 50,000 more words on THE LEGACY OF THE EXTRA CREDIT PROJECT, what would happen?
Why, I'd notch one more victory on the above diagram. Specifically, that dark line going down and to the right would tilt up until it intersected the convergence of lines above - which would put me at something like 290,000 words on LEGACY OF THE EXTRA CREDIT PROJECT.
Nanowrimo, for those just joining this blog, is a challenge to write 50,000 words of a new novel in the month of November. It expanded into "Camp Nano" challenges in April and July, and since I write long novels, I personally use it to add words to manuscripts in progress.
About ten years ago, I started doing the April and July Camp Nanos, since I wanted to finish my books before I die, and last December, I started doing the challenge full time until the end of THE LEGACY OF THE EXTRA CREDIT PROJECT ... which is turning into a damn trilogy.
So far, approximately 10% of the words I've ever written in Nano were on this one project. I have a lot of work to do - I'm just finishing Novella 5 out of a projected 10-12 novellas, and boy does it need editing - but I'm very proud of some of the work that I've done here.
So if you don't see me blogging, that's because I'm writing, drawing or coding.
Mostly writing.
-the Centaur
Pictured: My "the reason you're doing this" shelf - a collection of genre toys and personal keepsakes I use to remind me of why I work. Also pictured: my Nano yearly stats for the past quarter century or so, and the plaque I got from the now-defunct Nano organization when I cracked 2 million words.
So, recently, I asked myself the question: what would be the outcome if I failed to complete Blogging A to Z in April of 2026, but I successfully paid my taxes and filed for an extension in a timely fashion?
The answer: I'd have to pay less of a penalty for NOT getting my taxes done on time.
Having worked in a startup, at a large company, and as a contractor - and with me and my wife owning five small businesses between us - our taxes are ... complicated. Add to that the fact that many of the people we interact with don't get us forms until I've already left to attend the Game Developers Conference, and I end up getting them done late every year.
Every year, even though I have a formalized system for collecting receipts and a structured spreadsheet for collating information, there's SOME damn thing that requires me to spend hours extracting information from one program or website and reformatting it into tax-friendly data in another.
I thought I'd be done three weeks ago. Then last week. Then this weekend, when I set aside all of Sunday to "finish up". At 3AM or something, I gave up, deciding to scan the last forms in the morning.
BUT, this time I was right. There were only three or four things left to scan, the scanner worked, and the data was successfully uploaded to our accountant. We'd expanded our spreadsheet last year to enable me to compute what we owe - paying your tax is a prerequisite for filing an extension.
We paid the tax. Our accountants filed the extension. And then I went out to an impromptu dinner with friends, where we talked about writing.
It was a pretty good day.
-Anthony
Pictured: A custom cocktail at Select restaurant - a mezcal Old Fashioned, I believe.
Postscript: I forgot to say, Blogging A to Z will resume TOMORROW, after I'm caught up on other stuff.
Another key concept that I think is critically important for science and life is "getting traction." A lot of things we do as humans simply don't get us anywhere - for example, most work in philosophy. That may sound like I'm being snarky, and maybe I am, but it's a common trope that we've been discussing things like free will, the nature of time, and Zeno's paradox for thousands of years with no real resolution.
But the problem is that, contra Immanuel Kant, philosophy cannot be reduced to an enterprise that tries to answer "What can I know?", "What should I do?", "What can I hope?", and "What is a human being?" - though those questions are critically important to philosophy. Similarly, contra Ayn Rand, philosophy cannot be reduced to "Where am I?" (metaphysics), "How do I know?" (epistemology), and "What should I do?" (ethics) - though these disciplines are critically important to philosophy.
No, philosophy's job is to map the options of thought. Perennial questions like free will remain perennial because there are many ways to think about the problem and a responsible philosopher won't just attempt to "solve" it, they'll outline the different ways that we can think about it (as Daniel Dennett tried to do in Elbow Room: The Varieties of Free Will Worth Having). Like Saint Thomas Aquinas, I believe that you have free will whether you want it or not - though my argument is based on the Halting Problem - but even Aquinas admits that if your definition of free will excludes the possibility of a mechanism by which the will works, then he can't help you. So even if we reached a definitive answer to the question of free will eight hundred years ago, modern treatments cannot resist revisiting the entirety of the argument.
Leaving us feeling like we're getting nowhere.
To make progress, we need some way of moving on - some way of selecting an idea as the right one. And that can't happen from within philosophy itself - not just because I argue that "solving" isn't its job, but because of a deeper problem that Ayn Rand calls the Primacy of Consciousness Fallacy - the idea that ideas are more important than reality. The way we think about problems does not change what is. For example, the Ship of Theseus is a famous "thought experiment in identity metaphysics" (according to Vision in the Marvel Universe) about a boat whose timbers are replaced one by one until nothing of the originals remains, raising the question: is it the same boat or not? There are strong reasons to say that it is, and that it isn't - but those are just options for thinking about it. It doesn't change the actual physical nature of the boat.
To get anywhere with these questions, we need to get evidence. To take a hypothetical example, if we were in a horror movie, and the fully-gutted Ship of Theseus started chasing people down to reclaim its lost timbers, we might start to suspect that it was, indeed, the same ship. Conversely, if we were in a science fiction movie, and no-one who went through a transporter ever remembered who they were, we might start to suspect that their identity was not preserved, and that a matter-energy scrambler was not a good way to transport people from point A to B no matter how much money it saved on the show's budget.
But these are hypotheticals. To really get anywhere with a real question - to get traction in the space of ideas that moves us from a set of options to a definitive answer - you need more than an argument that convinces yourself; you need to start looking for ways to get evidence that distinguishes between the options, evidence that can be shared with other people, or replicated by them, to help them make the same move.
You can see this clearly when looking at the philosophy of general relativity, which explores staggeringly speculative concepts like thunderbolts (fractures in spacetime that spread at the speed of light) and supertasks (performing infinite tasks like computing the digits of pi in one part of spacetime and reading them off in another, dilated part of spacetime, hoping to find that elusive last digit). These questions involve scenarios we can't set up and tasks we cannot perform, and it's difficult to see how they could be resolved.
But these mental explorations help us understand what directions to take in our scientific explorations. The philosopher Mach wondered whether a rotating object in an empty universe could really be said to spin. It's a challenge to set up an entire universe just to answer a hypothetical - but Mach's exploration of the problem helped Einstein formulate his theory of general relativity, which in turn had consequences that were tested by the scientist Eddington in a famous expedition. Eddington traveled to photograph a solar eclipse, which showed that starlight around the sun was bent the way Einstein predicted - in turn, giving us a probable answer to Mach's question that, yes, the object would rotate with respect to itself.
Getting traction is an important part of not just science but our everyday lives. I always get suspicious when I go to the doctor and they purport to make a diagnosis without running tests to verify whether they're right. Once, when my arm was broken and the bone plate was slow to heal, I went to a parade of doctors who failed to resolve the problem over a two-year period. Doctors at the SOAR group ordered a CAT scan, identified a gap in the bone, and scheduled an exploratory surgery, during which they found a suture left from the original surgery that had caused a bulge in the bone and the appearance of a gap. My arm was fine, and likely had been fine for two years - but the other doctors didn't find this out because they didn't run the test.
The necessity of getting traction is why, in programming, I hate nondeterministic builds (where sometimes the build works and sometimes it doesn't) and hate debugging heisenbugs (where sometimes the code fails and sometimes it doesn't). Stochastic failures - failures which happen randomly - lead you to try things over and over again, hoping to get different results. Doing something again and expecting different results may not be the definition of insanity, and Einstein certainly didn't say it, but it's not great, and it trains you to flail.
Once I encountered this as a real debugging issue - resolving a problem with a robotic device driver for a lidar sensor (a laser radar, used to tell how close objects were to the robot). I was frustrated and thrashing with non-repeatable bugs in my program, and eventually cracked out the manufacturer's diagnostic program to see if I had a bad sensor. But the manufacturer's diagnostic also had the same problems, on more than one lidar unit, and I realized that correctly working sensors of that make and model were actually unreliable when connected to the computer we were using!
So how did I get traction when I literally couldn't trust the data coming from the sensor?
With a spreadsheet.
For each variant of the program that I tried - the original, and various fixes - I ran the program ten times, counted the successes and failures, and entered them into my spreadsheet. It very quickly became apparent that the original program almost never worked, whereas the best of my fixes worked seventy percent of the time. Since our experimental robots frequently needed to be rebooted multiple times on startup to fix other race conditions, we had no problem shipping "seventy percent success" as an improvement over ten percent.
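If you want to replicate that kind of tally without a spreadsheet, a few lines of code will do it. Here's a minimal sketch, assuming hypothetical driver-variant commands and treating a zero exit code as success - not the actual driver code, just the shape of the harness:

```python
# Minimal sketch of a success-rate tally for a flaky program: run each
# variant N times, count successes, and print comparable numbers.
import subprocess

VARIANTS = ["driver_original", "driver_fix_a", "driver_fix_b"]  # hypothetical commands
TRIALS = 10

for variant in VARIANTS:
    successes = sum(
        subprocess.run([variant]).returncode == 0  # exit code 0 counts as a success
        for _ in range(TRIALS)
    )
    print(f"{variant}: {successes}/{TRIALS} succeeded")
```

The point isn't the code - it's that counting turns "sometimes it works" into numbers you can compare, which is exactly what got me unstuck.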
Getting traction is a key part of science, engineering, and life. We can even apply it to philosophy, if we ask ourselves whether there are actual facts that help us choose between the options, or whether there are values that we hold that lead us to prefer one option over the other. In fact, many of the best philosophers produced their greatest work by taking definitive stands on one or more philosophical questions and then pursuing the implications rigorously. Some would even argue that modern physics is a kind of natural philosophy which took the stance of materialism to its logical conclusion - and then started producing fantastic empirical results by building on that stance.
So what problems in your life could you improve on if you found a way to push off from where you are?
-the Centaur
Pictured: We're fixing our roof, so we have to protect our floor. This floorpaper is actually to help our interior repair team move equipment without damaging our hardwoods, and does not have anything to do with traction, regardless of whether it looks like it's something used for that purpose.
What is science? I think about this a lot. Before I broke my notebooks apart into fine-grained projects, I used to keep an entirely separate series of notebooks just for science, as opposed to my writing and sketching, and in each science notebook I'd attempt to redefine exactly what I thought science was.
So it might surprise you that I stopped listening to a psychology audiobook when the professor said (paraphrased): "Wilhelm Wundt defined psychology as the science of mind, thus dooming it to failure, because science is based on the study of phenomena that are public, repeatable and measurable."
I literally hit eject a few seconds after hearing that, and said aloud (paraphrased), "Now, this is what people mean when they say people in the soft sciences are trying too hard to emulate the hard sciences without understanding how the hard sciences work." (There may have been a few curse words in there as well.)
Now, I said I stopped listening a few seconds later - right around the point where the professor said "psychology is defined as the study of behavior" - because if the professor had that poor a grasp of what science is, and of how his own science is defined, how could I trust anything else that he would say?
My friends in science, psychology is defined as the study of mind and behavior. There are plenty of well-formed psychology experiments that can be conducted that have no overt behavior at all - for example, whether a visual image causes signs of recognition in the brain as detected by a brain scanner.
Furthermore, whether something is public or not has nothing to do with whether it can be the subject of science. Quarks are not public - they only exist in bound states - and if our theories of quarks are correct, their properties cannot be directly measured, only inferred from the behavior of particle aggregates.
Furthermore, whether something is repeatable or not is not a measure of whether it is scientific. That's an empirical question. We have reason to believe that the phenomena of genetic inheritance, thermal noise in materials, and radioactive decay are fundamentally random, and can only be predicted in the aggregate.
So what is science? Richard Feynman was fond of saying that science meant "the sole test of any idea is experiment" and I love modifying that to say "the sole test of any idea open to observation is experiment" because observation crystallizes the key distinction between mathematics, science, and speculation.
This brings us to the part I like most about the "public, repeatable, and measurable" part of the professor's definition, because while the subject of a science may not be measurable, there certainly has to be something that's actually observable, or you're dealing with mathematics or metaphysics.
And that brings us, in turn, to FOOM.
My definition of science is that it is the formal observational / operational method (FOOM) of bringing phenomena in the world in contact with modeling and experiment so those phenomena can be explained and our understanding of the phenomena in the world can be expanded. What does this mean?
Well, first off, the scientific method is formal. You may become like Thoreau and retire to a cabin to suck the marrow out of life, and you may learn a lot by doing that, and you might even write it down in a famous book that transforms many people's lives - but that's recounting your experience, and is not science.
The first step in science is observation of the world - actually, direct observation of phenomena in the world like Thoreau did - but what starts to transform it into science is formality. Formal means we bring to observation some kind of structure which enables us to collect a raw body of facts about a phenomenon.
For example, sleep researcher J. Allan Hobson recorded a dream journal over decades to collect a body of data about what occurs in dreams. Without a comparable body of data, it's impossible to say what "typically" happens in a dream, and we're reduced to sharing anecdotes.
But of course, the dream journal of one particular person who happened to be a sleep researcher might not be typical - and that's where the operational part of the method comes in. If the dream journal seemed to produce useful data, then researchers can start to develop protocols that analyze dreams more rigorously.
Formal observations that have been operationalized have been embedded in a set of procedures which enable the observations to be collected reliably - at which point we start calling them measurements (or more broadly evidence, if the observations are not very number-like, for example, narratives).
Science doesn't start with physicists building a two-mile-long particle accelerator to measure the charge of the electron. It starts(ish) with Ben Franklin doing some damfool thing with a kite, string, key and bottle, progresses through Michael Faraday having to instruct people on how to interpret his magnetic induction experiments, and ends with the discovery of the electron by J. J. Thomson a century and a half later.
By the time that happened, the operational methods were so good that Thomson got a read on the mass / charge ratio of the electron when he isolated it. But here's the thing: the electron itself, that is, the particle, was a discovery, not at all apparent in Franklin's time - when electricity was thought of as a fluid.
Now we have principled reasons to believe, based on public, repeatable measurements, that electrons are fundamental in the same sense that private, quantum mechanical, unmeasurable quarks are fundamental, and in the same sense that composite objects like protons, atoms, and psychology professors are not.
But we didn't get there by starting off with phenomena that were public, repeatable, and measurable. We started off wondering about a phenomenon as unpredictable as lightning, and by a centuries-long process of creating formal methods to turn observations into operationalizable measurements - which we used to craft experiments to explore the phenomena we discovered along the way, no matter how weird they were.
That's why I say that the formal observational / operational method, which I call "FOOM" in my conceptual lexicon, is the foundation we need to lay in order to subject the ideas we have to experiment. And that difficult process of reducing chaos to order while being open to surprises, to me, is the essence of science.
-the Centaur
Pictured: the Physics section of Moe's Books in Berkeley
Egalitarianism: all people are people, and deserve equal treatment under the law. Egalitarianism is the foundation of civilized society; without it, there are no standards to which appeal can be made, and what you have instead is not civilization, but institutionalized barbarism.
That's why, to me, egalitarianism is one of the most important principles after reason and benevolence. (You'll note I didn't say "rationality" there, because in my conceptual lexicon, logic, rationality, and reason refer to three increasingly sophisticated ways of thinking, and for most problems, rationality just doesn't cut it. But to see why, we'll have to wait until we get to the Ls or Rs). Even if we are making good choices, with good intent, if the system does not apply them to all people equally, we are still failing them.
Most of the problems we have in society ultimately come down to failures to implement egalitarianism. Royalty? Bigotry? Misogyny? Corruption? Oligarchy? Communism? Ultimately, all of these tools of oppression come down to the basic principle that there's one special group of people - a family, a race, a gender, an in-group, power-brokers, a party - who is ideally suited to making the rules for everyone else, and once that is established, money and power quickly start getting sucked into those old vampires.
This is why another concept that I'm fond of, "authorial endorsement," is relevant to a famous science fiction story, "Harrison Bergeron" by Kurt Vonnegut. In the story, everyone in the United States is "finally equal" in the far future because the United States Constitution dictates no-one can be better than anyone else: pretty people have to wear ugly masks, strong people have to wear weights, smart people have to wear concentration-destroying devices, and so on, and so forth, ad absurdum.
People who should know better claim this story is something called "satire," and Vonnegut himself liked to pretend that his story didn't mean exactly what it seemed to mean, but for the rest of us, the story endorses the conclusion that equality under the law means equality of outcomes. Typically, that either appeals to the bigot in you who's offended by the idea that the law should treat everyone equally - and the people who I knew growing up who liked that interpretation of the story did indeed grow up to be bigots - or else you quickly realize that the story is aggressively missing the point of egalitarianism.
Equality under the law can't mean equality of outcomes. It can't. Not everyone starts in the same place; it isn't even possible to define a uniform frame of reference from which everything could be viewed in the same way. The rules of relativity are inescapable. The only way to ensure equality of outcomes under the law is to treat people differently if they start in different places. And that's not egalitarianism.
We need one law for all people. We need to treat all people as people. That means trying neither to enshrine differences nor to erase them; neither to privilege one group of people nor to erase others. It's fricking hard. But it's what makes our civilization a civilized place for everyone.
Some people don't like it. I know quite a few. Many of those seem offended if the world simply contains people different than they want to see, and some of them even seem outraged if the world makes reasonable accommodations for people whose needs are different. Trying to pretend different people all have the same needs, or that we can ignore people who are minorities, also is not egalitarianism: it's putting your thumb on the scale so that some "default" group gets most of the resources.
Under governments powered by tax dollars, egalitarianism involves not taking too much from anyone so that they can't live, taking more from those who can give more without constraining their freedom of action, and giving to people based on their needs, not on their membership in a privileged group. Sometimes that means giving from the wealthy to help the needy; sometimes a particular needy person can't get a break or a wealthy person gets a break that they don't need, because that's the way the law works out, and unless it's a matter within our personal discretion, we can't put our thumbs on the scale. Again, it isn't easy: we just have to keep trying and trying again until we get the system right ... or find that new exception.
But trying to treat all people like people is what makes our civilization worth living in.
"Discretion" is sometimes defined as "the freedom to decide," as in "a judge exercising their discretion" or the related sense of "speaking with care," as in "a confidant's discretion can be relied upon." These are closely related, in my mind, to "discernment", the ability to judge well, a word which has been co-opted in Christian circles to refer to examining things without immediate judgment to obtain spiritual guidance.
But when I say discretion, I mean taking each situation case by case and applying one's best judgment without relying on pre-decided rules, as a method for dealing with the inevitable limitations placed on us by Gödel's Incompleteness Theorem - or, in plain English, exercising your judgment because rules will fail you.
A theorem is something that's always true whether we want it to be true or not. "Two plus two is four", believe it or not, is a theorem, communicating the idea that A(S(S(0)),S(S(0))) - in English, "plus two two" - is S(S(S(S(0)))) - in English, "four" - because of the definition of A(,) - in English "plus". There are times when the theorem isn't appropriate - for example, trying to "add" merging clouds - but you cannot escape it.
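For the notation-curious, here's a minimal sketch of that exact statement in the Lean theorem prover - the names Peano, add, two, and four are my own labels for the 0, S, and A(,) above, not anything standard:

```lean
-- Peano numbers: 0 and the successor function S
inductive Peano where
  | zero : Peano                 -- 0
  | succ : Peano → Peano         -- S(n)

open Peano

-- A(x, y): addition, defined by recursion on the second argument
def add : Peano → Peano → Peano
  | x, zero   => x
  | x, succ y => succ (add x y)

def two  : Peano := succ (succ zero)               -- S(S(0))
def four : Peano := succ (succ (succ (succ zero))) -- S(S(S(S(0))))

-- The theorem: A(S(S(0)), S(S(0))) = S(S(S(S(0)))), true by unfolding the definitions
theorem two_plus_two : add two two = four := rfl
```

The proof is just `rfl` - "this follows from the definitions" - which is exactly the sense in which you cannot escape it.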
The fancy-sounding concept "Gödel's Incompleteness Theorem" is a theorem, and in English it means that rules will always fail you by being wrong or incomplete. Its formal statement is about the "incompleteness" of any system complex enough to do arithmetic, and its unprovable consistency. The mathy version of it runs a dozen pages, but shelves upon shelves of textbooks have been written on its implications.
But in practical terms it means that no matter how complex the set of rules you create, either that system must inevitably fail to cover some case, or it must contain mistakes, or it must be so trivial as to be useless. Which means that no one - no priest nor politician nor administrator nor ordinary person trying to manage their own life - can come up with a set of rules that will always work.
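The Halting Problem - the computational cousin of Gödel's result - makes the same point, and can even be sketched in a few lines of code. A minimal sketch, assuming a hypothetical halts oracle, the whole point being that no such oracle can actually be written:

```python
# Sketch of the diagonal argument: suppose halts(f) were a perfect rule
# deciding whether calling f() eventually halts. The body below is a
# placeholder for ANY rule you might propose - the refutation is the same.
def halts(f) -> bool:
    return True  # hypothetical oracle; any fixed rule here gets refuted

def contrarian():
    if halts(contrarian):  # ask the rule about ourselves...
        while True:        # ...and if it says "halts," loop forever
            pass
    # ...and if it says "loops forever," halt immediately

# Whatever halts() answers about contrarian, it is wrong. No set of rules,
# however clever, covers every case. (Don't actually call contrarian().)
```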
That means we must always exercise our discretion. This is a dangerous thing. Christian theologians love to argue that people are prone to rationalize - to come up with explanations that justify their misbehavior - but that danger does not prevent the rules those theologians come up with from failing.
I myself am fond of saying that in a world with imperfect information, decisions cannot be made reliably based on the information that we have in front of us, and that we have to rely on policies that extend beyond those immediate situations; but even those policies will inevitably fail at times.
But the possibility of failure does not absolve us from the responsibility of trying. To do the best we can in the world, we need to think back - and think ahead - and come up with the best rules that we can, so we don't get fooled by our own desires or the appearance of the situation in the moment; but in the moment, we must also apply our discretion, keeping a careful eye out for conditions that undermine the assumptions behind our clever rules and force us back to the drawing board for a new look.
This process of exercising discretion is fundamentally human. I don't mean the emotional statement "oh, this is a basic part of the human experience" - though it is that - but actually a more technical statement of how human cognition works: it's a part of how we think called universal subgoaling and chunking.
Normally when we think we're actually deploying many learned rules extremely swiftly to make progress, an experience of flow that we find effortless. But when the cognitive engines we call our "minds" reach an "impasse" where we don't know how to move forward on our goals, we generate new "subgoals" to resolve those impasses, marshalling all the knowledge we have to try to solve the problem. It's a difficult, effortful process, prone to failure; but if we do succeed, our brains store this solution as a new "chunk", a new if-then rule which we can use to think more swiftly and effectively in the future.
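To make that concrete, here's a toy sketch of the loop in code - not the actual mechanism of any cognitive architecture, just the shape of it, with slow_deliberate_search standing in for the whole effortful subgoaling process:

```python
# A toy sketch of impasse-driven chunking: try fast learned rules first;
# on an impasse, fall back to slow, effortful problem solving, then cache
# the solution as a new if-then rule (a "chunk") for next time.
chunks: dict[str, str] = {}  # learned rules: situation -> action

def slow_deliberate_search(situation: str) -> str:
    """Stand-in for subgoaling: marshal everything we know to solve it."""
    return f"solution-for-{situation}"

def act(situation: str) -> str:
    if situation in chunks:                        # flow: a learned rule fires swiftly
        return chunks[situation]
    solution = slow_deliberate_search(situation)   # impasse: generate a subgoal and solve
    chunks[situation] = solution                   # chunking: store the new rule
    return solution

act("novel-problem")  # effortful the first time...
act("novel-problem")  # ...swift and automatic ever after
```

The first call is slow and deliberate; every later call on the same situation is fast - which is both the power of chunking and, as we'll see in a moment, its danger.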
[As an aside, one of the actual differences between modern "AI" and human thought --- or, more properly, between modern LLMs and so-called "cognitive architectures" modeled on actual human thinking --- is that the LLMs are explicitly not set up to do this. Their learning process is much more akin to acquiring a lot of crystallized rules, or to manipulating those rules in a limited workspace in something like subgoaling, but they generally are not set up to do chunking. In a way, we don't want them to; we don't want chunks from my chat session leaking into your session, giving you my answers. But diving into how almost every critique you've ever heard of modern "AI" is a load of dingo's kidneys would be too much of a digression.]
In a sense, we as people and systems are often not as smart as our own brains trying to solve problems, relying more on fixed rules, societal norms, past traditions, and unjustified feelings than on those brains, which have the advantage of being able to immediately tell whether their if-then rules are failing to give us the answers we need (whether those are the right answers is another question). It takes a deliberate effort to make sure we're not running on autopilot, and all too often, we stick to the rules for no reason.
Don't do that. Look at the situation; exercise your discretion.
You, and the world, will be better off if you do.
-the Centaur
Pictured: Discretion is the better part of valor when spending a vacation with my wife in a town with a lot of good vegan food options. After several days of overeating ... I had a salad for dinner tonight at Craft Roots, because I knew my wife was going to order chocolate mousse with ice cream for dessert.
SO I was looking at the rules of the Blogging A to Z challenge and came to interpret them to mean that all the posts should be organized around a topic. Reading the rules more closely, I don't think that's the case: "You don't have to change your format of what you normally write, just come up with topics that correspond with the letter of the day." Regardless, I know some people come up with a unifying theme, and I did so:
My conceptual library - or, more particularly, conceptual library curation.
Many great thinkers had to develop their own language to help them articulate their ideas - Immanuel Kant, Ayn Rand, and so on. I don't know that I'm a great thinker, but I frequently find myself relying on a private vocabulary of ideas that help me understand the world. Some of these I've gotten from other people - like "autistic inertia" and "bullshit" - whereas others, like the "Gaimannian Landscape" and "value collapse" are my own inventions.
Others, unfortunately, I can't share - such as the ideal C entry for today, a phenomenon we might call "prestrangulation," or strangling a project by drowning it in unnecessary prerequisites. You'll note that's not the actual word, which starts with a C - but the private word I use for prestrangulation is based on the name of someone I know who does it, and, out of respect, I'm NOT going to shame them publicly by coining a term based on their name and blogging about how bad that behavior is.
Instead, you get this post, about the importance of articulating your own conceptual library, acknowledging or tracking down where those concepts came from, and challenging those concepts periodically to make sure they still make sense.
Some of my most cherished ideas don't work. For example, one idea I picked up is that "you shouldn't critique during a brainstorming session". As it turns out, this idea, while it goes back far in brainstorming research, is at least partially bunk - totally off-the-wall ideas can derail brainstorming, so a limited amount of criticism can actually be helpful. Other ideas I've had on my own similarly didn't stand up to scrutiny.
One way that you can challenge your own ideas is to name them, to attempt to define them more precisely, and once you've done so, start seeking evidence that supports them - or contradicts them.
Contra what you may have heard from naive takes about the scientific method, a scientist should not start their investigation by trying to prove an idea wrong. First you have to have SOME evidence that an idea MIGHT be right, or you'll end up wasting your time trying to refute every idle speculation that you have.
But, conversely, you are the easiest person to fool, and once you have an idea that you think might be true, it's easy to get caught in confirmation bias, where you only look for confirming evidence and don't look for evidence that contradicts your view.
So, as part of that exercise, I hope to spend a little time this month not just blogging ideas, but subjecting them to a little bit of criticism.
-the Centaur
Pictured: birbs, at Point Lobos, who happened to make a shape like a "C".
One of the worst things in the world - not the things that feel bad to me, but things that are bad for others - is the pernicious phenomenon of bullshit - and I don't mean crap from a bull, but the kind of crap that comes out of people's mouths when they're trying to sell people a load of crap.
This kind of bullshit is a particularly pernicious kind of lying - a kind of lying so bad that philosophers aren't even sure that it's lying at all. A liar, after all, envisions a model of a world better for them than the one we live in, and deliberately tries to falsely impress that model into the minds of their hearers.
But a bullshitter doesn't care about true or false at all: they just care about creating an impression. I recall running into a bullshitter at a friend's party once who claimed "there are no Native American vegetables" and when I later came back with a list (it's a long list) he blew this off as irrelevant.
Because he wasn't concerned with the truth. He was concerned with holding court. He was a loud, showy, know-nothing know-it-all, who was constantly trying to find ways to dominate the conversation at this particular social grouping. He didn't care about the facts - he just cared about being the center of attention.
I didn't care to go to too many of those parties. :-)
It should be obvious that bullshit has corrupted American politics. While both ends of the political spectrum can fall victim to it, our current leadership is bullshitting dangerously about everything from the legal justification for their illegal actions to the strategy behind their irresponsible wars.
And the bullshitters I know personally have given away the game on this. They have repeatedly said things like, "the only reason you're raising that objection is that you oppose what I'm trying to do". No, no, my friend, you have it backward: we're opposing what you want to do because of those objections.
We do not live in a world defined by different movies running in different people's heads.
We live in exactly one shared world, where there are facts of the matter to which appeal can be made - exactly one shared reality which we cannot fake in any way whatever, and if you try, sooner or later, it will bite you.
-the Centaur
Pictured: A path in our yard choked by invasive succulent plants. They grew from cuttings which we got from local plants that thrived in our dry climate; we didn't know they were invasive when we planted them. I guess they showed us. Invoke what symbolism you can from this about bullshit in public discourse.
Two of the "worst things in the world" for me are writer's block and autistic inertia. These aren't objectively the "worst things," like panic attacks, ear infections or failure to use the Oxford comma, but they are some of the things that feel the worst to me in the moment.
Writer's block, the inability to write, can get so bad it can drive people to suicide - notably, Ernest Hemingway - and I myself remember lying on the floor of a research office at Google for hours, unable to start work on a paper I wanted to write, knew how to write, and had already written the outline for.
I eventually wrote that paper (and if I recall correctly, it was published here) but it is true to this day that I can be writing gangbusters on one project (240,000 words on Legacy of the Extra Credit Project) but can get completely stymied on switching gears to another (such as a Prosocial Robotics paper back in January).
That's why, for me, I suspect that writer's block is a subspecies of autistic inertia. Autistic inertia is a phenomenon documented in the autistic community where people "on the spectrum" like me have a marked difficulty starting or stopping tasks.
For example, blogging.
For me, this applies not just to technical things, like writing scientific papers, but to anything that involves interacting with people, like social media or even just sending emails. Recently, I had trouble sending out social media posts for the Seventh Annual Embodied AI Workshop even though I'd already drafted the text.
That increased social factor makes me suspect that my autistic inertia is also tied to my social anxiety disorder - that weird miscalibration that I have which makes many simple social situations difficult to initiate and stressful while they're happening.
Regardless of the cause, I often find myself unable to start tasks that I want to start, or unable to stop work on something that I feel that I should put aside. This can mean that one task, like, say, writing a novel, can steamroll a variety of other tasks, like, say, blogging.
So I thought I'd share that: one way to overcome the worst thing in the world is to find a structure that forces you to get onto the path of conquering it.
-the Centaur
Pictured: the bathroom in our San Jose home, which we had renovated during the pandemic, and which my wife painted after the pandemic, but in which we dallied for a long time over installing towel racks. On our most recent trip out here, I got tired of stacking bath towels atop the toilet (gross!) during the shower, and forced myself to (a) track down the fixtures we used in the bathroom (discontinued!), (b) seek out an alternative, (c) buy them, and (d) install them before my wife was scheduled to arrive. And when I was done, I asked ...