Posts tagged as “Philosophy”

[blogging a to z 2026]: g is for getting traction

centaur 0

Another key concept that I think is critically important for science and life is "getting traction." A lot of things we do as humans simply don't get us anywhere - for example, most work in philosophy. That may sound like I'm being snarky, and maybe I am, but it's a common trope that we've been discussing things like free will, the nature of time, and Zeno's paradox for thousands of years with no real resolution.

But the problem is that, contra Immanuel Kant, philosophy cannot be reduced to an enterprise that tries to answer "What can I know?" "What should I do?", "What can I hope?" and "What is a human being?" - though those questions are critically important to philosophy. Similarly, contra Ayn Rand, philosophy cannot be reduced to "Where am I?" (metaphysics), "How do I know?" (epistemology), and "What should I do?" (ethics) - though these disciplines are critically important to philosophy.

No, philosophy's job is to map the options of thought. Perennial questions like free will remain perennial because there are many ways to think about the problem and a responsible philosopher won't just attempt to "solve" it, they'll outline the different ways that we can think about it (as Daniel Dennett tried to do in Elbow Room: The Varieties of Free Will Worth Having). Like Saint Thomas Aquinas, I believe that you have free will whether you want it or not - though my argument is based on the Halting Problem - but even Aquinas admits that if your definition of free will excludes the possibility of a mechanism by which the will works, then he can't help you. So even if we reached a definitive answer to the question of free will eight hundred years ago, modern treatments cannot resist revisiting the entirety of the argument.

Leaving us feeling like we're getting nowhere.

To make progress, we need some way of moving on - some way of selecting an idea as the right one. And that can't happen from within philosophy itself - not just because I argue that "solving" isn't its job, but because of a deeper problem that Ayn Rand calls the Primacy of Consciousness Fallacy - the idea that ideas are more important than reality. The way we think about problems does not change what is. For example, the Ship of Theseus is a famous "thought experiment in identity metaphysics" (according to Vision in the Marvel Universe) about a boat whose timbers are replaced one by one until nothing of the originals remains, raising the question: is it the same boat or not? There are strong reasons to say that it is, and that it isn't - but those are just options for thinking about it. It doesn't change the actual physical nature of the boat.

To get anywhere with these questions, we need to get evidence. To take a hypothetical example, if we were in a horror movie, and the fully-gutted Ship of Theseus started chasing people down to reclaim its lost timbers, we might start to suspect that it was, indeed, the same ship. Conversely, if we were in a science fiction movie, and no-one who went through a transporter ever remembered who they were, we might start to suspect that their identity was not preserved, and that a matter-energy scrambler was not a good way to transport people from point A to B no matter how much money it saved on the show's budget.

But these are hypotheticals. To really get anywhere with a real question - to get traction in the space of ideas that moves us from a set of options on to a definitive answer - you need more than an argument that convinces yourself; you need to start looking for ways to get evidence that distinguishes between the options, evidence that can be shared with other people, or replicated by them, to help them make the same move.

You can see this clearly when looking at the philosophy of general relativity, which explores staggeringly speculative concepts like thunderbolts (fractures in spacetime that spread at the speed of light) and supertasks (performing infinite tasks like computing the digits of pi in one part of spacetime and reading them off in another, dilated part of spacetime, hoping to find that elusive last digit). These questions involve scenarios we can't set up and tasks we cannot perform, and it's difficult to see how they could be resolved.

But these mental explorations help us understand what directions to take in our scientific explorations. The philosopher Mach wondered whether a rotating object in an empty universe could really be said to spin. It's a challenge to set up an entire universe just to answer a hypothetical - but Mach's exploration of the problem helped Einstein formulate his theory of general relativity, which in turn had consequences that were tested by the scientist Eddington in a famous expedition. Eddington traveled to photograph a solar eclipse, which showed that starlight around the sun was bent the way Einstein predicted - in turn, giving us a probable answer to Mach's question that, yes, the object would rotate with respect to itself.

Getting traction is an important part of not just science but our everyday lives. I always get suspicious when I go to the doctor and they purport to make a diagnosis without running tests to verify whether they're right. Once, when my arm was broken and the bone plate was slow to heal, I went to a parade of doctors who failed to resolve the problem over a two-year period. Doctors at the SOAR group ordered a CAT scan, identified a gap in the bone, and scheduled an exploratory surgery, during which they found a suture left from the original surgery that had caused a bulge in the bone and the appearance of a gap. My arm was fine, and likely had been fine for two years - but the other doctors didn't find this out because they didn't run the test.

The necessity of getting traction is why, in programming, I hate nondeterministic builds (where sometimes the build works and sometimes it doesn't) and hate debugging heisenbugs (bugs that seem to change or vanish when you try to observe them). Stochastic failures - failures which happen randomly - lead you to trying things over and over again, hoping to get different results. Doing something again and expecting different results may not be the definition of insanity, and Einstein certainly didn't say it, but it's not great, and it trains you to flail.

Once I encountered this as a real debugging issue - resolving a problem with a robotic device driver for a lidar sensor (a laser radar, used to tell how close objects were to the robot). I was frustrated and thrashing with non-repeatable bugs in my program, and eventually cracked out the manufacturer's diagnostic program to see if I had a bad sensor. But the manufacturer's diagnostic also had the same problems, on more than one lidar unit, and I realized that correctly working sensors of that make and model were actually unreliable when connected to the computer we were using!

So how did I get traction when I literally couldn't trust the data coming from the sensor?

With a spreadsheet.

For each variant of the program that I tried - the original, and various fixes - I ran the program ten times, counted the successes and failures, and entered them into my spreadsheet. It very quickly became apparent that the original program almost never worked - only about one time in ten - whereas the best of my fixes worked seventy percent of the time. Since our experimental robots frequently needed to be rebooted multiple times on startup to fix other race conditions, we had no problem shipping "seventy percent success" as an improvement over ten percent.
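The spreadsheet approach amounts to estimating a success rate from repeated trials, and it can be sketched in a few lines of code. This is a minimal illustration, not the actual driver code: `flaky_operation` and its success rates are hypothetical stand-ins for the original program and the best fix.

```python
import random

def flaky_operation(success_rate: float) -> bool:
    """Hypothetical stand-in for one run of a nondeterministic program."""
    return random.random() < success_rate

def tally(success_rate: float, trials: int) -> int:
    """Run the operation repeatedly and count the successes - the spreadsheet column."""
    return sum(flaky_operation(success_rate) for _ in range(trials))

random.seed(42)  # fix the seed so the tallies are reproducible
# Compare the "original" (roughly 10% success) against the "best fix" (roughly 70%).
original = tally(0.10, trials=100)
fixed = tally(0.70, trials=100)
print(f"original: {original}/100, fixed: {fixed}/100")
```

Even with unreliable hardware underneath, the counts separate cleanly after enough trials - which is exactly why the spreadsheet gave traction that staring at individual runs did not.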

Getting traction is a key part of science, engineering, and life. We can even apply it to philosophy, if we ask ourselves whether there are actual facts that help us choose between the options, or whether there are values that we hold that lead us to prefer one option over the other. In fact, many of the best philosophers produced their greatest work by taking definitive stands on one or more philosophical questions and then pursuing the implications rigorously. Some would even argue that modern physics is a kind of natural philosophy which took the stance of materialism to its logical conclusion - and then started producing fantastic empirical results by building on that stance.

So what problems in your life could you improve on if you found a way to push off from where you are?

-the Centaur

Pictured: We're fixing our roof, so we have to protect our floor. This floorpaper is actually to help our interior repair team move equipment without damaging our hardwoods, and does not have anything to do with traction, regardless of whether it looks like it's something used for that purpose.

[blogging a to z 2026]: e is for egalitarianism

centaur 0

Egalitarianism: all people are people, and deserve equal treatment under the law. Egalitarianism is the foundation of civilized society; without it, there are no standards to which appeal can be made, and what you have instead is not civilization, but institutionalized barbarism.

That's why, to me, egalitarianism is one of the most important principles after reason and benevolence. (You'll note I didn't say "rationality" there, because in my conceptual lexicon, logic, rationality, and reason each refer to three increasingly sophisticated ways of thinking, and for most problems, rationality just doesn't cut it. But to see why, we'll have to wait until we get to the Ls or Rs). Even if we are making good choices, with good intent, if the system does not apply those to all people equally, we are still failing them.

Most of the problems we have in society ultimately come down to failures to implement egalitarianism. Royalty? Bigotry? Misogyny? Corruption? Oligarchy? Communism? Ultimately, all of these tools of oppression come down to the basic principle that there's one special group of people - a family, a race, a gender, an in-group, power-brokers, a party - who is ideally suited to making the rules for everyone else, and once that is established, money and power quickly start getting sucked into those old vampires.

This is why another concept that I'm fond of, "authorial endorsement," is relevant to a famous science fiction story, "Harrison Bergeron" by Kurt Vonnegut. In the story, everyone in the United States is "finally equal" in the far future because the United States Constitution dictates no-one can be better than anyone else: pretty people have to wear ugly masks, strong people have to wear weights, smart people have to wear concentration-destroying devices, and so on, and so forth, ad absurdum.

People who should know better claim this story is something called "satire," and Vonnegut himself liked to pretend that his story didn't mean exactly what it seemed to mean, but for the rest of us, the story endorses the conclusion that equality under the law means equality of outcomes. Typically, that either appeals to the bigot in you who's offended by the idea that the law should treat everyone equally - and the people who I knew growing up who liked that interpretation of the story did indeed grow up to be bigots - or else you quickly realize that the story is aggressively missing the point of egalitarianism.

Equality under the law can't mean equality of outcomes. It can't. Not everyone starts in the same place; it isn't even possible to define a uniform frame of reference from which everything could be viewed in the same way. The rules of relativity are inescapable. The only way to ensure equality of outcomes under the law is to treat people differently if they start in different places. And that's not egalitarianism.

We need one law for all people. We need to treat all people as people. That means both trying not to enshrine differences and not to erase them; it means both trying not to privilege one group of people nor trying to erase others. It's fricking hard. But it's what makes our civilization a civilized place for everyone.

Some people don't like it. I know quite a few. Many of those seem offended if the world simply contains people different than they want to see, and some of them even seem outraged if the world makes reasonable accommodations for people whose needs are different. Trying to pretend different people all have the same needs, or that we can ignore people who are minorities, also is not egalitarianism: it's putting your thumb on the scale so that some "default" group gets most of the resources.

Under governments powered by tax dollars, egalitarianism involves not taking too much from anyone so that they can't live, taking more from those who can give more without constraining their freedom of action, and giving to people based on their needs, not on their membership in a privileged group. Sometimes that means giving from the wealthy to help the needy; sometimes a particular needy person can't get a break or a wealthy person gets a break that they don't need, because that's the way the law works out, and unless it's a matter within our personal discretion, we can't put our thumbs on the scale. Again, it isn't easy: we just have to keep trying and trying again until we get the system right ... or find that new exception.

But trying to treat all people like people is what makes our civilization worth living in.

-the Centaur

Pictured: The Old Veteran at Point Lobos, an ancient tree which has weathered many storms.

[blogging a to z 2026]: d is for discretion

centaur 0

"Discretion" is sometimes defined as "the freedom to decide," as in "a judge exercising their discretion" or the related sense of "speaking with care," as in "a confidant's discretion can be relied upon." These are closely related, in my mind, to "discernment", the ability to judge well, a word which has been co-opted in Christian circles to refer to examining things without immediate judgment to obtain spiritual guidance.

But when I mean discretion, I mean taking each situation case by case and applying one's best judgment without relying on pre-decided rules, as a method for dealing with the inevitable limitations placed on us by Gödel's Incompleteness Theorem - or, in plain English, exercising your judgment because rules will fail you.

A theorem is something that's always true whether we want it to be true or not. "Two plus two is four", believe it or not, is a theorem, communicating the idea that A(S(S(0)),S(S(0))) - in English, "plus two two" - is S(S(S(S(0)))) - in English, "four" - because of the definition of A(,) - in English "plus". There are times when the theorem isn't appropriate - for example, trying to "add" merging clouds - but you cannot escape it.
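The successor notation above can be made concrete. Here's a minimal sketch in Python - my own illustration, not anything from a textbook - representing numerals structurally (zero as an empty tuple, S(n) as a wrapped tuple) and defining A(,) by the usual recursion on its first argument:

```python
# Structural numerals: zero is the empty tuple, S(n) wraps n in a tuple.
ZERO = ()

def S(n):
    """Successor: the numeral one larger than n."""
    return (n,)

def A(n, m):
    """Peano-style addition: A(0, m) = m ; A(S(k), m) = S(A(k, m))."""
    if n == ZERO:
        return m
    (k,) = n            # peel off one successor
    return S(A(k, m))   # and push it onto the result

two = S(S(ZERO))
four = S(S(S(S(ZERO))))
print(A(two, two) == four)  # True: "plus two two" is "four"
```

The point of the exercise is that the equality falls out of the definitions alone - which is exactly what makes it a theorem rather than an observation.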

The fancy-sounding concept "Gödel's Incompleteness Theorem" is a theorem, and in English it means that rules will always fail you by being wrong or incomplete. Its formal statement is about the "incompleteness" of any formal system complex enough to do arithmetic, and about the impossibility of such a system proving its own consistency. The mathy version of it runs a dozen pages, but shelves upon shelves of textbooks have been written on its implications.

But in practical terms it means that no matter how complex the set of rules you create, either that system must inevitably fail to cover some case, or it must contain mistakes, or it must be so trivial as to be useless. Which means that no one - no priest nor politician nor administrator nor ordinary people trying to manage their own lives - can come up with a set of rules that will always work.

That means we must always exercise our discretion. This is a dangerous thing. Christian theologians love to argue that people love to rationalize, to come up with explanations that justify their misbehavior; but this does not prevent the rules those theologians come up with from failing.

I myself am fond of saying that in a world with imperfect information, decisions cannot be made reliably based on the information that we have in front of us, and that we have to rely on policies that extend beyond those immediate situations; but even those policies may inevitably fail.

But the possibility of failure does not absolve us from the responsibility of trying. To do the best we can in the world, we need to think back - and think ahead - and come up with the best rules that we can, so we don't get fooled by our own desires or the appearance of the situation in the moment; but in the moment, we must also apply our discretion, keeping a careful eye out for conditions that undermine the assumptions behind our clever rules and force us back to the drawing board for a new look.

This process of exercising discretion is fundamentally human. I don't mean the emotional statement "oh, this is a basic part of the human experience" - though it is that - but actually a more technical statement of how human cognition works: it's a part of how we think called universal subgoaling and chunking.

Normally when we think we're actually deploying many learned rules extremely swiftly to make progress, an experience of flow that we find effortless. But when the cognitive engines we call our "minds" reach an "impasse" where we don't know how to move forward on our goals, we generate new "subgoals" to resolve those impasses, marshalling all the knowledge we have to try to solve the problem. It's a difficult, effortful process, prone to failure; but if we do succeed, our brains store this solution as a new "chunk", a new if-then rule which we can use to think more swiftly and effectively in the future.
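The impasse-subgoal-chunk loop described above can be sketched in code. This is a toy illustration of the idea, loosely inspired by cognitive architectures like Soar but in no way the real thing - the class, the goal representation, and the "deliberation" step are all my own hypothetical choices:

```python
class ChunkingAgent:
    """Toy sketch of impasse-driven subgoaling with chunking."""

    def __init__(self):
        self.chunks = {}    # learned if-then rules: goal -> answer
        self.impasses = 0   # how often we had to stop and deliberate

    def deliberate(self, goal):
        """Effortful subgoal processing: here, slow one-step-at-a-time addition."""
        a, b = goal
        result = a
        for _ in range(b):   # counting up, one step at a time
            result += 1
        return result

    def solve(self, goal):
        if goal in self.chunks:          # fast path: a rule fires, "flow"
            return self.chunks[goal]
        self.impasses += 1               # impasse: no rule matched
        answer = self.deliberate(goal)   # spawn a subgoal and work it out
        self.chunks[goal] = answer       # chunk the solution for next time
        return answer

agent = ChunkingAgent()
agent.solve((2, 3))                 # first time: impasse, effortful
agent.solve((2, 3))                 # second time: retrieved from a chunk
print("impasses:", agent.impasses)  # 1
```

The second call is cheap precisely because the first call's hard-won answer was cached as a new rule - the programmatic analogue of thinking more swiftly after the struggle.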

[As an aside, one of the actual differences between modern "AI" and human thought - or, more properly, between modern LLMs and so-called "cognitive architectures" modeled on actual human thinking - is that the LLMs are explicitly not set up to do this. Their learning process is much more akin to acquiring a lot of crystallized rules, or to manipulating those rules in a limited workspace in something akin to subgoaling, but they generally are not set up to do chunking. In a way, we don't want them to; we don't want chunks from my chat session leaking into your session, giving you my answers. But diving into how almost every critique you've ever heard of modern "AI" is a load of dingo's kidneys would be too much of a digression.]

In a sense, we as people and systems are often not as smart as our own brains trying to solve problems: we rely more on fixed rules, societal norms, past traditions, and unjustified feelings than on our own brains, which have the advantage of being able to immediately tell whether their if-then rules are failing to give the answers we need (whether those are the right answers is another question). It takes a deliberate effort to make sure we're not running on autopilot, and all too often, we stick to the rules for no reason.

Don't do that. Look at the situation; exercise your discretion.

You, and the world, will be better off if you do.

-the Centaur

Pictured: Discretion is the better part of valor when spending a vacation with my wife in a town with a lot of good vegan food options. After several days of overeating ... I had a salad for dinner tonight at Craft Roots, because I knew my wife was going to order chocolate mousse with ice cream for dessert.

[blogging a to z 2026]: c is for conceptual library curation

centaur 0

SO I was looking at the rules of the Blogging A to Z challenge and came to interpret them to mean that all the posts should be organized around a topic. Reading the rules more closely, I don't think that's the case: "You don't have to change your format of what you normally write, just come up with topics that correspond with the letter of the day." Regardless, I know some people come up with a unifying theme, and I did so:

My conceptual library - or, more particularly, conceptual library curation.

Many great thinkers had to develop their own language to help them articulate their ideas - Immanuel Kant, Ayn Rand, and so on. I don't know that I'm a great thinker, but I frequently find myself relying on a private vocabulary of ideas that help me understand the world. Some of these I've gotten from other people - like "autistic inertia" and "bullshit" - whereas others, like the "Gaimannian Landscape" and "value collapse" are my own inventions.

Others, unfortunately, I can't share - such as the ideal C entry for today, a phenomenon we might call "prestrangulation," or strangling a project by drowning it in unnecessary prerequisites. You'll note that's not the actual word, which starts with a C - but the private word I use for prestrangulation is based on the name of someone I know who does it, and, out of respect, I'm NOT going to shame them publicly by coining a term based on their name and blogging about how bad that behavior is.

Instead, you get this post, about the importance of articulating your own conceptual library, acknowledging or tracking down where those concepts came from, and challenging those concepts periodically to make sure they still make sense.

Some of my most cherished ideas don't work. For example, one idea I picked up is that "you shouldn't critique during a brainstorming session". As it turns out, this idea, while it goes back far in brainstorming research, is at least partially bunk - totally off-the-wall ideas can derail brainstorming, so a limited amount of criticism can actually be helpful. Other ideas I've had on my own similarly didn't stand up to scrutiny.

One way that you can challenge your own ideas is to name them, to attempt to define them more precisely, and once you've done so, start seeking evidence that supports them - or contradicts them.

Contra what you may have heard from naive takes about the scientific method, a scientist should not start their investigation by trying to prove an idea wrong. First you have to have SOME evidence that an idea MIGHT be right, or you'll end up wasting your time trying to refute every idle speculation that you have.

But, conversely, you are the easiest person to fool, and once you have an idea that you think might be true, it's easy to get caught in confirmation bias, where you only look for confirming evidence and don't look for evidence that contradicts your view.

So, as part of that exercise, I hope to spend a little time this month not just blogging ideas, but subjecting them to a little bit of criticism.

-the Centaur

Pictured: birbs, at Point Lobos, who happened to make a shape like a "C".

[twenty twenty-five day sixty-four]: echoes

centaur 0

SO! I went "outside my circle" today and did something different, and was about to blog about "if you do what you always do, you'll get what you've always gotten" ... but as I started to write, I had this funny feeling that I'd written about that before, and sure enough, I'd blogged about it almost exactly a year ago.

Now, I was outside of my circle today because of Lent - it's Ash Wednesday, and I decided to drag myself out to an Ash Wednesday service at the church I got married at, Saint Peter's Episcopal (the "rapture-ready" church on Hudson Road, complete with to-go box handle on top). That put me in a different physical location than normal, but it took God sending me a firetruck parked in front of one of the restaurants I would have normally fallen back to before I tried a new place - the Lost Cajun, itself part of a chain I'd been to before, but for some reason I ordered something different than normal, and got the amazing blackened catfish dish above which was far better than the things I'd previously tried there.

And, weirdly, my previous "if you do what you always do" post was also right around the start of Lent. So I wonder if there's something about the spiritual earthquake that Lent is supposed to inspire that also had sent me climbing out of ruts and seeking new experiences a year ago - or, whether that experience left echoes of memory that prompted me to try the same thing again this year.

Who knows? It was a good dish of fish.

-the Centaur

Pictured: um, I said it already.

[twenty twenty-four day forty-two]: a new life on the off-world colonies

centaur 0

This is your periodic reminder that we may not be on the moon, but we live in a pretty awesome world, where almost every movie, book or comic book you ever wanted is either available to stream over the air or can be readily shipped to your home, genre toys that once were inaccessible are now readily available, and we can shrink a playable Galaga machine down to the size you can put it on your coffee table.

We've got it good. Don't screw it up.

-the Centaur

[twenty twenty-four day thirty-six]: accepting reality is not denying rationality

centaur 0

One of the most frustrating things about reading the philosophy of Ayn Rand is her constant evasions of reality. Rand's determinedly objective approach is a bracing blast of fresh air in philosophy, but, often, as soon as someone raises potential limits to a rational approach - or, even, in the cases where she imagines some strawman might raise a potential limit - she denies the limit and launches unjustified ad hominem attacks.

It reminds me a lot of "conservative" opponents to general relativity - which, right there, should tell you something, as an actual political conservative should have no objections to a hundred-and-twenty-year-old, well-tested physical theory - who are upset because it introduces "relativism" into philosophy. Well, no, actually, Einstein considered calling relativity "invariant theory" because the deep guts of the theory actually are a quest for formulating theories in terms that are invariant between two observers, like the space-time interval ds^2, which is the same no matter how the relative observers are moving.
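That invariance is easy to check numerically. A minimal sketch in Python, working in units where c = 1 with one spatial dimension: the interval ds^2 = -t^2 + x^2 comes out the same before and after a Lorentz boost, even though the individual coordinates change. (The specific event and boost speed here are arbitrary choices for illustration.)

```python
import math

def boost(t, x, v):
    """Lorentz boost along x with speed v (units where c = 1)."""
    gamma = 1.0 / math.sqrt(1.0 - v * v)
    return gamma * (t - v * x), gamma * (x - v * t)

def interval(t, x):
    """Spacetime interval ds^2 = -t^2 + x^2 (one spatial dimension, c = 1)."""
    return -t * t + x * x

t, x = 3.0, 5.0                 # an arbitrary event
t2, x2 = boost(t, x, v=0.6)     # the same event for a moving observer
print(interval(t, x), interval(t2, x2))  # equal up to floating-point rounding
```

The coordinates (t, x) and (t2, x2) disagree between observers - that's the "relativity" - but the interval both observers compute is identical, which is why "invariant theory" would have been the less misleading name.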

In Rand's case, she and Peikoff admit up front in several places that human reason is fallible and prone to error - but as soon as a specific issue is raised, they either deny that failure is possible or claim that critics are trying to destroy rationality. Among things they claim as infallible products of reason are notions such as existence, identity, and consciousness, deterministic causality, the infallibility of sense perception, the formation of concepts, reason (when properly conducted), and even Objectivism itself.

In reality, all of these things are fallible, and that's OK.

Our perception of what exists, what things are, and even aspects of our consciousness can be fooled, and that's OK, because a rational agent can construct scientific procedures and instruments to untangle the difference between our perception of our phenomenal experience and the nature of reality. Deterministic causality breaks down in our stochastic world, but we can build more solid probabilistic and quantum methods that enable us to make highly reliable predictions even in the face of a noisy world. Our senses can fail, but there is a rich library of error correcting methods, both in natural systems and in robotics, that help us recover reliable information that is useful enough to act upon with confidence.

As for the Objectivist theory of concepts, it isn't a terrible normative theory of how we might want concepts to work in an ideal world, but it is a terrible theory of how concept formation actually works in the real world, either in the human animal or in how you'd build an engineering system to recognize concepts - Rand's notion of "non-contradictory identification" would in reality fail to give any coherent output in a world of noisy input sensors, and systems built on ideas like Rand's were supplanted by techniques such as support vector machines long before we got neural networks.

And according to Gödel's theorem and related results, reasoning itself must either be incomplete or inconsistent - and evidence of human inconsistency abounds in the cognitive science literature. But errors in reasoning itself can be handled by Pollock's notion of "defeasible" reasoning or Minsky's notion of "commonsense" reasoning, and as for Objectivism itself being something that Rand got infallibly right ... well, we just showed how well that worked out.

Accepting the limits of rationality that we have discovered in reality is not an attack on rationality itself, for we have found ways to work around those limits to produce methods for reaching reliable conclusions. And that's what's so frustrating reading Rand and Peikoff - their attacks on strawmen weaken their arguments, rather than strengthening them, by both denying reality and denying themselves access to the tools we have developed over the centuries to help us cope with reality.

-the Centaur

[twenty twenty-four day thirty-three]: roll the bones

centaur 0

As both Ayn Rand and Noam Chomsky have said in slightly different ways, concepts and language are primarily tools of thought, not communication. But cognitive science has demonstrated that our access to the contents of our thought is actually relatively poor - we often have an image of what is in our head which is markedly different from the reality, as in the case where we're convinced we remember a friend's phone number but actually have it wrong, or have forgotten it completely.

One of the great things about writing is that it forces you to turn these abstract ideas about our ideas into concrete realizations - that is, you may think you know what you think, but even if you think about it a lot, you don't really know the difference between your internal mental judgments about your thoughts and their actual reality. The perfect example is a mathematical proof: you may think you've proved a theorem, but until you write it down and check your work, there's no guarantee that you actually HAVE a proof.

So my recent article on problems with Ayn Rand's philosophy is a good example. I stand by it completely, but I think that many of my points could be refined considerably. I view Ayn Rand's work with regards to philosophy the way that I do Euclid for mathematics or Newton for physics: it's not an accurate model of the world, but it is a stage in our understanding of the world which we need to go through, and which remains profitable even once we go on to more advanced models like non-Euclidean geometry or general relativity. Entire books are written on Newtonian approximations to relativity, and one useful mathematical tool is a "Lie algebra", which enables us to examine even esoteric mathematical objects by looking locally at the Euclidean tangent space generated around a particular point.

So it's important not to throw the baby out with the bathwater with regards to Ayn Rand, and to be carefully specific about where her ideas work and where they fail. For example, there are many, many problems with her approach to the law of identity - the conceptual idea that things are what they are, or A is A - but the basic idea is sound. One might say it almost approaches the tautological, except for the fact that many people seem to ignore it. You cannot fake reality in any way whatever - but you also cannot make physical extrapolations about reality through philosophical analysis of a conceptual entity like identity.

Narrowing in on a super specific example, Rand tries to derive the law of causality from the law of identity - and it works well, right up until the point where she tries to draw conclusions about it. Her argument goes like this: every existent has a unique nature due to the law of identity: A is A, or things are what they are, or a given existent has a specific nature. What happens to an existent over time - the action of that entity - is THE action of THAT entity, and is therefore determined by the nature of that entity. So far, so good.

But then Rand and Peikoff go off the rails: "In any given set of circumstances, therefore, there is only one action possible to an entity, the action expressive of its identity." It is difficult to grasp the level of evasion which might produce such a confusion of ideas: to make such a statement, one must throw out not just the tools of physics, mathematics and philosophy, but also personal experience with objects as simple as dice.

First, the evasion of personal experience, and how it plays out through mathematics and physics. Our world is filled with entities which may produce one action out of many - not just dice, but even one of Rand and Peikoff's own examples: a rattle, which makes a different sound every time you shake it. We have developed an entire mathematical formalism to help understand the behavior of such entities: we call them stochastic and treat them with the tools of probability. As our understanding has grown, physicists have found that this stochastic nature is fundamental to reality: the rules of quantum mechanics essentially say that EVERY action of an entity is drawn from a probability distribution, though for most macroscopic actions this probabilistic nature gets washed out.
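This stochastic formalism is easy to make concrete. Here's a minimal, hypothetical Python sketch (not anything from Rand or Peikoff): a fair die has one fixed nature - a uniform distribution over six outcomes - yet each roll yields one action out of many.

```python
import random
from collections import Counter

# One "entity" with a single, fixed nature: a fair six-sided die.
# Its nature determines a distribution, not a single action.
rng = random.Random(42)                      # seeded for reproducibility
rolls = [rng.randint(1, 6) for _ in range(60_000)]
freqs = Counter(rolls)

# Each roll is one action out of six possibilities, yet the long-run
# frequencies are lawlike: every face comes up about 1/6 of the time.
for face in range(1, 7):
    assert abs(freqs[face] / len(rolls) - 1 / 6) < 0.01
```

The point of the sketch: the individual action is not determined, but the distribution of actions is, completely, by the die's nature.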

Next, the evasion of validated philosophical methods. Now, one might imagine Rand and Peikoff saying, "well, the roll of the die is only apparently stochastic: in actuality, the die, when you throw it, is in a given state, which determines the single action that it will take." But this is a projective hypothesis about reality: it takes a set of concepts, determines their implications, and then states how we expect those implications to play out in reality. Reality, however, is not required to oblige us. This form of philosophical thinking goes back to the Greeks: the notion that if you begin with true premises and proceed through true inference rules, you will end up with a true conclusion. But this kind of thinking is invalid - it does not work in reality - because any one of its elements - your concepts, your inference rules, or your mapping between conclusions and states of the world - may be specious: appearing to be true without actually reflecting the nuance of reality. To fix this problem, the major achievement of the scientific method was to replace "if you reach a contradiction, check your premises" with "if you reach a conclusion, check your work" - or, in the words of Richard Feynman, "The sole test of any idea is experiment."

Let's get really concrete about this. Rand and Peikoff argue "If, under the same circumstances, several actions were possible - e.g., a balloon could rise or fall (or start to emit music like a radio, or turn into a pumpkin), everything else remaining the same - such incompatible outcomes would have to derive from incompatible (contradictory) aspects of the entity's nature." This statement is wrong on at least two levels, physical and philosophical - and much of the load-bearing work is in the suspicious final dash.

First, physical: we do indeed live in a world where several actions are possible for an entity - this is one of the basic premises of quantum mechanics, which is one of the most well-tested scientific theories in history. For each entity in a given state, a set of actions is possible, governed by a probability amplitude over those actions: when the entity interacts with another entity in a destructive way, the probability amplitude collapses into a probability distribution over the actions, one of which is "observed". In Rand's example, the balloon's probability amplitude for rising is high, for falling is small, for emitting radio sounds is smaller still, and for turning into a pumpkin is near zero (due to the vast violation of conservation of mass).
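The amplitude-to-probability step is the Born rule, and it can be sketched in a few lines of Python. The amplitudes below are invented for illustration (real balloon physics involves vastly more states), but the mechanism is the standard one: the probability of observing an action is the squared magnitude of its amplitude, normalized over all the actions.

```python
# Toy "probability amplitudes" for the balloon's possible actions.
# These numbers are made up for illustration; only the rule is real.
amplitudes = {
    "rise":           0.97 + 0.0j,
    "fall":           0.24 + 0.0j,
    "emit radio":     0.01 + 0.0j,
    "become pumpkin": 1e-9 + 0.0j,
}

# Born rule: P(action) = |amplitude|^2, normalized to sum to one.
norm = sum(abs(a) ** 2 for a in amplitudes.values())
probs = {k: abs(a) ** 2 / norm for k, a in amplitudes.items()}

assert abs(sum(probs.values()) - 1.0) < 1e-12
assert probs["rise"] > probs["fall"] > probs["emit radio"] > probs["become pumpkin"]
```

Note that nothing here is "uncaused": the distribution is fixed entirely by the amplitudes, i.e., by the state of the entity.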

If one accepts this basic physical fact about our world - that entities that are not observed exist in a superposition of states governed by probability amplitudes, and that observation involves probabilistically selecting a next state from the resulting distribution - one can create amazing technological instruments and make extraordinary scientific predictions - lasers and integrated circuits and quantum tunneling, and predictions of physical quantities to a precision of twelve orders of magnitude - a little bit like measuring the distance between New York and Los Angeles with an error of less than a thousandth of an inch.
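As a sanity check on that analogy - using a rough, assumed figure of 2,450 miles for the New York to Los Angeles distance - one part in 10^12 of that distance really does come out to well under a thousandth of an inch:

```python
# Rough check of the precision analogy. The 2,450-mile figure is an
# approximation chosen for illustration, not an exact geographic datum.
miles = 2450
inches = miles * 5280 * 12          # miles -> feet -> inches
error = inches * 1e-12              # one part in a trillion

assert inches > 1.5e8               # about 155 million inches
assert error < 1e-3                 # well under a thousandth of an inch
```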

But Rand's statement is also philosophically wrong, and it gets clearer if we take out that distracting example: "If, under the same circumstances, several actions were possible, such incompatible outcomes would have to derive from incompatible aspects of the entity's nature." What's wrong with this? There's no warrant to this argument. A warrant is the thing that connects the links in a reasoning chain - an inference rule in a formal system, or a more detailed explanation of the reasoning step in question.

But there is no warrant possible in this case, only a lurking false premise. The erroneous statement is that "such incompatible outcomes would have to derive from incompatible aspects of the entity's nature." Why? Why can't an entity's nature be to emit one of a set of possible actions, as with a tossed coin or a die? Answer: Blank out. There is no good answer to this question, because there are ready counterexamples from human experience, which we have formalized through mathematics and ultimately confirmed through the tools of science: yes, it is the nature of every entity to produce one of a set of possible outcomes, based on a probability distribution which is itself completely lawlike and based entirely on the entity's nature.

You cannot fake reality in any way whatever: this IS the nature of entities, to produce one of a set of actions. This is not a statement that they are "contradictory" in any way: this is how they behave. This is not a statement that they are "uncaused" in any way: the probability amplitude must be non-zero in a region for an action to be observed there, and it is a real physical entity with energy content, not merely a mathematical convenience, that leads to the observation. And it is very likely not sweeping some hidden mechanism under the rug: while the jury is still out on whether quantum mechanics is a final view of reality, we do know from Bell's theorem that there are no local "hidden variables" behind the curtain (a theorem that had already been experimentally validated as of the time of Peikoff's book).

So reality is stochastic. What's wrong with that? Imagine a corrected version of Ayn Rand's earlier statement: "In any given set of circumstances, therefore, there is only one type of behavior possible to an entity, the behavior expressive of its identity. This behavior may result in one of several outcomes, as in the rolling of a die, but the probability distribution over that set of outcomes is the distribution that is caused and necessitated by the entity's nature." Why didn't Peikoff and Rand write something like that?
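That corrected formulation is easy to state operationally. In this hypothetical Python sketch, two independent runs of die rolls produce different individual outcomes, but converge on the same long-run distribution - the part that is fixed, lawfully, by the entity's nature.

```python
import random
from collections import Counter

def distribution(seed, n=100_000):
    """Roll a fair die n times; return the empirical frequency of each face."""
    rng = random.Random(seed)
    counts = Counter(rng.randint(1, 6) for _ in range(n))
    return {face: counts[face] / n for face in range(1, 7)}

a = distribution(seed=1)
b = distribution(seed=2)

# Individual outcomes differ run to run, but the law - the distribution -
# is the same: both runs converge on the same frequencies.
for face in range(1, 7):
    assert abs(a[face] - b[face]) < 0.02
```

Same nature, same circumstances, same distribution - even though no single outcome is repeatable.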

We have a hint in the next few paragraphs: "Cause and effect, therefore, is a universal law of reality. Every action has a cause (the cause is the nature of the entity that acts); and the same cause leads to the same effect (the same entity, under the same circumstances, will perform the same action). The above is not to be taken as a proof of the law of cause and effect. I have merely made explicit what is known implicitly in the perceptual grasp of reality." That sounds great ... but let's run the chain backwards, shall we?

"We know implicitly in the perceptual grasp of reality a law which we might explicitly call cause and effect. We cannot prove this law, but we can state that the same entity in the same circumstances will perform the same action - that is, the same cause leads to the same effect. Causes are the nature of the entities that act, and every action has a cause. Therefore, cause and effect is a universal law of reality."

I hope you can see what's wrong with this, but if you don't, I'm agonna tell you, because I don't believe in the Socratic method as a teaching tool. First and foremost, our perceptual grasp of reality is very shaky: massive amounts of research in cognitive science reveal a nearly endless list of biases and errors, and the history of physics has been one of replacing erroneous perceptions with better laws of reality. One CANNOT go directly from the implicit knowledge of perceptual reality to any actual laws, much less universal ones: we need experiment and the tools of physics and cognitive science to do that.

But even from a Randian perspective this is wrong, because it is an argument from the primacy of consciousness. One of the fundamental principles of Objectivist philosophy is the primacy of existence over consciousness: the notion that thinking a thing does not make it so. Now, this principle is worth a takedown of its own - it attempts to draw an empirically verifiable physical conclusion from a conceptual philosophical argument, which is invalid - but I think Rand is basically right that existence is primary over consciousness. Yet above, Rand and Peikoff purport to derive a universal law from perceptual intuition. They may try to call it "implicit knowledge," but perception literally doesn't work that way.

If they admit physics into their understanding of the law of causality, they have to admit that you cannot go directly from a conceptual analysis of the axioms to universally valid laws, but must subject all their so-called philosophical arguments to empirical validation. And that is precisely what you have to do if you are working in ontology or epistemology: you MUST learn the relevant physics and cognitive science before you attempt to philosophize, or you end up pretending to invent universal laws that are directly contradicted by human experience.

Put another way, whether you're building a bridge or a philosophy, you can't fake reality in any way whatsoever, or, sooner or later, the whole thing will come falling down.

-the Centaur

How to Be a Better Writer (the Short Version)



Recently a colleague asked me if I had any advice on being a better writer. I thought I’d posted about that, but it appears that I hadn’t, so I tried writing up my thoughts. That was too much, so I summarized. That was too much, so I summarized it AGAIN. And then it was short enough to share with you:

The super short version is: to be a better writer, just write!

I often recommend morning pages - writing three pages about random topics at the start of your day, even "bla bla bla" if you have to - you'll get tired of writing "bla bla bla" quickly, and this will help cure you of the feeling you need to wait for your muse.

This advice comes from the book The Artist's Way, which is a great course to take; I also recommend Strunk and White's The Elements of Style and Brooks Landon's Building Great Sentences on grammar and style, Ayn Rand's The Art of Fiction and The Art of Nonfiction on writing and structure, and The Elements of Editing and Self-Editing for Fiction Writers on editing.

I also recommend that you read a lot more than you write, especially writing of the kind you want to emulate; take a look at it and see what makes it tick.

For fiction and other similar writing I recommend finding a writing group first, not a critique group; there are several good ones in the Bay Area including Write to the End and Shut Up and Write.

For the kind of internal communications you're talking about, you might try looking at marketing and documentation literature, or at the writing of the internal authors you admire - and also at popular writers, technical and nontechnical, in the computer field.

As for blogging, my recommendation is to just blog - try to do it regularly, at least once a week or so, about whatever comes to your mind, so that you create both a growing store of content - and again, a habit that helps you just write.



I’ll try to expand on these recommendations, but if I had to boil it down even further, I’d say: just write!

-the Centaur

Write Your Own Damn Sentences



Recently I've been reading a lot on sentence construction - in particular the "little books" Mark Doty's The Art of Description: Word into World, Stanley Fish's How to Write a Sentence (and How to Read One), and Bruce Ross-Larson's Stunning Sentences, not to mention essays scattered across half a dozen books. I've enjoyed all this writing on writing, and I think all of it has been useful to me, but, as usual, there's one bit of advice I find myself encountering, find myself willing to take, yet find myself reacting against:

Find examples of great sentences to emulate.

On the one hand, I agree with this: finding great examples of sentences, then deconstructing them, imitating them and attempting to progress past them is a great exercise for writers, one I intend to follow up on (in my copious free time). On the other, focusing on exemplars of great sentences in the past, like it or not, encourages a mindset of focusing on the greatness of writers of the past, idolizing them, and then following in their footsteps.

I'm extremely allergic to the "idolizing the greats" syndrome. There have been greats in history, no doubt: great writers and thinkers, leaders and followers, heroes and villains. And there are people you will encounter that will impact you like no other: prophets whose principles will change your life, philosophers whose thought will change your mind, and authors whose writing will strike you like a physical blow. But they won't affect everyone the same way, and they won't solve your problems for you.

There are no secrets. It's all up to you.

Having said that, let me undermine it by recommending the following book of secrets: First Thought, Best Thought by Allen Ginsberg, Anne Waldman, William S. Burroughs and Diane di Prima - an audiobook by four authors of the Beat Generation, talking about their experimental methods of poetry. I recommend the Beats because, like the Beats, I feel the need to counteract "conservative, formalistic literary ideals," but unlike the Beats, I don't reject those ideals: I just want more tools in my toolbox.

The Beats don't recommend emulating the past; they recommend finding ways of producing text that violate the norms. Ginsberg used breaths and rhythms. Burroughs cut words and sentences up and pasted them together until he had a whole page of, potentially, gibberish, which he then would mine for gems - perhaps finding a paragraph or even just a sentence out of an entire page of cut-up. Each author had their own method of breaking out of the mold. And a mold breaker … is a tool you can use.

So don't just find sentences to emulate. Write your own damn sentences. Cut up words on a page until they're confetti and rearrange them until they make sense. Build a program that writes random sentences. Throw down Rory's Story Cubes. Try magnetic poetry. Learn rap. Take improv. Stay up all night until you're loopy with sleep deprivation. No matter what crazy ideas you have, write them all down, then winnow through them all and pick the best ones - the ones that hit you like a physical blow.

THEN go back to the tools for sentence analysis from all those little books, and use them to make more of your own.

Seriously, what do you have to lose? Try the exercise. If you don't like what you produce, you may learn that your inspiration lies in understanding the past and building on it to create something new. If you do like it … you may add something to the world which, while its parts may come from the past, is in its whole ... wholly new.

-the Centaur

Pictured: a truly bizarre photographic composition that occurred by chance, and which I could not have planned if I tried.

Treat Problems as Opportunities



Recently I had a setback. Doesn't matter what on; setbacks happen. Sometimes they're on things outside your control: if a meteor smacks the Earth and the tidal wave is on its way to you, well, you're out of luck buddy.

But sometimes it only seems like a tidal wave about to wipe out all life. Suppose your party has lost the election. Your vote didn't stop it. You feel powerless - but you're not. You can vote. You can argue. You can volunteer. Even run for office yourself.

Even then, it might be a thirty year project to get yourself or people you like elected President - but most problems aren't trying to change the leader of the free world. The reality is, most of the things that do happen to us are things we can partially control.

So the setback happens. I got upset, thinking about this misfortune. I try to look closely at situations and to honestly blame myself for everything that went wrong. By honestly blame, I mean to look for my mistakes, but not exaggerate their impact.

In this case, at first, I thought I saw many things I did wrong, but the more I looked, the more I realized that most of the things I did were right, only a few were wrong, and those few didn't account for all the bad things that had happened beyond my control.

Then I realized: what if I treated those bad things as actual problems?

A disaster is something bad that happens. A problem is a situation that can be fixed. A situation that has a solution. At work, and in writing, I'm constantly trying to come up with solutions to problems, solutions which sometimes must be very creative.

"Treat setbacks as problems," I thought. "Don't complain about them (ok, maybe do) but think about how you can fix them." Of course, sometimes the specific problems are unfixable: the code failed in production, the story was badly reviewed. Too late.

That's when the second idea comes in: what if you treated problems as opportunities to better your skills?

An opportunity is a situation you can build on. At work, and in writing, I try to develop better and better skills to solve problems, be it in prose, code, organization, or self-management. And once you know a problem can happen, you can build skills to fix it.

So I came up with a few mantras: "Take Problems as Opportunities" and "Accept Setbacks as Problems" were a couple of them that I wrote down (and don't have the others on me). But I was so inspired I put together a little inspirational poster.

I don't yet know how to turn this setback into a triumph. But I do know what kinds of problems caused it, and those are all opportunities for me to learn new skills to try to keep this setback from happening again. Time to get to it.

-Anthony

Pictured: me on a ridge of rock, under my very own motivational poster.

P.S. Now that I've posted this, I see I'm not the first to come up with this phrase. Great minds think alike!

An open letter to people who do presentations



I’ve seen many presentations that work: presentations with a few slides, with many slides, with no slides. Presentations with text-heavy slides, with image-heavy slides, with a few bullet points, even hand scrawled. Presentations done almost entirely by a sequence of demos; presentations given off the cuff sans microphone.

But there are a lot of things that don’t work in presentations, and I think it comes down to one root problem: presenters don’t realize they are not their audience. You should know, as a presenter, that you aren’t your audience: you’re presenting, they’re listening, you know what you’re going to say, they don’t.

But recently, I’ve had evidence otherwise. Presenters that seem to think you know what they’re thinking. Presenters that seem to think you have access to their slides. Presenters that seem to think you are in on every private joke that they tell. Presenters that seem to think not only that you are standing on the podium with them, but that you are like them in every way – and like them as well.

Look, let’s be honest. Everyone is unique, and as a presenter, you’re more unique than everyone else. [u*nique |yo͞oˈnēk| adj, def (2): distinctive, remarkable, special, or unusual: a person unique enough to give him a microphone for forty-five minutes]. So your audience is not like you — or they wouldn’t have given you a podium. The room before that podium is filled with people all different from you.

How are they different?

  • First off, they don’t have your slides. Fine, you can show them to them. But they haven’t read your slides. They don’t know what’s on your slides. They can’t read them as fast as you can flip through them. Heck, you can’t read them as fast as you can flip through them. You have to give the audience time to read your slides.

  • Second, they don’t know what you know. They can’t read slides which are elliptical and don’t get to the point. They can’t read details printed only in your slide notes. They can’t read details only on your web site. The only thing they get is what you say and show. If you don’t say it or show it, the audience won’t know it.
  • Third, they probably don’t know you. But that’s not an excuse to pour your heart and soul into your presentation. It’s especially not a reason to pour your heart and soul into your bio slide. Your audience does not want to get to know you. They want to know what you know. That’s a reason to pour into your presentation what they came to hear.
  • Fourth, your audience may not even like you. That’s not your fault: they probably don’t know you. But that’s not an excuse to sacrifice content for long, drawn out, extended jokes. Your audience isn’t there to be entertained by you. We call that standup. Humor is an important part of presentations, but only as a balanced part. We don’t call a pile of sugar a meal; we call it an invitation to hyperglycemic shock.
  • Fifth, your audience came to see other people than you. You showed up to give your presentation; they came to see a sequence of them. So, after following a too-fast presentation where the previous too-fast presenter popped up a link to his slide notes, please, for the love of G*d, don’t hop up on stage and immediately slap up your detailed bio slide before we’ve had time to write down the tiny URL.

Look, I don’t want to throw a lot of rules at you. I know some people say “no more than 3 bullets per slide, no more than 1 slide per 2 minutes” but I’ve seen Scott McCloud give a talk with maybe triple that density, and his daughter Sky McCloud is even faster and better. There are no rules. Just use common sense.

  • Don’t jam a 45 minute talk into 25 minutes. Cut something out.
  • Don’t have a 10 minute funny video at a technical conference. Cut it in half.
  • Don’t leap up on stage to show your bio slide before the previous presenter is done talking. Wait for people to write down the slides.
  • Don’t “let the audience drive the talk with questions.” They came to hear your efforts to distill your wisdom, not to hear your off-the-cuff answers to irrelevant questions from the audience.
  • Don’t end without leaving time for questions. Who knows, you may have made a mistake.

Ok. That’s off my chest.

Now to dive back into the fray…

-the Centaur

Pictured: A slide from ... axually a pretty good talk at GDC, not one of the ones that prompted the letter above.

Plotting from the bottom up

Recently I was asked about how I plot books:
I was wondering if you could help me out a bit. I've always wanted to create my own comicbook from my own design and mind but I always, I mean ALWAYS have problems coming up with and sticking with a good plot. I can make the basis of the story, the characters, the world and different terms and creatures but I can never stick with a plot or make a good one that I know will drive the story. Could you give me any advice on these things or some pointers on how to make a really great story I could draw out? I'm so close to it blossoming I can taste it!
Great question! I'm not sure I'm the best person in the world to answer it - my first pointer to anyone on plot would be Ayn Rand's The Art of Fiction - yes, I know, it's Ayn Rand, but if you're one of those idiots who can't see past your unjustified distaste for her political philosophy, well, then you deserve to miss out on her opinions in other areas which might prove of more value to you despite your disagreements - but I do think about plot quite a bit, so I'll give it a go.

A lot of what I do is simply write cool scenes I enjoy ... and then think hard about who's the protagonist and what's their major conflict. Once you know, for example, that the protagonist is a magic tattoo artist, that suggests she's going to be in conflict over some tattoo-related thing - like someone skinning people who have tattoos. Once you know the conflict, then you can design the climax - well, your tattoo artist will eventually have to meet the evil skinning person, who will want to make her a victim. That basic strategy - write stuff that's fun, figure out who the protagonist really is, find what conflict they're embroiled in, design the final conflict, then work backwards from there - has worked very well for me.

Why take this approach, rather than, say, starting with some theme and working back from there? Start with an abstract goal? Yuk! That might work for nonfiction, but in fiction it's a recipe for heartless exercises in craft - and craft can't sell a story. The instant someone notices you're telling a story on skill alone, you're done. There are prominent authors I can't read anymore because I realized they had some point they were driving to and were using all of their craft to get me there ... even though there was no reason to go there in the first place. That might work in a movie with a lot of explosions, but it's not going to sustain a 300 page book. So. I need concrete events, realized situations with full-bodied characters where interesting things are happening.
In short, I need to be entertained - in my writing most of all. That's why I start with "cool scenes" - I write to entertain myself first, so I have to write what I enjoy writing. But I want others to enjoy it too - someone once said the hallmark of a great writer is that they take what they find interesting and make it interesting to other people. To do that, to make my stories interesting to people not invested in my characters, I need to create a strong conflict that will engage them. And to do so, I listen to the story.

Whether the story features a tattoo artist accosted by a werewolf deep in the Lovecraftian underbelly of Atlanta - or that same tattoo artist and her adopted weretiger daughter out school shopping in the sun - those first key scenes of the story, those first inspirations, will tell you what belongs in the story. If the story begins with Dakota school shopping with Cinnamon, then some part of the story must hinge on Cinnamon and Dakota in a school - or that scene's got to go. If the story features a magic tattoo artist investigating magic graffiti, then some part of the story must hinge on our tattoo artist confronting the graffiti artist. And for the story to really be interesting, something important must be at stake - generally, life has to be on the line in the kind of melodramatic action adventures I write, but it can be more subtle if you're writing something more subtle.

One famous way of looking at this idea is Chekhov's Gun - "If in the first act you have hung a pistol on the wall, then in the following one it should be fired. Otherwise don't put it there." Ayn Rand's take on this is similar: you should decide on your theme (what your story is about), then your plot-theme (what type of events realize your theme), then your conflict (what is being fought over in the plot), then the plot itself (the actual sequence of events), which will then dictate the characters, scenes and settings in your story.

I believe in the same causal structure, but prefer the opposite order. I let my subconscious play out scenes I find entertaining, and then let the characters and the situations tell me who they are, what conflicts they encounter, and what themes I should explore. You have to find your own way of doing things, of course; every writer is unique, and it's your unique story and vision that matter. Whatever you have to do - outline or no outline, start from the beginning or write backwards from the end - just do it. Just write, and eventually it will all sort itself out.

-the Centaur

What Is Consciousness?

The ever wonderful chaps at Information is Beautiful have put up a beautiful animated infographic of many of the major theories of consciousness. Click on the graphic to the right to see them all.

I'm essentially a functionalist, but I try to keep an open mind. OK, I can state it more forcefully than that: I believe, and believe I can point to evidence for the claim, that consciousness performs many important functions, and I want to know what they all are, how they work together, and how they relate to the other functions of the brain. If we do build up a solid picture of that, however, it won't surprise me too much if we find interesting phenomena left over that require us to rethink everything we've done up to that point.

-the Centaur

UPDATE: Ooo, there's even more to the graphic than I thought ... you can click on the brains and get it to produce a composite graphic of what "your" theory of consciousness is.

Some Days You Just Wanna Curl Up In A Ball

This isn't a Woe Is Me post about all the crap that's been happening to me recently. That's so last week, literally. This is about depression.

I have sporadic bouts of depression, probably just like most other people, nothing serious enough to call clinical. What really strikes me about it is how disconnected mood is from reality. In a large number of ways, things are Much Better Now than they were Just A While Ago. I've delivered my work to my old team (closure), I've moved to a new team doing something fun (robotics), I'm healing up from my illness (wellness), my wife's returned from her trip (companionship), and I have a book coming out (success).

But nothing is perfect, and there are little setbacks that happen all the time. Sporadic depression, I find, isn't brought on by nothing, the way clinical depression extends over long periods for no good reason; it gets triggered by one of those little setbacks. When I was down with tonsillitis right before several major deadlines, things like a smashed toe made me upset and angry, and things like work challenges made me frustrated and worn out. Now that things have evened out, you'd think I'd have more resilience. Instead, I found myself having a Surprisingly Shitty Day. Even though I felt better, was making progress on all my work tasks, had at least partially resolved my setbacks, and had even made progress on writing and drawing, the depression never let up.

Now, I had a setback, as I said, and there are things that would make this situation better. But what interests me is that some of the feelings I felt today - "I wish I was doing something else" and "I'm so tired" and "I can't take it anymore" - I thought were attributable to my previous less-than-ideal situation: working on what I didn't want to work on, under deadline pressure, while sick. I know that's not the case now. I'm working on what I do want to work on. The next deadlines are weeks away and I have no competing pressures. And I'm feeling physically better. Even the setback passed out of my mind. So why am I feeling the same way?

I suspect because those feelings are a habit of mind: a response to a challenging situation I've picked up that has become free-floating. There are challenges inherent in everything you do, no matter how fun it is - and bad habits of mind don't care how closely aligned your current work is with your goals, your desires, your attitudes. Your bad attitudes and thoughts are just sitting there, waiting to spring, starting the tape-loop spiral into depression.

So what am I gonna do about it? Recognize it, blog it, and move on. I've had many, many cycles of mild mania / depression in my life, and I didn't start to get better until I recognized it, stopped wallowing in it, and moved on. My formerly quick temper had the same solution: notice it's happening, turn the alarm off, and deal with the situation, sometimes cathartically, usually not. That worked so well my wife hasn't ever seen me really lose my temper in eight years of our relationship.

If the solution to dealing with anger is not to get angry, is the solution to dealing with depression just not to let yourself get down? To pull out of the situation, relax, do something fun, and tackle it again with your energies renewed? Let's see. Time to kick back, throw on some Who, and chill.

-the Centaur

Take Care Of Yourself Before It’s Too Late

centaur 0
Gabby naps, with the sabretooth skull in the background.

I can't even begin to tell you all that I've gone through recently: sleep deprivation, tonsillitis, tinnitus, internal injuries, a trip to the emergency room (unrelated), and near disasters at work. I've started another blog entry to explain what's been going on, but even that had to be put on hold by other disasters.

The quick point I want to pass on is that I work hard sometimes. I used to describe it as working two jobs: by day, my work at the Search Engine That Starts With A G; by night, my writing as the author of the Dakota Frost series. Both could take 40 hours a week or more, meaning almost every nonworking minute normally ends up on writing.

Recently, that's become like four jobs: my old project at the Search Engine, a brand new project at the Search Engine, both with hard and conflicting deadlines, a scientific paper for my new project, also with a hard deadline, and my fiction writing, also with deadlines. Each one could be a full time job. Aaa.

Recently, this came to a head: I'd finished my scientific paper, had a breather on the writing, yet still knew I was going to have to work hard, nights and weekends, just on my two work projects. So I decided one night I needed to take a break, to chill out, to go to bed early and catch up on sleep. To recharge my batteries.

Too late.

That night, when I got home, planning to crash out early, one of my cats urinated all over our curtains, then tracked it through our house, necessitating a 3:45AM cleaning job (cats will urinate after each other unless it is completely cleaned up), just before a Monday at work. The next night I was kept up by a sore throat, was worn out Tuesday, and was diagnosed with tonsillitis on Wednesday. The throat pain caused sleep deprivation, the coughing fits caused hemorrhoids (yuk!), the nasal congestion caused tinnitus and hearing loss in one ear, and all of this indirectly caused my trip to the emergency room (more on that later).

This went on for days, then for over a week. And all of it came just before a huge presentation at work, one we realized we needed to cancel far too late to actually cancel - so I had to keep working, even though I could barely function. I couldn't really code in my exhaustion, and when I did readings for my other project - and I did work on my other project, because its deadlines wouldn't stop either - the textbooks actually blurred when I sat down to read them.

It was almost two weeks later, a day after the presentation, when I finally crashed, for essentially 36 hours straight.

So my point, and I do have one, is that you should take care of yourself. Now. While you're still feeling good about yourself. Because if you wait to take care of yourself until you're all worn out ... it may be too late.

-the Centaur

Tricking Yourself Into Doing The Right Thing

centaur 0
Ribeye Steak, Tabbouleh, and Cognitive Neuroscience

Sometimes it's hard to do the right thing. For example, I enjoy eating dinner out. There's nothing wrong with that, but it's always easier to eat out than it is to fix dinner: I can have high-quality, healthy food made for me while I read or write or draw, whereas cooking at home involves shopping, cooking, and cleaning - chores I'm fortunate enough to be able to pay other people to do (and that through the absurd good luck that the rather esoteric work I was most interested in doing in grad school turned out to be relatively lucrative in real life).

But that's not fair to my wife or my cats, nor does it help me catch up on my pile of DVDs, my library cleaning, or any of a thousand other projects that can't be done out at dinner. Sometimes I deliberately go out to dinner because I need to read or write or draw rather than do laundry, but I shouldn't do that all the time - even though I can. And if I keep making local decisions each time I go out to eat, I'll keep doing the same thing - going out to eat - until the laundry or bills or book piles reach epic proportions.

This may not be a problem for people who are "deciders", but I'm definitely a "get-stuck-in-a-rutter". So how can I overcome this, living with the inertia of my own decision-making system? One way is to find some other reason to come home - for example, cooking dinner with my wife. That's normally not convenient, as she eats early, while I'd normally be at work - and even if I did try to get home, her dinnertime traffic puts me an hour and a half from the house - though we've set a time to do it now and then. Right now, though, she's out of town for business in New York, so I don't have her to help me.

So the way I've been experimenting with recently is treating myself. Over the weekend I made a large bowl of tabbouleh, one of my favorite foods, and pound cake, one of my favorite desserts. The next evening I grabbed a small plate of sushi from Whole Foods and made another dent in the tabbouleh. I had a commitment the next night, but the following night I stopped to get gas and found that a Whole Foods had opened near my house, and on the spur of the moment I decided to go in, get a ribeye steak, and cook myself another dinner, eating even more of the tabbouleh.

The tabbouleh itself is healthy, and maybe the sushi is too; the steak, not so much. Normally I wouldn't get another steak, as I'd had a few recently, both homecooked and out at restaurants; but I wanted to overcome my decision-making inertia. It would have been so easy to note the presence of the Whole Foods for later and go eat out; instead, I said explicitly to myself: you can have a steak if you eat in. And so I walked into Whole Foods, walked out a couple of minutes later with a very nice steak, went home, quickly cooked a very nice dinner, and got some work done.

Normally I prefer to eat about one steak a month (or less), sticking to mostly fish as my protein source, but I'll let my red meat quota creep up a bit if it helps me establish the habit of cooking more meals at home. Once that habit's more established, I can work on making it healthier again. Already I know ways to do it: switch to buffalo, for example, which I prefer over beef steak anyway (and I'm not just saying that as a health food nut; after you've eaten buffalo long enough to appreciate the flavor you don't want to go back).

So far, tricking myself into doing the right thing has been a success. Now let's see if we can go a step further and just do the right thing on our own.

-the Centaur

Pictured: a ribeye steak, fresh fruit and mint garnish, tabbouleh in a bed of red leaf lettuce, and Gazzaniga et al.'s textbook on Cognitive Neuroscience.

Trying Again and Again is not Sisyphean

centaur 0
Loosely transcribed from a letter to a friend. Names have been variablized to protect the innocent:

Dude, it's been over a year since you applied at The Search Engine That Starts With a G, and since then you've created, all by yourself, a brand new, polished web site with no doubt N users and X, Y and Z impressive features. Time to update the resume and apply again?

I know you're frustrated that this venture didn't make it, but successful entrepreneurs are the ones that try, try and try again. During my time at The Search Engine That Started With an E, we were exposed to a variety of advisors who had started successful businesses. Most of them had started several, only a few of which took off. The ones that did made them millionaires. My uncle B is the same way: he's worked on many businesses; many failed, and the others did quite well. Come to think of it, when the dot-com bubble burst, the lead founder of The Search Engine That Started With an E didn't let its stumble stop him - he's started several other ventures since then. One of them will catch fire and make him a millionaire too.

I also had another thought. Stay with me here.

In the essay The Myth of Sisyphus, Albert Camus argues that just because the Greek hero Sisyphus is condemned for eternity to push a rock up a hill, only to watch it roll down again, that doesn't mean his life is actually devoid of hope. Camus argues that even though Sisyphus's task is meaningless, and the moment the rock falls down is heartwrenching, he nonetheless can be happy because he's engaged in a constant struggle ... "and that struggle is enough to fill a man's heart." In the book How to Be an Existentialist, author Gary Cox expands on Camus' argument: to an existentialist, everyone's life can be considered meaningless, and it's the constant struggle to exercise our freedom that itself brings meaning to life. In other words, the struggle has intrinsic value, just to us, whether we succeed or not.

But I am not an existentialist, and in the objective world we share, our tasks do not endlessly repeat. It does look like we live in a world where the rock will always roll back down - time and entropy conquer all - but sometimes the rocks stay at the top of the hill a long, long time. Longer than the allotted time we have to push rocks up the hill, sometimes; sometimes the rock stays up even when we're the ones that slip and fall away. It is, in short, possible to succeed. It's possible to build something that lasts ... but what if we don't?

Well, even if we don't, I am still not an existentialist, and in the objective world we share, our burdens are not unique to ourselves. There are many other people pushing rocks, and it brings comfort to know others are struggling. There are many other hills - sometimes, they even look like the same hill - and it can ease others' paths to know which parts of the slope are better. That is, not only does the struggle have intrinsic value, above and beyond the possibility of leading to a reward, our reports about the struggle also have extrinsic value - value to others who are fighting the same struggle we are. Keeping our struggle to ourselves is noble; sharing it with others is valuable. Perhaps, even, something that could lead to a reward.

What if ... I know it is too late for this for the work you did over the last year, but imagine ... what if you had a blog, and every week blogged about your experience finding and overcoming development / product / business challenges for Company X? Yes, I know there are millions of blogs, and yes, I know most of them are drek. But they're not what I'm talking about: I'm talking about your blog, your experiences, your wisdom. Imagine, if you'd been doing that from the ground up, talking about your experiences, passing on your wisdom - it might start to build a name that you could turn into a career. At the very least, it would be another point of reference for your resume.

Seriously, I've learned from you how to use technology X to design web sites, and I've benefited from development platform Y that you pointed out to me. And I've been doing this for years. If I could learn from you, don't you think other people could too? Everything you're doing might be a building block in the next big thing. I know it's trite to say that many great companies have started in garages ... but how much copy has been written sharing those stories? How much have you benefited from learning how others have done things? How much can other people learn from you?

How big can you think?

-the Centaur

The Future Will Work

centaur 0


I've seen and heard a lot of craziness lately. It's making smart people say very stupid things. Look, I know these are trying times. Depression. Layoffs. Earthquakes. A Republican elected to Ted Kennedy's seat (just kidding). A Kenyan in the White House (even more kidding). Global warming hysteria / denialism. Cats and dogs living together: mass hysteria.

But be not afraid.

Humanity and the Earth have been through this before. Depression? We survived the Dark Ages. Layoffs? We've survived the collapse of industries and even civilizations. Earthquakes? We survived Pompeii and Krakatoa. Political shifts? God save the queen, we don't need her any more, and we even survived Communism. And global warming? Once the entire ocean became an algal bloom and almost everything alive died - and we're still here.

We can fix the atmosphere by taking measures that won't ruin the economy in case global warming is wrong and that will start us down the path to fixing it in case global warming is right. We can live with political changes and shifts and learn from the battle. And we can build a better world by recognizing that there are things wrong here and now that need fixing, and fixing them - while remembering human nature will always be with us.

Stop scaring yourself with imagined fears born from the latest crisis. Take a deep breath and look back through time. Look at all we've been through. Look at all the disasters that, too, have passed. And look at all we've accomplished. Sometimes it took great vision and immense amounts of hard work, but, praise God, he really does help those who help themselves.

The future will work. You can count on it. If you're willing to make it happen.

-the Centaur

The easiest way to ruin a poem

centaur 0
The easiest way to ruin a poem
is to read it like a poem
with stilted voice and stately oration
designed to show the poet's construction
- poetry, as read by "poets"
who learned in English class that
"poetry is the highest form of language."

I do not agree.

Poetry is distilled emotion,
concentrated essence of the darlings a novelist must murder,
packaged up with that punch that took Emily Dickinson's head off.

Poems should be read
as if by Robert Frost's neighbor,
with sinewy hands moving rocks through the darkness,
springing forth to hurl them through our defensive walls:
the poet as savage.

Poetry should be many things:
inspiring, depressing,
comforting, enlightening,
homespun, heartwrenching.
It should never be safe.