My internet has been flaky, so I chatted with an AT-AT Druid online about it and they unexpectedly had a free repair tech slot the next morning. Send them? Yeah baby yeah!
After some kerfuffle with the confirmation, we got it scheduled and they showed up at 8:30 this morning only to find ...
The internet box half ripped off the house and the beginnings of what looked like a squirrel's nest in it.
Remember, folks, step one of network debugging is to check layer one of the stack: your physical equipment. "Your wires are loose" is the network equivalent of "Ain't got no gas in it" from Sling Blade.
So, hopefully, regular blogging will resume soon. Till then, enjoy this lovely blog post thumb-crafted on my phone.
Sometimes when I'm behind I shoot for a relatively minimal breakfast: a grapefruit or half pummelo, some toast, maybe some grits or vegan yogurt. I enjoy breakfast, even though I don't generally eat a full three meals a day: for some reason, since I've been out on my own, I've gravitated to two full meals (brunch and dinner) and the occasional midnight snack of milk and pound cake if I'm not too full.
But the "read and eat" ritual remains important, whether I do it two or three times a day. Unless I'm eating with others, or am in the middle of some absolute emergency, I always have a book with me when I eat --- to the point that I have a stand set up to read at the breakfast table. The current top-of-the-stack books are "Pattern Recognition and Machine Learning" for the late-night reads and "Unmasking Autism" for the daily reader (along with "GANs in Action" for a project at work, and various books for writing reference).
Even if your meals are quick and minimal, you can read a few paragraphs while you eat, and hopefully enjoy it. And, if you're persistent, you can get through enormous books this way ... like "A New Kind of Science" or "Machine Vision" or "Probability Theory: The Logic of Science", three long books that I ate, one bite at a time, mostly over breakfast and midnight snacks, a page or even a paragraph at a time, until, at long last, one more mountain was climbed.
-the Centaur
Pictured: Half a pummelo, two slices of toast, and "Unmasking Autism".
When you've got a lot to do, sometimes it's tempting to just "power through it" - for example, by extending a meeting time until all the agenda items are handled. But this is just another instance of what's called "hero programming" in the software world, and while sometimes it's necessary (say, the day of a launch) it isn't a sustainable long-term strategy, and will incur debts that you can't easily repay.
Case in point, for the Neurodiversiverse Anthology, my coeditor and I burned up our normally scheduled meeting discussing, um, scheduling with the broader Thinking Ink team, so we added a spot meeting to catch up. We finalized the author and artist contracts, we developed guidance for the acceptance and rejection letters, and did a whole bunch of other things. It felt very productive.
But, all in all, a one hour meeting became three and a half, and I ended up missing two scheduled meetings because of that. The meetings hadn't yet landed on the calendar - one because we were still discussing it via email, and the other because it was a standing meeting out of my control. But because our three and a half hour meeting extended over the time we were supposed to follow up and set the actual meeting time, we never set that time, and when I was playing catch up later that evening, I literally spaced on what day of the week it was, and didn't notice the other meeting had started until it was over.
All that's on me, of course - it's important to put stuff on the calendar as soon as possible, including standing meetings, even if the invite is only for you, and I have no-one else to blame for that broken link in the chain. And both I and my co-editor agreed to (and wanted to) keep "powering through it" so we didn't have to schedule a Saturday meeting. But, I wonder: did my co-editor also have cascading side effects due to this longer meeting? How was her schedule impacted by this?
Overall, this is an anthology, and book publishing has long, unexpectedly complex, and tight schedules: if we don't push to get the editing done ASAP, we'll miss our August publishing window. But it's worth remembering that we need to be kind to ourselves and realistic about our capabilities, or we'll burn out and still miss our window.
That happened to me once in grad school - on what I recall was my first trip to the Bay Area, in fact. I hadn't gotten as much done on my previous internship, and started trying to "power through it" to get a lot done from the very first week, putting in super long hours. I started to burn out the very first weekend - I couldn't keep the pace. Nevertheless, I kept trying to push, and even took on new projects, like the first draft of the proposal for the Personal Pet (PEPE) robotic assistant project.
In one sense, that all worked out: my internship turned into a love of the Bay Area, where I lived for ~16 years of my life; the PEPE project led to another internship in Japan, to co-founding Enkia, to a job at Google, and ultimately to my new career in robotics.
But, in another sense, it didn't: I got RSI from a combination of typing every day for work, typing every night for the proposal, and blowing off steam playing video games when done. I couldn't type for almost nine months - right during the writing of my PhD thesis, which I could not put on hold - and had to learn to write with my left hand. I was VERY lucky: I know other people from grad school with permanent wrist damage.
"Powering through it" isn't sustainable, and while it can lead to short-term gains and open long-term doors, can lead to short-term gaffes and long-term (or even permanent) injuries. That's why it's super important to figure out how to succeed at what you're doing by working at a sustainable pace, so you can conserve your "powering through it" resources for the times when you're really in the clinch.
Because if you don't save your resources for when you need them, you can burn yourself out along the way, and still fail despite your hard work - perhaps walking away with a disability as a consolation prize.
-the Centaur
Pictured: Powering through taking a photograph doesn't work that well, does it?
I've learned a lot about neurodiversity in the past months - first, after having the crazy idea of launching yet another anthology, this one about neurodivergent people encountering aliens, and second, after coming to grips with my own neurodivergence (social anxiety disorder with perhaps a touch of undiagnosed autism). We want The Neurodiversiverse Anthology to land well with its intended audience, and need to get it right!
But it struck me that there's a lot of unhelpful cross-stereotyping between autistic folks and nerd and geek culture. Sure, there are autistic people who become intensely interested in "special topics", but sometimes that special topic is a sport or other "socially acceptable" activity, making it easier for autistic people to mask. And as Devon Price points out in his book Unmasking Autism, autistic people have specific bottom-up processing styles which are different from the top-down, "allistic" style of so-called "neurotypical" people. So just being obsessed with a special topic doesn't make you autistic, nor vice versa.
In fact, speaking as a proud member of "nerd" and "geek" culture, my social group had our own definitions of what "nerd" and "geek" meant, which indicated a difference in thinking styles, but didn't necessarily map to an actual neurodivergence. Geekdom in particular meant a certain kind of out-of-the-box thinking that doesn't align with what I read about the processing styles of autistic folks - not to say that these styles couldn't overlap, or even frequently co-occur, but that "geek" had its own meaning.
That made me think back on conversations with a friend who was once called a "geek" by someone who meant it as an insult. His response? "Yes, I am - and you're not. Ha, ha, ha!" To him, it was a badge of honor, as it signified a deeper understanding of certain systems of the world and a different way of thinking - not neurodivergent, per se, but just different. We had a long conversation about different words and their nuances, and it led me to think about how these words have lurking meanings in my head.
So here's my attempt to unpack that terminology a little bit:
Nerds: A nerd is someone who has strong interests that someone else finds socially unacceptable. Calling someone a nerd says way, way more about the source than the target: it's a group identification play, designed to ostracize the person who's not into the currently approved interests. Now, to some folks, nerd can mean someone who is "socially awkward" - the stereotype is big glasses, pocket protectors, and high-pitched voices - but, really, that's just stereotyping, as judgmental people can and will ret-con someone into being a "nerd" as soon as they find out they're into something that isn't "cool."
Geeks: A geek is someone who uses out-of-the-box thinking to build up expertise in a given topic. Geeks can geek out about anything from computers to philosophy to football, just like their close cousins, "fans". But unlike "fans", a geek's expertise is weaponized. Great fictionalized examples are the protagonists of the movie Moneyball, loosely based on a couple of real-life geeks who used their deep knowledge of baseball and statistics to turn around the Oakland A's. This is what my buddy meant when he said "Yes, I'm a geek, and you're not: ha ha ha!" - geekdom is something to be celebrated.
Wonks: A wonk is a geek about public policy. Al Gore is the quintessential wonk. Wonks tend to be paid lots of money to run very complicated systems in the public policy arena, though they don't tend to do quite as well when running for election. Perhaps voters mistake them for nerds.
Cranks: A crank is a geek about a nonstandard scientific theory. Typically cranks are smart, well-educated people with a large body of perfectly normal beliefs, who become convinced of some off-the-wall theory that they've encountered in their broad reading or developed through their out-of-the-box thinking. Unfortunately for many scientists, cranks want to geek out with other science geeks about their theories, which can go badly when scientists try to explain all the ways their ideas don't work. I remember one fellow getting angry with me when I was trying to agree with him that his theory was possible - but had to point out that one of his claims was stated more strongly than the evidence supported. I wasn't even saying he was wrong, just that scientists need to be careful about their claims. The conversation did not go well.
Nutter: A nutter is a crank who has warped his view of reality to fit his nonstandard theory. For example, once a fellow attempted to cajole me into coming to work for his "company" where he was working on a "warp drive" (and no, I'm not joking). Now, I know a thing or two about the actual science behind so-called "warp drives", and this guy wasn't talking about his project in any way that convinced me he knew what he was talking about. I politely declined on the grounds that I was a very busy author and roboticist and preferred to spend my time bringing my own projects to fruition, and he proceeded to tell me how if I saw his plans for the flying saucer he was trying to build I'd abandon my own projects in favor of his. I did not.
Genius: A genius is a nutter who warps reality to fit his nonstandard theory. Fun fact: reality was classical before Einstein invented relativity, and light was just an electromagnetic field before Richard Feynman invented path integrals and showed that photons really go everywhere all at once. More seriously, a genius applies his out-of-the-box thinking at a very deep level, geeking out about all of reality. To some people, geniuses look like nutters ... and you never really do know which one you've got when a nervous looking man steps up to your front porch holding only a suitcase and says, "My brain is open." Turn him away, and you get nothing; take him in and help him tackle his questions, and you get an Erdős number.
So one point I'm trying to make here is that nerding out about something can take you places. Sometimes it takes you to a deep understanding of a subject matter, which sometimes makes people uncomfortable; sometimes that turns out to be very lucrative, and sometimes that turns out to be ostracizing. But, even then, sometimes the people we think are the nuttiest turn out to be the most brilliant people.
But another point I'm trying to make is that nothing about geeking out really has anything to do with neurodivergence - it's a pattern of behavior which occurs in neurodivergent and neurotypical people alike. Perhaps an autistic person might geek out about something, or perhaps they might not. Perhaps a geek might have autistic tendencies, or perhaps they might not. Perhaps some of these traits are often found together - but even if that co-occurrence is real, collapsing these different ways of looking at people into a single all-encompassing category is unnecessary stereotyping, and it can distract us from looking sincerely at the unique and whole human beings we are interacting with.
Or, put another way, if you know one autistic person, you know one autistic person, and if you know one geek, you know one geek, and there's no guarantee that knowing one tells you much about the other.
Editors have superpowers, but you can't save everybody.
One of Ayn Rand's most useful distinctions for writers is between abstractions and the concretes that realize them. She's obviously not the only person to employ such a distinction, but if you think of abstractions as representations of a set of concretes, it helps you realize that you cannot portray pure abstractions like justice or injustice: you need to show the abstraction in concrete actions to communicate it. For example, the theme of your story may be "the mind on strike" but it must be realized using a set of concrete characters and events that (hopefully) illustrate that theme.
Once you've decided on an abstract theme, it can help you ruthlessly cull unnecessary concretes from your story, or to flesh the theme out to fit the concretes that you do have, or both. The same is true for editing anthologies, only with a little less flexibility as we don't completely control the submitted stories. For example, the Neurodiversiverse's theme is "neurodivergent folks encountering aliens", and if we get a story that does not feature neurodivergent folks, aliens, or encounters, we are not in the position of a writer who can tweak the themes or their realization until they both fit: we have to just reject off-topic stories.
But, as my coeditor and I like to say, editors have superpowers. There's more than one story in the anthology where we've been able to suggest edits - based on the theory of conflict, or the major dramatic question ("who wants what, why can't they get it, what do they do about it, and how does it turn out"), or even just line edits - that would resolve the problems in the story to the point that we'd go from a reject to an accept - if the author goes along with the changes, that is.
But sometimes we can't even do that. There have been several stories where we applied our editing superpowers and drafted a way to fix the story to fit our theme - but where we, reluctantly, passed on the story anyway, because we were no longer convinced that the edited story would be what the author intended. If a story was way off the anthology's theme, but its own theme was really integral to its implementation, then changing the text to fit the anthology may not have suited the story.
In the end, despite our editorial superpowers, we can't "save" all stories, because not all stories NEED saving: some of them may not be right for this particular project ... and that's OK.
-the Centaur
Pictured: A nice heritage indoor mall in Asheville, which is a great writing town.
One of the most frustrating things about reading the philosophy of Ayn Rand is her constant evasion of reality. Rand's determinedly objective approach is a bracing blast of fresh air in philosophy, but, often, as soon as someone raises potential limits to a rational approach - or even in the cases where she imagines some strawman might raise a potential limit - she denies the limit and launches into unjustified ad hominems.
It reminds me a lot of "conservative" opponents of general relativity - which, right there, should tell you something, as an actual political conservative should have no objections to a hundred-and-twenty-year-old, well-tested physical theory - who are upset because it introduces "relativism" into philosophy. Well, no, actually: Einstein considered calling relativity "invariant theory," because the deep guts of the theory are actually a quest to formulate physics in terms that are invariant between observers, like the space-time interval ds^2, which is the same no matter how the observers are moving relative to each other.
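For the record, that interval is just the standard textbook expression (nothing Rand-specific about it), written here in terms of the time and space separations a single observer measures between two events:

    ds^2 = -c^2 dt^2 + dx^2 + dy^2 + dz^2

Two observers in relative motion will disagree about the dt and the dx between those events, but they always compute the same ds^2 - which is why "invariant theory" would arguably have been the better name.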
In Rand's case, she and Peikoff admit up front in several places that human reason is fallible and prone to error - but as soon as a specific issue is raised, they either deny that failure is possible or claim that critics are trying to destroy rationality. Among the things they claim as infallible products of reason are notions such as existence, identity, and consciousness, deterministic causality, the infallibility of sense perception, the formation of concepts, reason (when properly conducted), and even Objectivism itself.
In reality, all of these things are fallible, and that's OK.
Our perception of what exists, what things are, and even aspects of our consciousness can be fooled, and that's OK, because a rational agent can construct scientific procedures and instruments to untangle the difference between our perception of our phenomenal experience and the nature of reality. Deterministic causality breaks down in our stochastic world, but we can build more solid probabilistic and quantum methods that enable us to make highly reliable predictions even in the face of a noisy world. Our senses can fail, but there is a rich library of error-correcting methods, both in natural systems and in robotics, that help us recover reliable information that is useful enough to act upon with confidence.
As for the Objectivist theory of concepts, it isn't a terrible normative theory of how we might want concepts to work in an ideal world, but it is a terrible theory of how concept formation actually works in the real world, either in the human animal or in how you'd build an engineering system to recognize concepts - Rand's notion of "non-contradictory identification" would in reality fail to give any coherent output in a world of noisy input sensors, and systems built on ideas like Rand's were supplanted by techniques such as support vector machines long before we got neural networks.
And according to Gödel's theorem and related results, reasoning itself must either be incomplete or inconsistent - and evidence of human inconsistency abounds in the cognitive science literature. But errors in reasoning itself can be handled by Pollock's notion of "defeasible" reasoning or Minsky's notion of "commonsense" reasoning, and as for Objectivism itself being something that Rand got infallibly right ... well, we just showed how well that worked out.
Accepting the limits of rationality that we have discovered in reality is not an attack on rationality itself, for we have found ways to work around those limits to produce methods for reaching reliable conclusions. And that's what's so frustrating reading Rand and Peikoff - their attacks on strawmen weaken their arguments, rather than strengthening them, by both denying reality and denying themselves access to the tools we have developed over the centuries to help us cope with reality.
"There's no individualist so rugged they were born being able to change their own diaper." That's a quote from a story in progress that I thought was good enough to hoist up into the blog, just in case it turns into a "darling" and I need to cut it. The point is not to be against individualism - our world is better if most people are capable of pulling their own weight most of the time - but that none of us, literally none of us, are truly autogenic: self-made men who pulled themselves up by their own bootstraps.
You cannot fake reality in any way whatsoever: No matter how rugged an individualist is, no matter how much a person has made with how little, there was a point in their life where they could not clothe themselves, feed themselves, or change their own diaper. And yet we've cultivated a mythos in this country that deifies the self-made individual to the point where it has become fetishized - and signaled through purchases and actions, as in the residential construction worker who bought that huge truck, parked it on our grass in the rain, and proceeded to rut up our lawn and track our driveway with mud on the way out. Not even the neighbors doing that construction want this to happen - but it keeps happening, as this patch of our driveway is just out of sight from the office where I work, and we don't often catch them.
In contrast, we have no problem working with our neighbors across the street. When a package was mis-delivered due to a missed digit, I could have kept it, or mailed it back (to Ohio!) with the note "No Such Person At This Address". But I took a few minutes to find the intended recipients via the phone number printed on the package, and we quickly worked out that they were a short walk away. On the way out the next day, I dropped the package off, hidden slightly behind their porch columns so it wasn't visible from the road. Working together, we made sure they got their package quickly without it having to be shipped halfway across the country.
I'm all in favor of individualism, even the rugged kind. But we shouldn't fetishize it to the point that we run roughshod over each other, or pretend that other people aren't there or don't matter - we should work together to make sure we have the best world possible.
-the Centaur
Pictured: a construction truck, for which our responsible neighbor apologized - yet once every week or two, the construction trucks creep back onto our land when they think I'm not looking, leading to torn up grass as in the second picture; also pictured, the package I left for our neighbor, rather than shipping it back.
I sure do love color, but I suck at recognizing it - at least in the same way that your average person does. I'm partially colorblind - and I have to be quick to specify "partial", because otherwise people immediately ask if I can't tell red from green (I can, just not as well as you) or can't see colors at all.
In fact, sometimes I prefer to say "my color perception is deficient" or, even more specifically, "I have a reduced ability to discriminate colors." The actual reality is a little more nuanced: while there are colors I can't distinguish well, my primary deficit is not being able to NOTICE certain color distinctions - certain things just look the same to me - but once the distinctions are pointed out, I can often reliably see them.
This is a whole nother topic on its own, but, the gist is, I have three color detectors in my eyes, just like a person with typical color vision. Just, one of those detectors - I go back and forth between guessing it's the red one or the green one - is a little bit off compared to a typical person's. As one colleague at Google put it, "you have a color space just like anyone else, just your axes are tilted compared to the norm."
The way this plays out is that some color concepts are hard for me to name - I don't want to apply a label to them, perhaps because I'm not consistently seeing people use the same name for those colors. There's one particular nameless color, a particularly blah blend of green and red, that makes me think if there were more people like me, we'd call it "gred" or "reen" the way typical people have a name for "purple".
Another example: there's a particular shade of grey - right around 50% grey - that I see as a kind of army green, again, because one of my detectors is resonating more with the green in the grey. If the world were filled with people like me, we'd have to develop a different set of reference colors.
I'm not so certain I'd go the second route. Speaking as someone who's been formally diagnosed "chromodivergent" (partially red-green colorblind) and is probably carrying around undiagnosed "neurodivergence" (social anxiety disorder with possibly a touch of "adult autism"), I think there's some value to recognizing some degree of "typicality" and "norms" to help us understand conditions.
If you had a society populated with people with color axes like me and another society populated with "chromotypical" receptors, both societies would get on fine, both with each other and the world; you'd just have to be careful to use the right set of color swatches when decorating a room. But a person with a larger chromodivergence - say, someone who was wholly red-green colorblind - might be less adaptive than a chromotypical person - say, because they couldn't tell when fruit was ripe.
Nevertheless, even if some chromodivergences or neurodivergences might be maladaptive in a non-civilized environment, prioritizing the "typical" can still lead to discrimination and ableism. For those who don't understand "ableism", it's a discriminatory behavior where "typical" people de-personalize people with "disabilities" and decide to make exclusionary decisions for them without consulting them.
There are great artists who are colorblind - for example, Howard Chaykin. There's no need to discourage people who are colorblind from becoming artists, or to prevent them from trying: they can figure out how to handle that on their own, hiring a colorist or specializing in black-and-white art if they need to.
All you need to do is to decide whether you like their art.
-the Centaur
Pictured: some colorful stuff from my evening research / writing / art run.
As both Ayn Rand and Noam Chomsky have said in slightly different ways, concepts and language are primarily tools of thought, not communication. But cognitive science has demonstrated that our access to the contents of our thought is actually relatively poor - we often have an image of what is in our head which is markedly different from the reality, as in the case where we're convinced we remember a friend's phone number but actually have it wrong, or have forgotten it completely.
One of the great things about writing is that it forces you to turn these abstract ideas about our ideas into concrete realizations - that is, you may think you know what you think, but even if you think about it a lot, you don't really know the difference between your internal mental judgments about your thoughts and their actual reality. The perfect example is a mathematical proof: you may think you've proved a theorem, but until you write it down and check your work, there's no guarantee that you actually HAVE a proof.
So my recent article on problems with Ayn Rand's philosophy is a good example. I stand by it completely, but I think that many of my points could be refined considerably. I view Ayn Rand's work with regard to philosophy the way that I do Euclid for mathematics or Newton for physics: it's not an accurate model of the world, but it is a stage in our understanding of the world which we need to go through, and which remains profitable even once we go on to more advanced models like non-Euclidean geometry or general relativity. Entire books are written on Newtonian approximations to relativity, and one useful mathematical tool is a "Lie algebra", which enables us to examine even esoteric mathematical objects by looking locally at the Euclidean tangent space generated around a particular point.
So it's important not to throw the baby out with the bathwater with regard to Ayn Rand, and to be carefully specific about where her ideas work and where they fail. For example, there are many, many problems with her approach to the law of identity - the conceptual idea that things are what they are, or A is A - but the basic idea is sound. One might say it almost approaches tautology, except for the fact that many people seem to ignore it. However, you cannot fake reality in any way whatever - and you cannot make physical extrapolations about reality through philosophical analysis of a conceptual entity like identity.
Narrowing in on a super specific example, Rand tries to derive the law of causality from the law of identity - and it works well, right up until the point where she tries to draw conclusions about it. Her argument goes like this: every existent has a unique nature due to the law of identity: A is A, or things are what they are, or a given existent has a specific nature. What happens to an existent over time - the action of that entity - is THE action of THAT entity, and is therefore determined by the nature of that entity. So far, so good.
But then Rand and Peikoff go off the rails: "In any given set of circumstances, therefore, there is only one action possible to an entity, the action expressive of its identity." It is difficult to grasp the level of evasion which might produce such a confusion of ideas: to make such a statement, one must throw out not just the tools of physics, mathematics and philosophy, but also personal experience with objects as simple as dice.
First, the evasion of personal experience, and how it plays out through mathematics and physics. Our world is filled with entities which may produce one action out of many - not just entities like dice but even, from Rand and Peikoff's own examples, a rattle, which makes a different sound every time you shake it. We have developed an entire mathematical formalism to help understand the behavior of such entities: we call them stochastic and treat them with the tools of probability. As our understanding has grown, physicists have found that this stochastic nature is fundamental to the nature of reality: the rules of quantum mechanics essentially say that EVERY action of an entity is drawn from a probability distribution, but for most macroscopic actions this probabilistic nature gets washed out.
Next, the evasion of validated philosophical methods. Now, one might imagine Rand and Peikoff saying, "well, the roll of the dice is only apparently stochastic: in actuality, the die, when you throw it, is in a given state, which determines the single action that it will take." But this is a projective hypothesis about reality: it is taking a set of concepts, determining their implications, and then stating how we expect those implications to play out in reality. Reality, however, is not required to oblige us. This form of philosophical thinking goes back to the Greeks: the notion that if you begin with true premises and proceed through true inference rules, you will end up with a true conclusion. But this kind of philosophical thinking is invalid - does not work in reality - because any one of these elements - your concepts, your inference rules, or your mapping between conclusions and states of the world - may be specious: appearing to be true without actually reflecting the nuance of reality. To fix this problem, the major achievement of the scientific method is to replace "if you reach a contradiction, check your premises" with "if you reach a conclusion, check your work" - or, in the words of Richard Feynman, "The sole test of any idea is experiment."
Let's get really concrete about this. Rand and Peikoff argue "If, under the same circumstances, several actions were possible - e.g., a balloon could rise or fall (or start to emit music like a radio, or turn into a pumpkin), everything else remaining the same - such incompatible outcomes would have to derive from incompatible (contradictory) aspects of the entity's nature." This statement is wrong on at least two levels, physical and philosophical - and much of the load-bearing work is in the suspicious final dash.
First, physical: we actually do indeed live in a world where several actions are possible for an entity - this is one of the basic premises of quantum mechanics, which is one of the most well-tested scientific theories in history. For each entity in a given state, a set of actions is possible, governed by a probability amplitude over those states: when the entity interacts with another entity in a way that destroys the superposition, the probability amplitude collapses into a probability distribution over the actions, one of which is "observed". In Rand's example, the balloon's probability amplitude for rising is high, falling is small, emitting radio sounds is still smaller, and turning into a pumpkin is near zero (due to the vast violation of conservation of mass).
If one accepts this basic physical fact about our world - that entities that are not observed exist in a superposition of states governed by probability amplitudes, and that observations involve probabilistically selecting a next state from the resulting distribution - one can create amazing technological instruments and extraordinary scientific predictions - lasers and integrated circuits and quantum tunneling and prediction of physical variables with a precision of twelve orders of magnitude - a little bit like measuring the distance between New York and Los Angeles with an error less than a thousandth of an inch.
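To sanity-check that analogy (using a rough straight-line distance of about 2,450 miles from New York to Los Angeles - the exact figure doesn't matter much):

    2,450 miles x 63,360 inches/mile ≈ 1.6 x 10^8 inches
    0.001 inch / 1.6 x 10^8 inches ≈ 6 x 10^-12

That's a relative error of a few parts in a trillion - right around twelve orders of magnitude.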
But Rand's statement is also philosophically wrong, and it gets clearer if we take out that distracting example: "If, under the same circumstances, several actions were possible, such incompatible outcomes would have to derive from incompatible aspects of the entity's nature." What's wrong with this? There's no warrant to this argument. A warrant is the thing that connects the links in a reasoning chain - an inference rule in a formal system, or a more detailed explanation of the reasoning step in question.
But there is no warrant possible in this case, only a false lurking premise. The erroneous statement is that "such incompatible outcomes would have to derive from incompatible aspects of the entity's nature." Why? Why can't an entity's nature be to emit one of a set of possible actions, as in a tossed coin or a die? Answer: Blank out. There is no good answer to this question, because there are ready counterexamples from human experience, which we have processed through mathematics, and ultimately determined through the tools of science that, yes, it is the nature of every entity to produce one of a set of possible outcomes, based on a probability distribution, which itself is completely lawlike and based entirely on the entity's nature.
You cannot fake reality any way whatever: this IS the nature of entities, to produce one of a set of actions. This is not a statement that they are "contradictory" in any way: this is how they behave. This is not a statement that they are "uncaused" in any way: the probability amplitude must be non-zero in a space in order for an action to be observed, and it is a real physical entity with energy content, not merely a mathematical convenience, that leads to the observation. And it's very likely not sweeping under the rug some hidden mechanism that actually causes it: while the jury is still out on whether quantum mechanics is a final view of reality, we do know due to Bell's theorem that there are no local "hidden variables" behind the curtain (a theorem that had been experimentally validated as of the time of Peikoff's book).
So reality is stochastic. What's wrong with that? Imagine a corrected version of Ayn Rand's earlier statement: "In any given set of circumstances, therefore, there is only one type of behavior possible for an entity, the behavior expressive of its identity. This behavior may result in one of several outcomes, as in the rolling of a die, but the probability distribution over that set of outcomes is the distribution that is caused and necessitated by the entity's nature." Why didn't Peikoff and Rand write something like that?
We have a hint in the next few paragraphs: "Cause and effect, therefore, is a universal law of reality. Every action has a cause (the cause is the nature of the entity that acts); and the same cause leads to the same effect (the same entity, under the same circumstances, will perform the same action). The above is not to be taken as a proof of the law of cause and effect. I have merely made explicit what is known implicitly in the perceptual grasp of reality." That sounds great ... but let's run the chain backwards, shall we?
"We know implicitly in the perceptual grasp of reality a law which we might explicitly call cause and effect. We cannot prove this law, but we can state that the same entity in the same circumstances will perform the same action - that is, the same cause leads to the same effect. Causes are the nature of the entities that act, and every action has a cause. Therefore, cause and effect is a universal law of reality."
I hope you can see what's wrong with this, but if you don't, I'm agonna tell you, because I don't believe in the Socratic method as a teaching tool. First and foremost, our perceptual grasp of reality is very shaky: massive amounts of research in cognitive science reveal a nearly endless list of biases and errors, and the history of physics has been one of replacing erroneous perceptions with better laws of reality. One CANNOT go directly from the implicit knowledge of perceptual reality to any actual laws, much less universal ones: we need experiment and the tools of physics and cognitive science to do that.
But even from a Randian perspective this is wrong, because it is an argument from the primacy of consciousness. One of the fundamental principles of Objectivist philosophy is the primacy of existence over consciousness: the notion that thinking a thing does not make it so. Now, this is worth a takedown of its own - it is attempting to draw an empirically verifiable physical conclusion from a conceptual philosophical argument, which is invalid - but, more or less, I think Rand is basically right that existence is primary over consciousness. Yet above, Rand and Peikoff purport to derive a universal law from perceptual intuition. They may try to call it "implicit knowledge" but perception literally doesn't work that way.
If they admit physics into their understanding of the law of causality, they have to admit you cannot directly go from a conceptual analysis of the axioms to universally valid laws, but must subject all their so-called philosophical arguments to empirical validation. But that is precisely what you have to do if you are working in ontology or epistemology: you MUST learn the relevant physics and cognitive science before you attempt to philosophize, or you end up pretending to invent universal laws that are directly contradicted by human experience.
Put another way, whether you're building a bridge or a philosophy, you can't fake reality in any way whatsoever, or, sooner or later, the whole thing will come falling down.
"If you do what you've always done, you'll get what you've always gotten," or so the saying goes.
That isn't always true - ask my wife what it's like for a paint company to silently change the formula on a product right when she's in the middle of a complicated faux finish that depended on the old formula's chemical properties - but there's a lot of wisdom to it.
It's also true that it's work to decide. When a buddy of mine and I finished 24 Hour Comic Day one year and were heading to breakfast, he said, "I don't want to go anyplace new or try anything new, because I have no brains left. I want to go to a Denny's and order something that I know will be good, so I don't have to think about it."
But as we age, we increasingly rely on past decisions - so-called crystallized intelligence, an increasingly vast but increasingly rigid collection of wisdom. If we don't want to get frozen, we need to continue exercising the muscle of trying things that are new.
At one of my favorite restaurants, I round-robin through the same set of menu items. But this time, I idly flipped the menu over to the back page I never visit and saw a burrito plate whose fillings were simmered in beer. I mean, what! And the server claimed it was one of the best things on the menu, a fact I can confirm.
It can be scary to step outside our circle. But if you do what you've always done, you'll miss out on opportunities to find your new favorite.
I've recently been having fun with a new set of "bone conduction" headphones, walking around the nearby forest while listening to books on tape [er, CD, er, MP3, er, streaming via Audible]. Today's selection was from Leonard Peikoff's Objectivism: The Philosophy of Ayn Rand. Listening to the precision with which they define concepts is wonderful - it's no secret that I think Ayn Rand is one of the most important philosophers that ever lived - but at the same time they have some really disturbing blind spots.
And I don't mean in the political sense in which many people find strawman versions of Rand's conclusions personally repellent, and therefore reject her whole philosophy without understanding the good parts. No, I mean that, unfortunately, Ayn Rand and Leonard Peikoff frequently make specious arguments - arguments that on the surface appear logical, but which actually lack warrants for their conclusions. Many of these seem to be tied to a desire to appear objective emotionally by demanding an indefensibly precise base for their arguments, rather than standing on the more solid ground of accurate, if fuzzier, concepts, which actually exist in a broader set of structures which are more objective than their naive pseudo-objective counterparts.
Take the notion that "existence exists". Peikoff explains the foundation of Ayn Rand's philosophy to be the Randian axioms: existence, identity, and consciousness - that is, there is a world, things are what they are, and we're aware of them. I think Rand's take on these axioms is so important that I use her words to label two of them in my transaxiomatic catalog of axioms: EE, "existence exists," AA, "A is A", and CC, where Rand doesn't have a catchy phrase, but let's say "creatures are conscious". Whether these are "true", in their view, is less important than that they are validated as soon as you reach the level of having a debate: if someone disagrees with you about the validity of the axioms, there's no meaningful doubt that you and they exist, that you're both aware of the axioms, and that they have a nature which is being disputed.
Except ... hang on a bit. To make that very argument, Peikoff presents a condensed dialog between the defender of the axioms, A, and a denier of the axioms, B, quickly coming to the conclusion that someone who exists, is aware of your opinions, and is disagreeing with their nature specifically by denying that things exist, that people are aware of anything, and that things have a specific nature is ... probably someone you shouldn't spend your time arguing with. At the very best, they're trapped in a logical error; at the worst, they're either literally delusional or arguing in bad faith. That all sounds good. But A and B don't exist.
More properly, the arguing parties A and B only exist as hypothetical characters in Peikoff's made-up dialog. And here's where the entire edifice of language-based philosophy starts to break down: what is existence, really? Peikoff argues you cannot define existence in terms of other things, but can only do so ostensively, by pointing to examples - but this is not how language works, either in day-to-day life or in philosophy, which is why science has abandoned language in favor of mathematical modeling. If you're intellectually honest, you should agree that Ayn Rand and Leonard Peikoff exist in a way that A and B in Peikoff's argument do not.
Think about me in relation to Sherlock Holmes. I exist in a way that Sherlock Holmes does not. I also exist in a way in which Arthur Conan Doyle does not. Sherlock Holmes himself exists in a way that an alternate version of Holmes from a hypothetical unproduced TV show does not, and I, as a real concrete typing these words, exist in a way that the generic idea of me does not. One could imagine an entire hierarchy of degrees of existence: from the absolute nothingness of the absence of a thing or concept, to contradictions in terms that could be named but do not exist, to hypothetical versions of Sherlock Holmes that do not exist, to Sherlock Holmes, who only exists as a character, to Arthur Conan Doyle, who once existed, to me, who existed as of this writing, to the concrete me writing this now, to existence itself, which exists whether I do or not.
Existence is what Marvin Minsky calls a "suitcase word": it's a stand-in for a wide variety of distinct but usefully similar concepts, from conceptual entities to physical existents to co-occurring physical objects in the same interacting region of space-time. And it's no good attempting to fall back on the idea that Ayn Rand was actually trying to define "existence" as the sum total of "existents", because pinning down "existence" or "existent" outside of an ostensive "I can point at it" definition is precisely what Rand and Peikoff don't want to do - first, because they really do mean it to be "everything", in almost precisely the same way that Carl Sagan uses the word "Cosmos" to refer to everything that ever is, was, or will be, and second, because if it loses its function as a suitcase word, it is no longer useful in their arguments.
In reality, if you say "existence exists", and someone attempts to contradict you, it does you no good to say "well, you're contradicting yourself, because you had to exist to even say that". You do need to actually put your money where your mouth is and say what concrete propositions you intend to draw from the terms "existence" and "exists" and the floating abstraction "existence exists" - and so do they. If you can't do this, you're not actually arguing with them; you're talking past them; if they can't do this, they're at best not arguing coherently, and at worst not arguing in good faith. If you both DO this, however, you may come to profitable conclusions, such as, "yes, we agree that SOMETHING exists, at least to the level where we had this debate; but we can also agree that the word existence should not extend to this unwanted implication."
This approach - reinforcing your axioms with sets of elaborations, models and even propositions that are examples of the axioms, along with similar sets that should be considered counterexamples - is what I call the "transaxiomatic" approach. Rather than simply assuming the axioms are unassailable and attempting to pseudo-define their terms by literally waving one's hand around and saying "this is what I mean by existence" - and simply hoping people will "get it" - we need to reinforce the ostensive concretes we use to define the axioms with more carefully refined abstractions that tell us what we mean when we use the terms in the axioms, and what propositions we hope people will derive from them.
This is part of an overall move from the philosophical way of tackling problems towards a more scientific one. And it's why I think Ayn Rand was, in a sense, both too early and too late. She's too early in the sense that many of the things she studied philosophically - ontology and epistemology - are no longer properly the domain of philosophy, but have been supplanted - firmly supplanted - by findings from science: ontology is largely subsumed into physics and cosmology, and epistemology is largely subsumed into cognitive science and artificial intelligence. That's not to say that philosophy is done with those areas, but instead that philosophy has definitively lost its primary position within them: one must first learn the science of what is known in those areas before trying to philosophize about it. One cannot meaningfully say anything at all about epistemology without understanding computational learning theory.
And she's too late in that she was trying to DO philosophy at a point in time when her subject matter was already starting to become science. Introduction to Objectivist Epistemology is an interesting book, but it was written a decade after "The Magical Number Seven, Plus or Minus Two" and two decades before the "Probably Approximately Correct" theory of learning, and you will learn much more about epistemology by looking up the "No Free Lunch" learning theorems and pulling on that thread than from anything Ayn Rand ever wrote (or try reading "Probability Theory: The Logic of Science" for a good one-volume starting point).
Which is not to say that Ayn Rand's philosophizing is not valuable - it is almost transcendently valuable - but if she were writing today, many of the more conceptually problematic structures of her philosophy could simply be dropped in favor of references to the rich conceptual resources of cognitive science and probability theory, and then she could have gotten on with convincing people that you can indeed derive "ought" from "is".
Or, maybe, just maybe, she might have done science in addition to philosophy, and perhaps even had something scientific to contribute to the great thread rolling forward from Bayes and Boole.
Existence does exist. But before you agree, ask, "What do you really mean by that?"
As a writer, it's important to have humility - no matter how enthusiastic you are about your work, there's no guarantee that it will land the way that you want it to with your readers. So I share my stories with "beta readers" who are, presumably, the kind of people who like to read what I want to write, and I use comments from beta readers to help me edit my stories before submitting them to editors or publishers.
I used to ask almost no questions of the beta readers BEFORE they read it, as I neither wanted to prejudice them about the story nor wanted to draw their attention to features that they might not have noticed. But, over time, I have started adding questions - perhaps in part because my research in social robot navigation exposed me to better ways to ask questions of people, and perhaps just through my own experience.
I settled on the following questions that I ask beta readers:
1. Is this the kind of story you like to read?
2. What did you like about it?
3. How could it be improved?
4. Would you like to read more stories in the same universe?
5. Is there anything that could be clarified to make it stand better alone?
6. Are there any questions that it raised that you'd love to see answered in another story?
The first three I think are generic to all stories, and are the ones that I started with:
First, if your story isn't the kind of story that your reader wants to read, their comments might not be about your story per se, but may actually be a subconscious critique of its genre, which can be actively misleading if you try to apply them to a story in that genre. I found this out the hard way when I gave The Clockwork Time Machine to someone who didn't like steampunk - many of their comments were just dissing the entire genre, and were useless for figuring out how to improve my particular story.
Second, it's important to know what people like about a story, so that you don't accidentally break those things in your edits. If one person dislikes something, but two others like it, you might be better off leaving that alone or gently tweaking it rather than just taking it out.
Third, no matter how big your ego is, you cannot see all the things that might be wrong with your story. (Unless you've won the Nobel Prize in literature or are a New York Times bestselling author, in which case, I especially mean you, because you've probably become uneditable). Fresh eyes can help you see what's wrong and where you could make it better.
But these questions weren't enough for someone who writes series fiction: my stories refer to a lot of background information, and set up ideas for other stories, yet should stand alone as individual stories:
Do you have a good vehicle? Have you set up a framework for telling stories that people are interested in? This goes beyond whether an individual story is satisfying, and to whether the setting and storytelling method itself are interesting.
Does your story stand alone? Are you pulling in backstory which is not adequately explained? This is information that should either be taken out, or woven into the story so it is load-bearing.
Does your story pull people in? Even if the story stands alone, you want it to either hint at questions to be answered in other stories or to answer questions from previous stories.
So far, these questions have worked well for me and my science fiction serial stories. Your mileage may vary, but I think that if you avoid asking anything specific about your story, and focus instead on the general functions your story should fulfill, you can profit a lot by asking beta readers these questions ahead of the read.
-the Centaur
Pictured: A gryphon made of books in a store window in Asheville.
"I'ma gonna get that bird in the mirror, I swear, this is my territory, I'll show him---BONK!" "Okay, this time for sure---BONK!" "Tenth time's the charm---BONK!"
Not even putting up a screen in front of the mirror has helped; our little friend just hopped down onto the stairs of the cat condo (that "table" is a cat condo with a re-purposed glass tabletop, to give one of our now-passed older cats a place to sit and see the stars while shielding him from the rain) and started bonking the lower section of the mirror.
There's no reasoning with some people.
-the Centaur
P.S. Yes, I am making a direct comparison of people whose political beliefs are built around their persecution by imaginary enemies to a bird not smart enough to recognize his own reflection, why?
We got a LOT of submissions for the Neurodiversiverse. Many were actually on topic! Some, however, despite being well written, were not. And we really want this anthology to follow its theme of empowering stories of neurodivergent people encountering mentally diverse aliens, so we're focusing on that - and we already know where we want to place several strong stories in the story sequence.
Cat, when it's raining: "Let me out! Let me out! But not this door, it's wet. Let's try another door. And another! Or another! I gotta get out! Just hold the door open until the rain stops!"
Also cat, when it is nice and sunny: "Who cares about going outside? Ima gonna havva nap."
-the Centaur
Pictured: the cat-shaped void, Loki, actually using his void-colored cat tree for once. Image taken in infrared bands and color enhanced by NASA to show surface detail.
I'm not confident about my ability to predict the future, but some things I can see coming. When people started moving towards using streaming services, I said it was only a matter of time until a large chunk of people lost the libraries that they paid for due to mergers and acquisitions - and it's started happening with Playstation owners losing chunks of their libraries. This is only going to get worse, as with streaming you don't "own" anything - you're just paying for the illusion that you'll be able to access the content you want.
And next, after Paramount canceled Star Trek: Discovery and booted Star Trek: Prodigy off their network and shuffled off the movies, I predicted Paramount would lose Star Trek altogether before I'd even watched all of the Star Trek in my subscription (which is why I got Paramount Plus, or whatever it's called this week). And, while I can't predict the future, this too is also being openly discussed.
Streaming as we know it isn't going away - it's too convenient for too many people. But it's also going to collapse as we know it, and things will appear to get worse before they get better. Overall, we may come out the other side with a stronger set of shows: there's a period of time I used to think of as "the dark age of sci-fi television", when Enterprise was struggling, Babylon 5 was canceled, and you'd be hard-pressed to find Andromeda on the airwaves; but the same period produced Battlestar and Firefly.
So don't give up hope, but don't think we'll avoid tectonic shifts.
One of the problems with computing is when it just gets ... foggy. Not when you're trying to do something hard, or when two pieces of software are incompatible, no. When things just sort of kind of don't work, and there are no known reasons that it's happening, and no reliable actions you can take to fix it.
Once this happened to me when I was working on a robotics device driver, and I realized the lidar itself was unreliable, so the only way to fix problems was to run each configuration ten times and keep average stats. Broken "worked" around ten percent of the time, whereas "fixed" worked around seventy percent of the time (approaching the rate at which the manufacturer's own software could connect to its own hardware).
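As a sketch of what that testing looked like (the connection call here is just a made-up stand-in for the real driver code, simulated as flaky):

    import random

    def connect_to_lidar() -> bool:
        # Hypothetical stand-in for the real driver call; simulated here
        # as succeeding about seventy percent of the time.
        return random.random() < 0.7

    def success_rate(trial, runs: int = 10) -> float:
        """Run a flaky operation several times and report the fraction that worked."""
        return sum(1 for _ in range(runs) if trial()) / runs

    if __name__ == "__main__":
        print(f"success rate over 10 runs: {success_rate(connect_to_lidar):.0%}")

When any single run is a coin flip, one success or failure tells you almost nothing; the averaged rate is the only signal you can trust.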
Today, I ran into a seemingly simple problem with Anaconda, a Python package / environment management system. Conda lets you corral Python and other software into "environments" with different configurations so that potentially incompatible versions can be used on the same computer (albeit, not at the same time). It even gives you a handy indication about which environment is in use in your command prompt, like so:
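It looks roughly like this - the username, machine, and command here are made up, with (ThatEnvironment) standing in for whatever environment you've activated:

    (ThatEnvironment) user@machine:~/project$ python train.py
    ... lots of output ...

    (ThatEnvironment) user@machine:~/project$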
There's a seemingly innocent blank line between (ThatEnvironment) and the previous line, yes? Not part of the standard Conda setup, but you can easily add it with a single line of configuration, changing the "env_prompt" to include an extra newline "\n" before printing the environment, like so:
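Something like this - the env_prompt line is the one that matters, and the channels entry is just a guess at what Conda added on its own:

    channels:
      - defaults
    env_prompt: \n({default_env})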
Yeah, that line at the end. "env_prompt: \n({default_env})". In a conda configuration - a .condarc, or "dot condarc" file - which is almost as simple as possible. I don't even think the "channels" bit is needed - I don't recall writing it, I think it just got added automatically by Conda. So this is almost the simplest possible change that you could make to your Conda configuration, done in almost the simplest possible way.
Except. It. Didn't. Take.
No matter what changes I made to the .condarc file, they didn't affect my Conda configuration. Why? I don't know. No matter what I did, nothing happened. I changed the prompt to all sorts of weird things to try to see if maybe my syntax was wrong, no dice. No amount of searching through manuals or documentation or Stack Overflow helped. I re-ran conda config, re-loaded my shell, rebooted my Ubuntu instance - nothing.
Finally, almost in desperation, I went back to my original version, and tried creating system-wide, then environment-specific configurations - and then the changes to the prompt started working. Thank goodness, I thought, and rebooted one more time, convinced I had solved the problem.
Except. It. Took. The. Wrong. Config.
Remember how I said I created a weird version just to see that it was working? Conda started reverting to that file and using it, even though it was several versions out of date. It actively started overwriting my changes - and ignoring the changes in the environment-specific configurations.
So, I blew away all the versions of the file - local, system and environment-specific - and re-created it, in its original location, and then it started to work right. In the end, what was the final solution?
I have no idea.
When I started working on the problem, I wanted Conda to do a thing - print an extra blank line so I could more easily see a command and its result, separate from the next command and result. And so I created a file in the recommended place with a line containing the recommended magic words ... and it didn't work. Then I hacked on it for a while, it sort of started working, and I backed out my changes, creating a file in the same recommended place with a line containing the same recommended magic words ... and it did work.
Why? Who knows! Will it keep working? Who knows! If it breaks again, how do I fix it? Who knows!
This is what I call "the fog". And it's the worst place to be when working on computers.
... it's still one of the worst feelings in the world to turn back the sheets at the end of a long day, only to realize you haven't blogged or posted your drawing. I had a good excuse yesterday - my wife and I were actually out at a coffeehouse, working on our art, when we had a sudden emergency and had to go home.
I had just finished my drawing and was about to snapshot it so I could post it, but instead threw the notebook into my bookbag, packed it up, and drove us home. Disaster was averted, fortunately, but the rest of the day was go-go-go, until finally, exhausted, I went to turn in and then went ... oh, shit. I didn't blog.
Fortunately, I didn't have to go back to the drawing board. But it did flip over to tomorrow while I was posting ... so, next day's post, here we come.
-the Centaur
Pictured: A jerky shot of me trying to document my wife's computer setup for reference.