
Posts tagged as “Blogging Every Day”

[twenty twenty-four day thirty-four]: chromodivergent and chromotypical

centaur 0

I sure do love color, but I suck at recognizing it - at least in the same way that your average person does. I'm partially colorblind - and I have to be quick to specify "partial", because otherwise people immediately ask if I can't tell red from green (I can, just not as well as you) or can't see colors at all.

In fact, sometimes I prefer to say "my color perception is deficient" or, even more specifically, "I have a reduced ability to discriminate colors." The actual reality is a little more nuanced: while there are colors I can't distinguish well, my primary deficit is not being able to NOTICE certain color distinctions - certain things just look the same to me - but once the distinctions are pointed out, I can often reliably see them.

This is a whole nother topic on its own, but, the gist is, I have three color detectors in my eyes, just like a person with typical color vision. Just, one of those detectors - I go back and forth between guessing it's the red one or the green one - is a little bit off compared to a typical person's. As one colleague at Google put it, "you have a color space just like anyone else, just your axes are tilted compared to the norm."
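To make that "tilted axes" metaphor concrete, here's a toy sketch - not a model of actual cone physiology, and the mixing weights and stimulus values are entirely invented for the demo - treating a color response as a 3-vector, one component per detector, where one of the divergent observer's detectors partially resonates with a neighboring channel:

```python
import numpy as np

# Toy illustration of "tilted" color axes. A typical observer's detectors
# respond independently (identity matrix); the divergent observer's second
# detector partly resonates with the first channel. All numbers invented.

typical = np.eye(3)  # detectors respond independently

anomalous = np.array([
    [1.0, 0.0, 0.0],
    [0.6, 0.4, 0.0],   # second detector partly resonates with the first channel
    [0.0, 0.0, 1.0],
])

# Two stimuli a typical observer tells apart easily (mostly-red vs mostly-green)
stim_a = np.array([0.9, 0.5, 0.2])
stim_b = np.array([0.5, 0.9, 0.2])

def separation(basis, a, b):
    """Distance between two stimuli as seen through a given set of axes."""
    return float(np.linalg.norm(basis @ a - basis @ b))

d_typical = separation(typical, stim_a, stim_b)
d_anomalous = separation(anomalous, stim_a, stim_b)
print(round(d_typical, 3), round(d_anomalous, 3))
```

Two stimuli that are well separated for the typical observer land much closer together for the anomalous one - which is something like the experience of not noticing a distinction until it's pointed out.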

The way this plays out is that some color concepts are hard for me to name - I don't want to apply a label to them, perhaps because I'm not consistently seeing people use the same name for those colors. There's one particular nameless color, a particularly blah blend of green and red, that makes me think if there were more people like me, we'd call it "gred" or "reen" the way typical people have a name for "purple".

Another example: there's a particular shade of grey - right around 50% grey - that I see as a kind of army green, again, because one of my detectors is resonating more with the green in the grey. If the world were filled with people like me, we'd have to develop a different set of reference colors.

SO, this made me think that, in parallel to the concepts of "neurotypical and neurodivergent", we could use concepts like "chromotypical and chromodivergent". Apparently I'm not the only one who thinks this: here's an artist who argues that "colorblind" can be discouraging to artists, and other people think we should drop the "typical" in "neurotypical", as it too can privilege certain neurotypes.

I'm not so certain I'd go the second route. Speaking as someone who's been formally diagnosed "chromodivergent" (partially red-green colorblind) and is probably carrying around undiagnosed "neurodivergence" (social anxiety disorder with possibly a touch of "adult autism"), I think there's some value to recognizing some degree of "typicality" and "norms" to help us understand conditions.

If you had a society populated with people with color axes like me and another society populated with "chromotypical" receptors, both societies would get on fine, both with each other and the world; you'd just have to be careful to use the right set of color swatches when decorating a room. But a person with a larger chromodivergence - say, someone who was wholly red-green colorblind - might be less adaptive than a chromotypical person - say, because they couldn't tell when fruit was ripe.

Nevertheless, even if some chromodivergences or neurodivergences might be maladaptive in a non-civilized environment, prioritizing the "typical" can still lead to discrimination and ableism. For those who don't understand "ableism", it's a discriminatory behavior where "typical" people de-personalize people with "disabilities" and decide to make exclusionary decisions for them without consulting them.

There are great artists who are colorblind - for example, Howard Chaykin. There's no need to discourage people who are colorblind from becoming artists, or to prevent them from trying: they can figure out how to handle that on their own, hiring a colorist or specializing in black-and-white art if they need to.

All you need to do is to decide whether you like their art.

-the Centaur

Pictured: some colorful stuff from my evening research / writing / art run.

[twenty twenty-four day thirty-three]: roll the bones


As Ayn Rand and Noam Chomsky have both said in slightly different ways, concepts and language are primarily tools of thought, not communication. But cognitive science has demonstrated that our access to the contents of our thought is actually relatively poor - we often have an image of what is in our head which is markedly different from the reality, as in the case where we're convinced we remember a friend's phone number but actually have it wrong, or have forgotten it completely.

One of the great things about writing is that it forces you to turn abstract ideas about your ideas into concrete realizations - that is, you may think you know what you think, but even if you think about it a lot, you don't really know the difference between your internal mental judgments about your thoughts and their actual reality. The perfect example is a mathematical proof: you may think you've proved a theorem, but until you write it down and check your work, there's no guarantee that you actually HAVE a proof.

So my recent article on problems with Ayn Rand's philosophy is a good example. I stand by it completely, but I think that many of my points could be refined considerably. I view Ayn Rand's work with regards to philosophy the way that I do Euclid for mathematics or Newton for physics: it's not an accurate model of the world, but it is a stage in our understanding of the world which we need to go through, and which remains profitable even once we go on to more advanced models like non-Euclidean geometry or general relativity. Entire books are written on Newtonian approximations to relativity, and one useful mathematical tool is a "Lie algebra", which enables us to examine even esoteric mathematical objects by looking locally at the Euclidean tangent space generated around a particular point.

So it's important not to throw the baby out with the bathwater with regards to Ayn Rand, and to be carefully specific about where her ideas work and where they fail. For example, there are many, many problems with her approach to the law of identity - the conceptual idea that things are what they are, or A is A - but the basic idea is sound. One might say it almost approaches tautology, except for the fact that many people seem to ignore it. However, you cannot fake reality in any way whatever - and you cannot make physical extrapolations about reality through philosophical analysis of a conceptual entity like identity.

Narrowing in on a super specific example, Rand tries to derive the law of causality from the law of identity - and it works well, right up until the point where she tries to draw conclusions about it. Her argument goes like this: every existent has a unique nature due to the law of identity: A is A, or things are what they are, or a given existent has a specific nature. What happens to an existent over time - the action of that entity - is THE action of THAT entity, and is therefore determined by the nature of that entity. So far, so good.

But then Rand and Peikoff go off the rails: "In any given set of circumstances, therefore, there is only one action possible to an entity, the action expressive of its identity." It is difficult to grasp the level of evasion which might produce such a confusion of ideas: to make such a statement, one must throw out not just the tools of physics, mathematics and philosophy, but also personal experience with objects as simple as dice.

First, the evasion of personal experience, and how it plays out through mathematics and physics. Our world is filled with entities which may produce one action out of many - not just entities like dice; even in one of Rand and Peikoff's own examples, a rattle makes a different sound every time you shake it. We have developed an entire mathematical formalism to help understand the behavior of such entities: we call them stochastic and treat them with the tools of probability. As our understanding has grown, physicists have found that this stochastic nature is fundamental to reality: the rules of quantum mechanics essentially say that EVERY action of an entity is drawn from a probability distribution, but for most macroscopic actions this probabilistic nature gets washed out.
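A quick sketch of what "lawlike but stochastic" means, using the humble die (the seed and roll count are just demo choices): the die has a single, definite nature - a fixed probability distribution over six outcomes - yet under identical circumstances it produces one of several actions. The distribution is determined; the individual roll is not.

```python
import random
from collections import Counter

random.seed(42)  # fixed seed so the demo is reproducible

def roll_die():
    """One action of a stochastic entity: a fair six-sided die."""
    return random.randint(1, 6)

# Identical circumstances, 60,000 times: many different individual actions,
# but the long-run frequencies converge on the die's lawlike nature (1/6 each).
counts = Counter(roll_die() for _ in range(60_000))
for face in range(1, 7):
    print(face, round(counts[face] / 60_000, 3))
```

Each frequency hovers near 1/6: the entity's nature fully determines the distribution, without determining any single outcome.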

Next, the evasion of validated philosophical methods. Now, one might imagine Rand and Peikoff saying, "well, the roll of the dice is only apparently stochastic: in actuality, the dice when you throw it is in a given state, which determines the single action that it will take." But this is a projective hypothesis about reality: it is taking a set of concepts, determining their implications, and then stating how we expect those implications to play out in reality. Reality, however, is not required to oblige us. This form of philosophical thinking goes back to the Greeks: the notion that if you begin with true premises and proceed through true inference rules, you will end up with a true conclusion. But this kind of philosophical thinking is invalid - does not work in reality - because any one of these elements - your concepts, your inference rules, or your mapping between conclusions and states - may be specious: appearing to be true without actually reflecting the nuance of reality. To fix this problem, the major achievement of the scientific method is to replace "if you reach a contradiction, check your premises" with "if you reach a conclusion, check your work" - or, in the words of Richard Feynman, "The sole test of any idea is experiment."

Let's get really concrete about this. Rand and Peikoff argue "If, under the same circumstances, several actions were possible - e.g., a balloon could rise or fall (or start to emit music like a radio, or turn into a pumpkin), everything else remaining the same - such incompatible outcomes would have to derive from incompatible (contradictory) aspects of the entity's nature." This statement is wrong on at least two levels, physical and philosophical - and much of the load-bearing work is in the suspicious final dash.

First, physical: we actually do indeed live in a world where several actions are possible for an entity - this is one of the basic premises of quantum mechanics, which is one of the most well-tested scientific theories in history. For each entity in a given state, a set of actions are possible, governed by a probability amplitude over those states: when the entity interacts with another entity in a destructive way the probability amplitude collapses into a probability distribution over the actions, one of which is "observed". In Rand's example, the balloon's probability amplitude for rising is high, falling is small, emitting radio sounds is still smaller, and turning into a pumpkin is near zero (due to the vast violation of conservation of mass).
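The balloon example can be sketched numerically via the Born rule - the probability of observing an action is the squared magnitude of its amplitude, normalized. The amplitude values below are invented purely for illustration; only the shape of the calculation reflects the actual rule.

```python
import random

# Hedged sketch of the Born rule: each possible action gets a complex
# probability amplitude; observation probabilities are squared magnitudes,
# normalized. The outcomes and amplitudes below are invented for the demo.

outcomes = ["rise", "fall", "emit radio sounds", "turn into a pumpkin"]
amplitudes = [0.95 + 0.1j, 0.28 + 0.0j, 0.01 + 0.01j, 1e-9 + 0j]

weights = [abs(a) ** 2 for a in amplitudes]       # Born rule: |amplitude|^2
total = sum(weights)
probs = [w / total for w in weights]              # normalize to a distribution

for o, p in zip(outcomes, probs):
    print(f"{o}: {p:.6f}")

# "Observation" collapses the amplitude: one outcome is drawn from the
# resulting probability distribution.
random.seed(0)
observed = random.choices(outcomes, weights=probs)[0]
print("observed:", observed)
```

Rising dominates, falling is small, radio sounds are tiny, and the pumpkin outcome is vanishingly improbable rather than "contradictory" - all outcomes flow from one distribution fixed by the entity's state.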

If one accepts this basic physical fact about our world - that entities that are not observed exist in a superposition of states governed by probability amplitudes, and that observations involve probabilistically selecting a next state from the resulting distribution - one can create amazing technological instruments and extraordinary scientific predictions - lasers and integrated circuits and quantum tunneling and prediction of physical variables with a precision of twelve orders of magnitude - a little bit like measuring the distance between New York and Los Angeles with an error less than a thousandth of an inch.
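The New York to Los Angeles analogy checks out on the back of an envelope (the 2,450-mile figure is an approximation):

```python
# Sanity check of the analogy in the text: does a thousandth of an inch
# over the NY-to-LA distance correspond to roughly one part in 10^12?

ny_to_la_miles = 2_450            # approximate great-circle distance
inches_per_mile = 63_360          # 5,280 feet * 12 inches
ny_to_la_inches = ny_to_la_miles * inches_per_mile

error_inches = 1e-3               # a thousandth of an inch
relative_error = error_inches / ny_to_la_inches
print(f"relative error: {relative_error:.2e}")
```

The result lands on the order of a few parts in 10^12 - right in line with the precision of the best quantum-electrodynamic predictions.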

But Rand's statement is also philosophically wrong, and it gets clearer if we take out that distracting example: "If, under the same circumstances, several actions were possible, such incompatible outcomes would have to derive from incompatible aspects of the entity's nature." What's wrong with this? There's no warrant to this argument. A warrant is the thing that connects the links in a reasoning chain - an inference rule in a formal system, or a more detailed explanation of the reasoning step in question.

But there is no warrant possible in this case, only a false lurking premise. The erroneous statement is that "such incompatible outcomes would have to derive from incompatible aspects of the entity's nature." Why? Why can't an entity's nature be to emit one of a set of possible actions, as in a tossed coin or a die? Answer: Blank out. There is no good answer to this question, because there are ready counterexamples from human experience, which we have processed through mathematics, and ultimately determined through the tools of science that, yes, it is the nature of every entity to produce one of a set of possible outcomes, based on a probability distribution, which itself is completely lawlike and based entirely on the entity's nature.

You cannot fake reality in any way whatever: this IS the nature of entities, to produce one of a set of actions. This is not a statement that they are "contradictory" in any way: this is how they behave. This is not a statement that they are "uncaused" in any way: the probability amplitude must be non-zero in a region in order for an action to be observed, and it is a real physical entity with energy content, not merely a mathematical convenience, that leads to the observation. And it's very likely not sweeping under the rug some hidden mechanism that actually causes it: while the jury is still out on whether quantum mechanics is a final view of reality, we do know due to Bell's theorem that there are no local "hidden variables" behind the curtain (a theorem that had been experimentally validated as of the time of Peikoff's book).

So reality is stochastic. What's wrong with that? Imagine a corrected version of Ayn Rand's earlier statement: "In any given set of circumstances, therefore, there is only one type of behavior possible to an entity, the behavior expressive of its identity. This behavior may result in one of several outcomes, as in the rolling of a die, but the probability distribution over that set of outcomes is the distribution that is caused and necessitated by the entity's nature." Why didn't Peikoff and Rand write something like that?

We have a hint in the next few paragraphs: "Cause and effect, therefore, is a universal law of reality. Every action has a cause (the cause is the nature of the entity that acts); and the same cause leads to the same effect (the same entity, under the same circumstances, will perform the same action). The above is not to be taken as a proof of the law of cause and effect. I have merely made explicit what is known implicitly in the perceptual grasp of reality." That sounds great ... but let's run the chain backwards, shall we?

"We know implicitly in the perceptual grasp of reality a law which we might explicitly call cause and effect. We cannot prove this law, but we can state that the same entity in the same circumstances will perform the same action - that is, the same cause leads to the same effect. Causes are the nature of the entities that act, and every action has a cause. Therefore, cause and effect is a universal law of reality."

I hope you can see what's wrong with this, but if you don't, I'm agonna tell you, because I don't believe in the Socratic method as a teaching tool. First and foremost, our perceptual grasp of reality is very shaky: massive amounts of research in cognitive science reveal a nearly endless list of biases and errors, and the history of physics has been one of replacing erroneous perceptions with better laws of reality. One CANNOT go directly from the implicit knowledge of perceptual reality to any actual laws, much less universal ones: we need experiment and the tools of physics and cognitive science to do that.

But even from a Randian perspective this is wrong, because it is an argument from the primacy of consciousness. One of the fundamental principles of Objectivist philosophy is the primacy of existence over consciousness: the notion that thinking a thing does not make it so. Now, this is worth a takedown of its own - it is attempting to draw an empirically verifiable physical conclusion from a conceptual philosophical argument, which is invalid - but, more or less, I think Rand is basically right that existence is primary over consciousness. Yet above, Rand and Peikoff purport to derive a universal law from perceptual intuition. They may try to call it "implicit knowledge" but perception literally doesn't work that way.

If they admit physics into their understanding of the law of causality, they have to admit you cannot directly go from a conceptual analysis of the axioms to universally valid laws, but must subject all their so-called philosophical arguments to empirical validation. But that is precisely what you have to do if you are working in ontology or epistemology: you MUST learn the relevant physics and cognitive science before you attempt to philosophize, or you end up pretending to invent universal laws that are directly contradicted by human experience.

Put another way, whether you're building a bridge or a philosophy, you can't fake reality in any way whatsoever, or, sooner or later, the whole thing will come falling down.

-the Centaur

[twenty twenty-four day thirty-two]: if you do what you’ve always done


"If you do what you've always done, you'll get what you've always gotten," or so the saying goes.

That isn't always true - ask my wife what it's like for a paint company to silently change the formula on a product right when she's in the middle of a complicated faux finish that depended on the old formula's chemical properties - but there's a lot of wisdom to it.

It's also true that it takes work to decide. When a buddy of mine and I finished 24 Hour Comic Day one year and were heading to breakfast, he said, "I don't want to go anyplace new or try anything new, because I have no brains left. I want to go to a Denny's and order something that I know will be good, so I don't have to think about it."

But as we age, we increasingly rely on past decisions - so-called crystallized intelligence, an increasingly vast but increasingly rigid collection of wisdom. If we don't want to get frozen, we need to continue exercising the muscle of trying things that are new.

At one of my favorite restaurants, I round-robin through the same set of menu items. But this time, I idly flipped the menu over to the back page I never visit and saw a burrito plate whose fillings were simmered in beer. I mean, what! And the server claimed it was one of the best things on the menu, a fact I can confirm.

It can be scary to step outside our circle. But if you do what you've always done, you'll miss out on opportunities to find your new favorite.

-the Centaur

[twenty twenty-four day thirty-one]: to be or not to be in degree


I've recently been having fun with a new set of "bone conduction" headphones, walking around the nearby forest while listening to books on tape [er, CD, er, MP3, er, streaming via Audible]. Today's selection was from Leonard Peikoff's Objectivism: The Philosophy of Ayn Rand. Listening to the precision with which they define concepts is wonderful - it's no secret that I think Ayn Rand is one of the most important philosophers that ever lived - but at the same time they have some really disturbing blind spots.

And I don't mean in the political sense in which many people find strawman versions of Rand's conclusions personally repellent, and therefore reject her whole philosophy without understanding the good parts. No, I mean that, unfortunately, Ayn Rand and Leonard Peikoff frequently make specious arguments - arguments that on the surface appear logical, but which actually lack warrants for their conclusions. Many of these seem tied to a desire to appear emotionally objective by demanding an indefensibly precise base for their arguments, rather than standing on the more solid ground of accurate, if fuzzier, concepts - which actually exist in a broader set of structures that are more objective than their naive pseudo-objective counterparts.

Take the notion that "existence exists". Peikoff explains the foundation of Ayn Rand's philosophy to be the Randian axioms: existence, identity, and consciousness - that is, there is a world, things are what they are, and we're aware of them. I think Rand's take on these axioms is so important that I use her words to label two of them in my transaxiomatic catalog of axioms: EE, "existence exists," and AA, "A is A" - plus CC, where Rand doesn't have a catchy phrase, but let's say "creatures are conscious". Whether these are "true", in their view, is less important than that they are validated as soon as you reach the level of having a debate: if someone disagrees with you about the validity of the axioms, there's no meaningful doubt that you and they exist, that you're both aware of the axioms, and that they have a nature which is being disputed.

Except ... hang on a bit. To make that very argument, Peikoff presents a condensed dialog between the defender of the axioms, A, and a denier of the axioms, B, quickly coming to the conclusion that someone who exists, is aware of your opinions, and is disagreeing with their nature specifically by denying that things exist, that people are aware of anything, and that things have a specific nature is ... probably someone you shouldn't spend your time arguing with. At the very best, they're trapped in a logical error; at the worst, they're either literally delusional or arguing in bad faith. That all sounds good. But A and B don't exist.

More properly, the arguing parties A and B only exist as hypothetical characters in Peikoff's made-up dialog. And here's where the entire edifice of language-based philosophy starts to break down: what is existence, really? Peikoff argues you cannot define existence in terms of other things, but can only do so ostensively, by pointing to examples - but this is not how language works, either in day-to-day life or in philosophy, which is why science has abandoned language in favor of mathematical modeling. If you're intellectually honest, you should agree that Ayn Rand and Leonard Peikoff exist in a way that A and B in Peikoff's argument do not.

Think about me in relationship to Sherlock Holmes. I exist in a way that Sherlock Holmes does not. I also exist in a way which Arthur Conan Doyle does not. Sherlock Holmes himself exists in a way that an alternate version of Holmes from a hypothetical unproduced TV show does not, and I, as a real concrete typing these words, exist in a way that the generic idea of me does not. One could imagine an entire hierarchy of degrees of existence: from the absolute nothingness of the absence of a thing or concept, to contradictions in terms that could be named but cannot exist, to hypothetical versions of Sherlock Holmes that do not exist, to Sherlock Holmes, who exists only as a character, to Arthur Conan Doyle, who once existed, to me, who existed as of this writing, to the concrete me writing this now, to existence itself, which exists whether I do or not.

Existence is what Marvin Minsky calls a "suitcase word": it's a stand-in for a wide variety of other distinct but usefully similar concepts, from conceptual entities to physical existents to co-occurring physical objects in the same interacting region of space-time. And it's no good attempting to fall back on the idea that Ayn Rand was actually trying to define "existence" as the sum total of "existents", because pinning down "existence" or "existent" outside of an ostensive "I can point at it" definition is precisely what Rand and Peikoff don't want to do - first, because they really do mean it to be "everything", in almost precisely the same way that Carl Sagan uses the word "Cosmos" to refer to everything that ever is, was, or will be, and second, because if it loses its function as a suitcase word, it is no longer useful in their arguments.

In reality, if you say "existence exists", and someone attempts to contradict you, it does you no good to say "well, you're contradicting yourself, because you had to exist to even say that". You do need to actually put your money where your mouth is and say what concrete propositions you intend to draw from the terms "existence" and "exists" and the floating abstraction "existence exists" - and so do they. If you can't do this, you're not actually arguing with them; you're talking past them; if they can't do this, they're at best not arguing coherently, and at worst not arguing in good faith. If you both DO this, however, you may come to profitable conclusions, such as, "yes, we agree that SOMETHING exists, at least to the level where we had this debate; but we can also agree that the word existence should not extend to this unwanted implication."

This approach - reinforcing your axioms with sets of elaborations, models and even propositions that are examples of the axioms, along with similar sets that should be considered counterexamples - is what I call the "transaxiomatic" approach. Rather than simply assuming the axioms are unassailable and attempting to pseudo-define their terms by literally waving one's hand around and saying "this is what I mean by existence" - and simply hoping people will "get it" - we need to reinforce the ostensive concretes we use to define the axioms with more carefully refined abstractions that tell us what we mean when we use the terms in the axioms, and what propositions we hope people should derive from them.

This is part of an overall move from the philosophical way of tackling problems towards a more scientific one. And it's why I think Ayn Rand was, in a sense, too early, and too late. She's too early in the sense that many of the things that she studied philosophically - ontology and epistemology - are no longer properly the domain of philosophy, but have been supplanted - firmly supplanted - by findings from science: ontology is largely subsumed into physics and cosmology, and epistemology is largely subsumed into cognitive science and artificial intelligence. That's not to say that philosophy is done with those areas, but instead that philosophy has definitively lost its primary position within them: one must first learn the science of what is known in those areas before trying to philosophize about it. One cannot meaningfully say anything at all about epistemology without understanding computational learning theory.

And she's too late in that she was trying to DO philosophy at a point in time where her subject matter was already starting to become science. Introduction to Objectivist Epistemology is an interesting book, but it was written a decade after "The Magical Number Seven, Plus or Minus Two" and two decades before the "Probably Approximately Correct" theory of learning, and you will learn much more about epistemology by looking up the "No Free Lunch" learning theorems and pulling on that thread than by anything Ayn Rand ever wrote (or, try reading "Probability Theory: The Logic of Science" for a good one-volume starting point).
Which is not to say that Ayn Rand's philosophizing is not valuable - it is almost transcendently valuable - but if she were writing today, many of the more conceptually problematic structures of her philosophy could simply be dropped in favor of references to the rich conceptual resources of cognitive science and probability theory, and then she could have gotten on with convincing people that you can indeed derive "ought" from "is".
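What "epistemology done as mathematics" looks like can be shown in a few lines via Bayes' rule, the engine of Jaynes' "Probability Theory: The Logic of Science". The prior and likelihood numbers below are invented for illustration: a hypothesis believed at 50%, updated by evidence that is three times likelier if the hypothesis is true.

```python
# A minimal sketch of belief revision as mathematics: Bayes' rule.
# All probability values below are invented for the demo.

def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Return P(hypothesis | evidence) via Bayes' rule."""
    numerator = likelihood_if_true * prior
    evidence = numerator + likelihood_if_false * (1 - prior)
    return numerator / evidence

posterior = bayes_update(prior=0.5, likelihood_if_true=0.9, likelihood_if_false=0.3)
print(round(posterior, 3))  # 0.75
```

Three numbers in, one defensible degree of belief out - no appeal to "implicit knowledge in the perceptual grasp of reality" required.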

Or, maybe, just maybe, she might have done science in addition to philosophy, and perhaps even had something scientific to contribute to the great thread rolling forward from Bayes and Boole.

Existence does exist. But before you agree, ask, "What do you really mean by that?"

-the Centaur

Pictured: Loki, existing in a fuzzy state.

[twenty twenty-four day thirty]: the questions i now ask


As a writer, it's important to have humility - no matter how enthusiastic you are about your work, there's no guarantee that it will land the way that you want it to with your readers. So I share my stories with "beta readers" who are, presumably, the kind of people who like to read what I want to write, and I use comments from beta readers to help me edit my stories before submitting them to editors or publishers.

I used to ask almost no questions of the beta readers BEFORE they read it, as I neither wanted to prejudice them about the story nor wanted to draw their attention to features that they might not have noticed. But, over time, I have started adding questions - perhaps in part because my research in social robot navigation exposed me to better ways to ask questions of people, and perhaps just through my own experience.

I settled on the following questions that I ask beta readers:

  • Is this the kind of story you like to read?
  • What did you like about it?
  • How could it be improved?
  • Would you like to read more stories in the same universe?
  • Is there anything that could be clarified to make it stand better alone?
  • Are there any questions that it raised that you'd love to see answered in another story?

The first three I think are generic to all stories, and are the ones that I started with:

  • First, if your story isn't the kind of story that your reader wants to read, their comments might not be about your story per se, but may actually be a subconscious critique of its genre, which can be actively misleading if you try to apply them to a story in that genre. I found this out the hard way when I gave The Clockwork Time Machine to someone who didn't like steampunk - many of their comments were just dissing the entire genre, and were useless for figuring out how to improve my particular story.
  • Second, it's important to know what people like about a story, so that you don't accidentally break those things in your edits. If one person dislikes something, but two others like it, you might be better off leaving that alone or gently tweaking it rather than just taking it out.
  • Third, no matter how big your ego is, you cannot see all the things that might be wrong with your story. (Unless you've won the Nobel Prize in literature or are a New York Times bestselling author, in which case, I especially mean you, because you've probably become uneditable). Fresh eyes can help you see what's wrong and where you could make it better.

But these questions weren't enough for someone who writes series fiction: my stories refer to a lot of background information, and set up ideas for other stories, yet should stand alone as individual stories:

  • Do you have a good vehicle? Have you set up a framework for telling stories that people are interested in? This goes beyond whether an individual story is satisfying, and to whether the setting and storytelling method itself are interesting.
  • Does your story stand alone? Are you pulling in backstory which is not adequately explained? This is information that should either be taken out, or woven into the story so it is load-bearing.
  • Does your story pull people in? Even if the story stands alone, you want it to either hint at questions to be answered in other stories or to answer questions from previous stories.

So far, these questions have worked well for me and my science fiction serial stories. Your mileage may vary, but I think that if you avoid asking anything specific about your story, and focus on the general functions that your story should fulfill, then you can get a lot of profit by asking beta readers ahead of the read.

-the Centaur

Pictured: A gryphon made of books in a store window in Asheville.

[twenty twenty-four day twenty-nine]: phantom enemies


"I'ma gonna get that bird in the mirror, I swear, this is my territory, I'll show him---BONK!"
"Okay, this time for sure---BONK!"
"Tenth time's the charm---BONK!"

Not even putting up a screen in front of the mirror has helped; our little friend just hopped down onto the stairs of the cat condo (that "table" is a cat condo with a re-purposed glass tabletop, to give one of our now-passed older cats a place to sit and see the stars while shielding him from the rain) and started bonking the lower section of the mirror.

There's no reasoning with some people.

-the Centaur

P.S. Yes, I am making a direct comparison of people whose political beliefs are built around their persecution by imaginary enemies to a bird not smart enough to recognize his own reflection, why?

[twenty twenty-four day twenty-eight]: yeah there were a few

centaur 0

We got a LOT of submissions for the Neurodiversiverse. Many were actually on topic! Some, however, despite being well written, were not. And we really want this anthology to follow its theme of empowering stories of neurodivergent people encountering mentally diverse aliens, so we're focusing on that - and we already have several strong stories whose place in the story sequence we know.

Onward!

-the Centaur

[twenty twenty-four day twenty-six]: make up your mind

centaur 0

Cat, when it's raining: "Let me out! Let me out! But not this door, it's wet. Let's try another door. And another! Or another! I gotta get out! Just hold the door open until the rain stops!"

Also cat, when it is nice and sunny: "Who cares about going outside? Ima gonna havva nap."

-the Centaur

Pictured: the cat-shaped void, Loki, actually using his void-colored cat tree for once. Image taken in infrared bands and color enhanced by NASA to show surface detail.

[twenty twenty-four day twenty-five]: called it, again

centaur 0

I'm not confident about my ability to predict the future, but some things I can see coming. When people started moving towards using streaming services, I said it was only a matter of time until a large chunk of people lost the libraries that they paid for due to mergers and acquisitions - and it's started happening with PlayStation owners losing chunks of their libraries. This is only going to get worse, as with streaming you don't "own" anything - you're just paying for the illusion that you'll be able to access the content you want.

And next, after Paramount canceled Star Trek: Discovery, booted Star Trek: Prodigy off their network, and shuffled off the movies, I predicted Paramount would lose Star Trek altogether before I'd even watched all of the Star Trek in my subscription (which is why I got Paramount Plus, or whatever it's called this week). And, while I can't predict the future, this, too, is being openly discussed.

The golden age of television has come to an end - I date it from roughly The Sopranos to Star Trek: Strange New Worlds, though the actual death date was the Warner / Discovery merger and the axing of shows for tax reasons. But the real reason was the greedy corporate slimes in charge of the studios, figures like Bob Iger, whose potential $27 million compensation belies his claims that the striking writers' demands weren't realistic, even though his fellow leaders now admit the writers were basically right.

Streaming as we know it isn't going away - it's too convenient for too many people. But it's also going to collapse as we know it, and things will appear to get worse before they get better. Overall, we may come out the other side with a stronger set of shows: there's a period of time I used to think of as "the dark age of sci-fi television," when Enterprise was struggling, Babylon 5 was canceled, and you'd be hard-pressed to find Andromeda on the airwaves; but the same period produced Battlestar and Firefly.

So don't give up hope, but don't think we'll avoid tectonic shifts.

-the Centaur

[twenty twenty-four day twenty-four]: in foggiest depths

centaur 0

One of the problems with computing is when it just gets ... foggy. Not when you're trying to do something hard, or when two pieces of software are incompatible, no. When things just sort of kind of don't work, and there are no known reasons that it's happening, and no reliable actions you can take to fix it.

Once this happened to me when I was working on a robotics device driver, and I realized the lidar itself was unreliable, so the only way to fix problems was to run each configuration ten times and keep average stats. Broken "worked" around ten percent of the time, whereas "fixed" worked around seventy percent of the time (approaching the rate at which the manufacturer's own software could connect to its own hardware).

Today, I ran into a seemingly simple problem with Anaconda, a Python package / environment management system. Conda lets you corral Python and other software into "environments" with different configurations so that potentially incompatible versions can be used on the same computer (albeit, not at the same time). It even gives you a handy indication about which environment is in use in your command prompt, like so:

There's a seemingly innocent blank line between (ThatEnvironment) and the previous line, yes? Not part of the standard Conda setup, but you can easily add it with a single line of configuration, changing the "env_prompt" to include an extra newline "\n" before printing the environment, like so:

Yeah, that line at the end: "env_prompt: \n({default_env})". It goes in a Conda configuration file - a .condarc, or "dot condarc" file - which is almost as simple as possible. I don't even think the "channels" bit is needed - I don't recall writing it; I think Conda just added it automatically. So this is almost the simplest possible change that you could make to your Conda configuration, done in almost the simplest possible way.
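For reference, here's a sketch of what that .condarc looked like - the "channels" section is whatever Conda populated on its own (your channel list may differ), and the env_prompt line is the one deliberate change:

```yaml
# ~/.condarc - a sketch of the minimal configuration described above.
# The "channels" section was added automatically by Conda; the only
# deliberate change is the env_prompt line at the end.
channels:
  - defaults

# Prefix the environment prompt with a newline so each command and its
# output are visually separated from the next command.
env_prompt: \n({default_env})
```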

Except. It. Didn't. Take.

No matter what changes I made to the .condarc file, they didn't affect my Conda configuration. Why? I don't know. No matter what I did, nothing happened. I changed the prompt to all sorts of weird things to try to see if maybe my syntax was wrong, no dice. No amount of searching through manuals or documentation or Stack Overflow helped. I re-ran conda config, re-loaded my shell, rebooted my Ubuntu instance - nothing.

Finally, almost in desperation, I went back to my original version, and tried creating system-wide, then environment-specific configurations - and then the changes to the prompt started working. Thank goodness, I thought, and rebooted one more time, convinced I had solved the problem.

Except. It. Took. The. Wrong. Config.

Remember how I said I created a weird version just to see whether it was working? Conda started reverting to that file and using it, even though it was several versions old. It actively started overwriting my changes - and ignoring the changes in the environment-specific configurations.

So, I blew away all the versions of the file - local, system and environment-specific - and re-created it, in its original location, and then it started to work right. In the end, what was the final solution?

I have no idea.

When I started working on the problem, I wanted Conda to do a thing - print an extra blank line so I could more easily see a command and its result, separate from the next command and result. And so I created a file in the recommended place with a line containing the recommended magic words ... and it didn't work. Then I hacked on it for a while, it sort of started working, and I backed out my changes, creating a file in the same recommended place with a line containing the same recommended magic words ... and it did work.

Why? Who knows! Will it keep working? Who knows! If it breaks again, how do I fix it? Who knows!

This is what I call "the fog". And it's the worst place to be when working on computers.

-the Centaur

Pictured: Sure was foggy today.

[twenty twenty-four day twenty-three]: and for the record …

centaur 0

... it's still one of the worst feelings in the world to turn back the sheets at the end of a long day, only to realize you hadn't blogged or posted your drawing. I had a good excuse yesterday - my wife and I were actually out at a coffeehouse, working on our art, when we had a sudden emergency and had to go home.

I had just finished my drawing and was about to snapshot it so I could post it, but instead threw the notebook into my bookbag, packed it up, and drove us home. Disaster was averted, fortunately, but the rest of the day was go-go-go, until finally, exhausted, I went to turn in and then went ... oh, shit. I didn't blog.

Fortunately, I didn't have to go back to the drawing board. But it did flip over to tomorrow while I was posting ... so, next day's post, here we come.

-the Centaur

Pictured: A jerky shot of me trying to document my wife's computer setup for reference.

[twenty twenty-four day twenty-one]: it’s too cold to be stingy

centaur 0

Look, I get it: giving money to panhandlers is not necessarily the best way to help lift people out of homelessness, and can often be counterproductive. Out of all the money that I've given to people, I'd say one out of three of them I could tell benefited from it (for example, one guy immediately bought food), one third were scammers (for example, one "hungry" guy immediately bought alcohol), and one third, I dunno. That's one reason that signs like this go up in public squares all across the country:

But look at the kind of day that this sign was having. It didn't get above freezing until noon. It's too damn cold to be stingy to people who ask for things from you. Jesus said "Give to all those who beg of you" and while sometimes we can't follow that advice given the context, yesterday was not one of those days.

This is part of a whole trend of "hostile architecture" where we structure our societies to make things difficult for people who are homeless - closing the parks, making benches hard to sleep on, stealing the possessions of the homeless (either as a condition of going into a homeless shelter, or outright theft by the police) and eliminating low-cost housing that could provide a path out for the homeless.

I'm not sure what the right answer is, but when it's fifteen below freezing, the right answer is not "no".

-the Centaur

[twenty twenty-four day twenty]: cat-shaped void

centaur 0

We have a black cat, so we got a black cat condo (just barely visible to the left). But of course, our cat-shaped void is a cat, and so prefers the blue couch, where its voluminous shedded fur is easily visible. My wife caught him in the act, so, enjoy this picture of our cat-shaped void, doing cat-styled things.

-the Centaur

Pictured: Loki on our couch. Interestingly, this picture was taken at an angle, so I rotated it, then used Adobe Photoshop's generative fill to recover the outer edge of the picture. The very outer edge is ... mostly right. Some weirdness is visible in the carpet patterns on the lower left, the brick pattern on the upper left, and whatever it is on the table on the right isn't there in reality. Otherwise, not a terrible job.

[twenty twenty-four day nineteen]: our precious emotions

centaur 0

It's hard to believe nowadays, but the study of psychology for much of the twentieth century was literally delusional. The first half was dominated by behaviorism, a bad-faith philosophy of psychology - let's not stoop to calling it science - which denied the existence of internal mental states. Since virtually everyone has an inner mental life, and it's trivial to design an experiment which relies on internal mental reasoning to produce outcomes, it's almost inconceivable that behaviorism lasted as long as it did. Nevertheless, it contributed a great deal to our scientific understanding of stimulus-response relationships. That didn't mean it wasn't wrong, and by the late twentieth century it had been definitively refuted by cognitive architecture studies, which modeled internal mental behavior in enough detail to predict which brain structures were involved in different reasoning phenomena - structures later detected in brain scans.

Cognitive science had its own limits: while researchers such as myself grew up with a very broad definition of cognition as "the processes that the brain does when acting intelligently," many earlier researchers understood the "cognitive" in "cognitive psychology" to mean "logical reasoning". Emotion was not a topic which was well understood, or even well studied, or even thought of as a topic of study: as best I can reconstruct it, the reasoning - such as it was - seems to have been that since emotions are inherently subjective - related to a single subject - the study of emotions would also be subjective. I hope you can see that this is just foolish: there are many things that are inherently subjective, such as what an individual subject remembers, which can nonetheless be objectively studied across many individual subjects, illuminating solid laws like those of recency, primacy, and anchoring.

Now, in the twenty-first century, memory, emotion and consciousness are all active areas of research, and many researchers argue that without emotions we can't reason properly at all, because we become unable to adequately weigh alternatives. But beyond the value contributed by those specific scientific findings is something more important: the general scientific understanding that our inner mental lives are real, that our feelings are important, and that our lives are generally better when we have an affective response to the things that happen to us - in short, that our emotions are what make life worth living.

-the Centaur

[twenty twenty-four day eighteen]: something clever (un)evaporates

centaur 0

Don't you hate it when you think of something clever to say, but forget to write it down? I do. My wife and I were having a discussion and I came up with some very clever statement of the form "if people do this, they don't end up doing that", but now I can't remember it, so please enjoy this picture of a cat sending an email.

Just a moment. Just a moment.

"If you haven't climbed a mountain before, thinking about what you'll do when you get there is a distraction from starting the journey towards it. Climbing a mountain seems hard, but they're only a few miles high, and perhaps ten times that wide; most of your journey towards it will be on the plain, and that deceptively level terrain is the hardest part. Speculating about what parka to wear on the upper slopes does nothing to get you walking towards that slope; set out on your journey, and you can buy a parka when you're closer."

This bit of armchair wisdom was designed to encapsulate why it's better to start work on your business than it is to speculate on how to grow it into a multibillion-dollar conglomerate. Sure, it's great to have a grand vision, but you don't need to worry about mergers and acquisitions before you've found any customers - if you've never built a business before, that is.

If you are someone who has built many businesses, it's okay to build on your experience to guide your steps - but most of us have not, and our grand dreams can actively get in the way of figuring out how to make our product, how to get it in front of our customers, and how to make it excel in their eyes so that they choose us over the alternatives.

Phew. Strangely enough, that first image was load-bearing: I picked a "random" recent picture for this blog, but it turned out that our cat had been playing with his catnip laptop right around the time that Sandi and I had been discussing strategies for startups.

Feed your memory with enough cues, sometimes you get a retrieval.

Cogsci out.

-the Centaur

Pictured: Loki, sending emails on his catnip laptop, and resting on his laurels after a hard day at work.

[twenty twenty-four day seventeen]: we’re stronger with each other

centaur 0

No, this isn't a post about family, though it could easily be adapted to that topic. Nor is it a post about generic togetherness - that's why I said "each other" instead in the title. No, this is a post about how we're often stronger when we take advantage of the strengths of those around us.

Often at work we have our own perspective, and it can be easy to get caught up in making sure that our way is the way that's chosen, and our work is the work that is credited. But if we do, we may miss out on great suggestions from our coworkers, or on the opportunity to benefit from the work of others.

Just today at one of my contracting jobs, I had to present our work on the project so far. While most of the machine learning work on the project was mine, a lot of the foundational analysis on the data was done by one of my coworkers - and I called him out specifically when presenting his graphs.

Then, we came to the realization that collecting the amount of data we would ideally like to have to learn on would literally cost millions of dollars. I presented a few ways out of this dilemma - but then, one of our senior engineers spoke up, trying to brainstorm a simpler solution to the problem.

I'd been hoping that he would speak up - he had shown deep insight earlier in the project, and now, after a few minutes of brainstorming, he came up with a key idea which might enable us to use the software I've already written with the data we've already collected, saving us both time and money.

Afterwards, the coworker whose contributions I'd called out during the meeting hung on the call, trying to sketch out with me how to implement the ideas the senior engineer had contributed. Then, unprompted, he spent an hour or so sending me a sketch of an implementation and a few sample data files.

We got much farther working together and recognizing each others' contributions than we ever would have had we all been coming to the table just with what we brought on our own.

-the Centaur

Pictured: friends and family gathering over the holidays.

[twenty twenty-four day sixteen]: blog early, blog often

centaur 0

I'm a night owl - I'd say "extreme night owl", but my wife used to go to bed shortly before I woke up - and get some of my best work done late at night. So it constantly surprises me - though it shouldn't - that some things are easier to do earlier in the day.

Take blogging - or drawing every day, two challenges I've taken on for twenty twenty-four. Sometimes I say that "writer's block is the worst feeling in the world" - Hemingway apparently killed himself over it - but right up there with writer's block is deciding to call it a night after a long, productive evening of work - and remembering that you didn't draw or blog at all that day.

Sure, you can whip up a quick sketch, or bang out a few words. But doing so actively discourages you from longer-form thought or more complicated sketches. Drawing breathes more earlier in the day, especially in the midafternoon when your major initial tasks are done and the rest of the day seems wide open. And blogging is writing too, and can benefit as much from concentrated focus as any other writing.

SO! Let's at least get one of those two things done right now.

Type Enter, hit Publish.

-the Centaur

Pictured: Downtown Greenville as seen from the Camperdown complex.

[twenty twenty-four day fifteen]: photographic archaeology

centaur 0

I take a heck of a lot of pictures, seemingly way more than most of the people I know other than the ones in the movie industry; in fact, one of my friends once said "your phone eats first". But there's a secret to why I take pictures: it's for something, for the creation of an external memory - and memory is my brand, after all. With those photographs, I can figure out what happened in the past, even sometimes obscure things - like the attachment point of this lightsaber, which isn't just the diamond-shaped piece of wood, but also includes two hooks that seem to have disappeared in the move.

We may not find them, but at least now we know what to look for.

How can you turn the things in your life into an unexpected resource?

-the Centaur

Pictured: the old library, which was very nice, but not as nice as this one: