Press "Enter" to skip to content

Posts published in “Computing”

The art and science of mechanized thought.

… and may I say,

centaur 0

the amount of work needed to put up that one-word, one-image blogpost was entirely out of proportion to the benefit involved. I have fixed site errors with fewer hoops than it took to publish something via the WordPress app - and the fix was actually uninstalling and reinstalling the app, which had apparently gotten into some kind of crufty state in which it could no longer upload posts.

To be clear, I'm not picking on WordPress here. But I have a Ph.D. in Artificial Intelligence and used to work on the front end of Google search. If I can't post a one-word, one-image post on the world's most popular blogging platform using their own easy-to-use official phone app, how are people who have not spent thirty-plus years in the industry supposed to get any work done?

This experience I just had - almost the simplest possible post not uploading after a few minutes - would, in another industry, be like ... like ... like picking up a hammer and nailing one nail into a piece of wood, only to find the nails popping out a minute later and flying across the room. You ask your carpenter buddy, "What gives?" and they say, "Oh, that. You've got hammer voodoo going on there. Just take the hammer back to Home Depot, return it, and buy a new one. Then the nail will go in just fine."

You know what? I'm going to learn from this.

I will endeavor to make the robots less irritating when something goes wrong.

-the Centaur

P.S. AAAA! And this post didn't publish either, because the interface threw up an extra dialog box after I tried to publish, asking, "Are you sure?" I'm sure I didn't need that extra dialog box appearing AFTER I'd left the page, so that I spent time looking for the post on the home page when it hadn't actually published at all. Aaaa!

It is not like riding a bike.

RIP Jeff Bezos (and/or Richard Branson)

centaur 0
rip jeff bezos

You know, Jeff Bezos isn't likely to die when he flies July 20th. And Richard Branson isn't likely to die when he takes off at 9am July 11th (tomorrow morning, as I write this). But the irresponsible race these fools have entered will eventually get somebody killed, as surely as Elon Musk's attempt to build self-driving cars with cameras rather than lidar was doomed to (a) kill someone and (b) fail. It's just, this time, I want to be caught on record saying I think this is hugely dangerous, rather than grumbling about it to my machine learning brethren.

Whether or not a spacecraft is ready to launch is not a matter of will; it’s a matter of natural fact. This is actually the same as many other business ventures: whether we’re deciding to create a multibillion-dollar battery factory or simply open a Starbucks, our determination to make it succeed has far less to do with its success than the realities of the market—and its physical situation. Either the market is there to support it, and the machinery will work, or it won’t.

But with normal business ventures, we’ve got a lot of intuition, and a lot of cushion. Even if you aren’t Elon Musk, you kind of instinctively know that you can’t build a battery factory before your engineering team has decided what kind of battery you need to build, and even if your factory goes bust, you can re-sell the land or the building. Even if you aren't Howard Schultz, you instinctively know it's smarter to build a Starbucks on a busy corner rather than the middle of nowhere, and even if your Starbucks goes under, it won't explode and take you out with it.

But if your rocket explodes, you can't re-sell the broken parts, and it might very well take you out with it. Our intuitions do not serve us well when building rockets or airships, because they're not simple things operating in human-scaled regions of physics, and we don't have a lot of cushion with rockets or self-driving cars, because they're machinery that can kill you, even if you've convinced yourself otherwise.

The reasons behind the likelihood of failure are manifold here, and worth digging into in greater depth; but briefly, they include:

  • The Paradox of the Director's Foot, where a leader's authority over safety personnel - and their personal willingness to take on risk - ends up short-circuiting safety protocols and causing accidents. This actually happened to me personally when two directors in a row had a robot run over their foot at a demonstration, and my eagle-eyed manager recognized that both of them had stepped into the safety enclosure to question the demonstrating engineer, forcing the safety engineer to take over audience questions - and all three took their eyes off the robot. Shoe leather degradation then ensued, for both directors. (And for me too, as I recall).
  • The Inexpensive Magnesium Coffin, where a leader's aesthetic desire to have a feature - like Steve Jobs's desire for a magnesium case on the NeXT machines - led them to ignore feedback from engineers that the case would be much more expensive. Steve overrode his engineers ... and made the NeXT more expensive, just like they said it would be, because wanting the case didn't make it cheaper. That extra cost led to the product's demise - that's why I call it a coffin. Elon Musk's insistence on using cameras rather than lidar on his self-driving cars is another Magnesium Coffin - an instance of ego and aesthetics overcoming engineering and common sense, which has already led to real deaths. I work in this precise area - teaching robots to navigate with lidar and vision - and vision-only navigation is just not going to work in the near term. (Deploy lidar and vision, and you can drop lidar within the decade with the ground-truth data you gather; try going vision alone, and you're adding another decade.)
  • Egotistical Idiot's Relay Race (AKA Lord Thomson's Suicide by Airship). Finally, the biggest reason for failure is the egotistical idiot's relay race. I wanted to come up with some nice, catchy parable name to describe why the Challenger astronauts died, or why the USS Macon crashed, but the best example is a slightly older one, the R101 disaster, which is notable because the man who started the R101 airship program - Lord Thomson - also rushed the program so he could make a PR trip to India, with the consequence that the airship was certified for flight without completing its endurance and speed trials. As a result, on that trip to India - its first long distance flight - the R101 crashed, killing 48 of the 54 passengers - Lord Thomson included. Just to be crystal clear here, it's Richard Branson who moved up his schedule to beat Jeff Bezos' announced flight, so it's Sir Richard Branson who is most likely up for a Lord Thomson's Suicide Award.

I don't know if Richard Branson is going to die on his planned spaceflight tomorrow, and I don't know that Jeff Bezos is going to die on his planned flight on the 20th. I do know that both are in an Egotistical Idiot's Relay Race for even trying, and the fact that they're willing to go up themselves, rather than sending test pilots, safety engineers or paying customers, makes the problem worse, as they're vulnerable to the Paradox of the Director's Foot; and with all due respect to my entire dot-com tech-bro industry, I'd be willing to bet the way they're trying to go to space is an oversized Inexpensive Magnesium Coffin.

-the Centaur

P.S. On the other hand, when SpaceX opens for consumer flights, I'll happily step into one, as Musk and his team seem to be doing everything more or less right there, as opposed to Branson and Bezos.

P.P.S. Pictured: Allegedly, Jeff Bezos, quick Sharpie sketch with a little Photoshop post-processing.

The Embodied AI Workshop is Tomorrow, Sunday, June 20th!

centaur 0
embodied AI workshop

What happens when deep learning hits the real world? Find out at the Embodied AI Workshop this Sunday, June 20th! We’ll have 8 speakers, 3 live Q&A sessions with questions on Slack, and 10 embodied AI challenges. Our speakers will include:

  • Motivation for Embodied AI Research
    • Hyowon Gweon, Stanford
  • Embodied Navigation
    • Peter Anderson, Google
    • Aleksandra Faust, Google
  • Robotics
    • Anca Dragan, UC Berkeley
    • Chelsea Finn, Stanford / Google
    • Akshara Rai, Facebook AI Research
  • Sim-2-Real Transfer
    • Sanja Fidler, University of Toronto, NVIDIA
    • Konstantinos Bousmalis, Google

You can find us if you’re signed up to #cvpr2021, through our webpage embodied-ai.org or at the livestream on YouTube.

Come check it out!

-the Centaur

He thinks he invented Java because he was in the room when someone made coffee

taidoka 0

... came up as my wife and I were discussing the "creative hangers-on form" of Stigler's Law. The original Stigler's Law, anticipated by sociologist Robert Merton and popularized by Stephen Stigler, is the idea that in science, no discovery is named after its original discoverer.

In creative circles, it comes up when someone who had little or nothing to do with a creative process takes credit for it. A few of my wife's friends were like this, dropping by to visit her while she was in the middle of a creative project, describing out loud what she was doing, then claiming, "I told her to do that."

In the words of Finn from The Rise of Skywalker: "You did not!"

In computing circles, the old joke referred to the Java programming language. I've heard several variants, but the distilled version is "He thinks he invented Java because he was in the room when someone made coffee." Apparently this is a good description of how Java itself was named: at least one person claimed to have come up with the name Java, while others disputed that - even suggesting they had opposed it - and credited someone else in the room instead, who in turn rejected the idea, noting only that there was some coffee in the room from Peet's.

Regardless, I dispute Howard Aiken's saying "Don't worry about people stealing your ideas. If your ideas are any good, you'll have to ram them down people's throats." Nah. Once you've forced an idea down someone's throat, they won't just swallow it, they'll claim it was in their stomach all along.

-the Centaur

The Embodied AI Workshop at CVPR 2021

centaur 0
embodied AI workshop

Hail, fellow adventurers: to prove I do something more than just draw and write, I'd like to send out a reminder of the Second Embodied AI Workshop at the CVPR 2021 computer vision conference. In the last ten years, artificial intelligence has made great advances in recognizing objects, understanding the basics of speech and language, and recommending things to people. But interacting with the real world presents harder problems: noisy sensors, unreliable actuators, incomplete models of our robots, building good simulators, learning over sequences of decisions, transferring what we've learned in simulation to real robots, or learning on the robots themselves.

interactive vs social navigation

The Embodied AI Workshop brings together many researchers and organizations interested in these problems, and also hosts nine challenges which test point, object, interactive and social navigation, as well as object manipulation, vision, language, auditory perception, mapping, and more. These challenges enable researchers to test their approaches on standardized benchmarks, so the community can more easily compare what we're doing. I'm most involved as an advisor to the Stanford / Google iGibson Interactive / Social Navigation Challenge, which forces robots to maneuver around people and clutter to solve navigation problems. You can read more about the iGibson Challenge at their website or on the Google AI Blog.

the iGibson social navigation environment

Most importantly, the Embodied AI Workshop has a call for papers, with a deadline of TODAY.

Call for Papers

We invite high-quality 2-page extended abstracts in relevant areas, such as:

  •  Simulation Environments
  •  Visual Navigation
  •  Rearrangement
  •  Embodied Question Answering
  •  Simulation-to-Real Transfer
  •  Embodied Vision & Language

Accepted papers will be presented as posters. These papers will be made publicly available in a non-archival format, allowing future submission to archival journals or conferences.

The submission deadline is May 14th (Anywhere on Earth). Papers should be no longer than 2 pages (excluding references) and styled in the CVPR format. Paper submissions are now open.

I assume anyone submitting to this already has their paper well underway, but this is your reminder to git'r done.

-the Centaur

A Bayesian Account of Miracles

centaur 0
bayes headshot

Christianity is a tall ask for many skeptically-minded people, especially if you come from the South, where a lot of folks express Christianity in terms of having a close personal relationship with a person claimed to be invisible, intangible and yet omnipresent, despite having been dead for 2000 years.

On the other hand, I grew up with a fair number of Christians who seem to have no skeptical bones at all, even at the slightest and most explainable of miracles, like my relative who went on a pilgrimage to the Virgin Mary apparitions at Conyers and came back "with their silver rosary having turned to gold."

Or, perhaps - not to be a Doubting Thomas - it was always of a yellowish hue.

Being a Christian isn't just a belief, it's a commitment. Being a Christian is hard, and we're not supposed to throw up stumbling blocks for other believers. So, when I encounter stories like these, which don't sound credible to me and which I don't need to support my faith, I often find myself biting my tongue.

But despite these stories not sounding credible, I do nevertheless admit that they're technically possible. In the words of one comedian, "The Virgin Mary has got the budget for it," and in a world where every observed particle event contains irreducible randomness, God has left Himself the room He needs.

But there's a long tradition in skeptical thought to discount rare events like alleged miracles, rooted in Enlightenment philosopher David Hume's essay "Of Miracles". I almost wrote "scientific thought", but this idea is not at all scientific - it's actually an injection of one of philosophy's worst sins into science.

Philosophy! Who needs it? Well, as Ayn Rand once said: everyone. Philosophy asks the basic questions What is there? (ontology), How do we know it? (epistemology), and What should we do? (ethics). The best philosophy illuminates possibilities for thought and persuasively argues for action.

But philosophy, carving its way through the space of possible ideas, must necessarily operate through arguments, principally verbal arguments which can never conclusively convince. To get traction, we must move beyond argument to repeatable reasoning - mathematics - backed up by real-world evidence.

And that's precisely what was happening right as Hume was working on his essay "Of Miracles" in the 1740's: the laws of probability and chance were being worked out by Hume's contemporaries, some of whom he corresponded with, but he couldn't wait - or couldn't be bothered to learn - their real findings.

I'm not trying to be rude to Hume here, but making a specific point: Hume wrote about evidence, and people claim his arguments are based in rationality - but Hume's arguments are only qualitative, and the quantitative mathematics of probability then being developed doesn't support his idea.

But they can reproduce his idea, and the ideas of the credible believer, in a much sounder framework.

In all fairness, it's best not to be too harsh with Hume, who wrote "Of Miracles" almost twenty years before Reverend Thomas Bayes' "An Essay towards solving a Problem in the Doctrine of Chances," the work which gave us Bayes' Theorem, which became the foundation of modern probability theory.

If the ground is wet, how likely is it that it rained? Intuitively, this depends on how likely it is that the rain would wet the ground, and how likely it is to rain in the first place, discounted by the chance the ground would be wet on its own, say from a sprinkler system.

In Greenville, South Carolina, it rains a lot, wetting the ground, which stays wet because it's humid, and sprinklers don't run all the time, so a wet lawn is a good sign of rain. Ask that question in Death Valley, with rare rain, dry air - and you're watering a lawn? Seriously? - and that calculus changes considerably.

Bayes' Theorem formalizes this intuition. It tells us the probability of an event given the evidence is determined by the likelihood of the evidence given the event, times the probability of the event, divided by the probability of the evidence happening all by its lonesome.
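To make that concrete, here's a minimal sketch in Python - all the numbers are invented purely for illustration, a Greenville-ish climate versus a Death-Valley-ish one:

    # P(event | evidence) = P(evidence | event) * P(event) / P(evidence)
    def posterior(p_evidence_given_event, p_event, p_evidence):
        return p_evidence_given_event * p_event / p_evidence

    p_wet_given_rain = 0.95     # rain almost always wets the ground
    p_wet_given_no_rain = 0.10  # sprinklers, dew, and other stray water

    # Greenville: rain is common, so a wet lawn is a good sign of rain.
    p_rain = 0.4
    p_wet = p_wet_given_rain * p_rain + p_wet_given_no_rain * (1 - p_rain)
    print(posterior(p_wet_given_rain, p_rain, p_wet))  # ~0.86: likely rain

    # Death Valley: rain is rare, so the same wet lawn probably isn't rain.
    p_rain = 0.01
    p_wet = p_wet_given_rain * p_rain + p_wet_given_no_rain * (1 - p_rain)
    print(posterior(p_wet_given_rain, p_rain, p_wet))  # ~0.09: probably not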

Since Bayes's time, probabilistic reasoning has been considerably refined. In the book Probability Theory: The Logic of Science, E. T. Jaynes, a twentieth-century physicist, shows probabilistic reasoning can explain cognitive "errors," political controversies, skeptical disbelief and credulous believers.

Jaynes's key idea is that for things like commonsense reasoning, political beliefs, and even interpreting miracles, we aren't combining evidence we've collected ourselves in a neat Bayesian framework: we're combining claims provided to us by others - and must now rate the trustworthiness of the claimer.

In our rosary case, the claimer drove down to Georgia to hear a woman speak at a farmhouse. I don't mean to throw up a stumbling block to something that's building up someone else's faith, but when the Bible speaks of a sign not being given to this generation, I feel like it's speaking to us today.

But, whether you see the witness as credible or not, Jaynes points out we also weigh alternative explanations. This doesn't affect judging whether a wet lawn means we should bring an umbrella, but when judging a silver rosary turning to gold, there are so many alternatives: lies, delusions, mistakes.

Jaynes shows, with simple math, that when we're judging a claim of a rare event with many alternative explanations, it's our trust in the claimer that dominates the change in our probabilistic beliefs. If we trust the claimer, we're likely to believe the claim; if we distrust the claimer, we're likely to mistrust the claim.
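Here's a toy version of that math in Python - the numbers are invented purely for illustration, with "trust" standing for the probability the claimer reports accurately, and everything else (lies, delusions, mistakes) lumped into the alternative:

    # Posterior that a rare event happened, given that someone claims it did.
    def p_event_given_claim(p_event, trust):
        p_claim_if_event = trust        # honest, accurate report
        p_claim_if_not = 1.0 - trust    # lie, delusion, or mistake
        numerator = p_claim_if_event * p_event
        return numerator / (numerator + p_claim_if_not * (1.0 - p_event))

    p_miracle = 1e-9  # silver rosaries very rarely transmute
    for trust in (0.5, 0.99, 0.999999999):
        print(trust, p_event_given_claim(p_miracle, trust))
    # 0.5         -> ~1e-9  (the posterior barely budges off the prior)
    # 0.99        -> ~1e-7  (still essentially the prior)
    # 0.999999999 -> ~0.5   (only near-perfect trust moves the needle)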

What's worse, there's a feedback loop between the trust and belief: if we trust someone, and they claim something we come to believe is likely, our trust in them is reinforced; if we distrust someone, and they claim something we come to believe is not likely, our distrust of them is reinforced too.

It shouldn't take a scientist or a mathematician to realize that this pattern is a pathology. Regardless of what we choose to believe, the actual true state of the world is a matter of natural fact. It did or did not rain, regardless of whether the ground is wet; the rosary did or did not change, whether it looks gold.

Ideally, whether you believe in the claimer - your opinions about people - shouldn't affect what you believe about reality - the facts about the world. But of course, it does. This is the real problem with rare events, much less miracles: they're resistant to experiment, which is our normal way out of this dilemma.

Many skeptics argue we should completely exclude the possibility of the supernatural. That's not science, it's just atheism in a trench coat trying to sell you a bad idea. What is scientific, in the words of Newton, is excluding from our scientific hypotheses any causes not necessary or sufficient to explain phenomena.

A one-time event, such as my alleged phone call to my insurance agent today to talk about a policy for my new car, is strictly speaking not a subject for scientific explanation. To analyze the event, it must be in a class of phenomena open to experiments, such as cell phone calls made by me, or some such.

Otherwise, it's just a data point. An anecdote, an outlier. If you disbelieve me - if you check my cell phone records and argue it didn't happen - scientifically, that means nothing. Maybe I used someone else's phone because mine was out of charge. Maybe I misremembered a report of a very real event.

Your beliefs don't matter. I'll still get my insurance card in a couple of weeks.

So-called "supernatural" events, such as the alleged rosary transmutation, fall into this category. You can't experiment on them to resolve your personal bias, so you have to fall back on your trust for the claimer. But that trust is, in a sense, a personal judgment, not a scientific one.

Don't get me wrong: it's perfectly legitimate to exclude "supernatural" events from your scientific theories - I do, for example. We have to: following Newton, for science to work, we must first provide as few causes as possible, with as many far-reaching effects as possible, until experiment says otherwise.

But excluding rare events from our scientific view of the world forecloses the ability of observation to revise our theories. And excluding supernatural events from our broader view of the world is not a requirement of science, but a personal choice - a deliberate choice not to believe.

That may be right. That may be wrong. What happens, happens, and doesn't happen any other way. Whether that includes the possibility of rare events is a matter of natural fact, not personal choice; whether that includes the possibility of miracles is something you have to take on faith.

-the Centaur

Pictured: Allegedly, Thomas Bayes, though many have little faith in the claimants who say this is him.

The Soul is the Form of the Body

centaur 0
aquinas headshot

If you've ever gone to a funeral, watched a televangelist, or been buttonholed by a street preacher, you've probably heard Christianity is all about saving one's immortal soul - by believing in Jesus, accepting the Bible's true teaching on a social taboo, or going to the preacher's church of choice.

(Only the first of these actually works, by the way).

But what the heck is a soul? Most religious people seem convinced that we've got one, some ineffable spiritual thing that isn't destroyed when you die but lives on in the afterlife.  Many scientifically minded people have trouble believing in spirits and want to wash their hands of this whole soul idea.

Strangely enough, modern Christian theology doesn't rely too much on the idea of the soul. God exists, of course, and Jesus died for our sins, sending the Holy Spirit to aid us; as for what to do with that information, theology focuses less on what we are and more on what we should believe and do.

If you really dig into it, Christian theology gets almost existential, focusing on us as living beings, present here on the Earth, making decisions and taking consequences. Surprisingly, when we die, our souls don't go to heaven: instead, you're just dead, waiting for the Resurrection and the Final Judgement.

(About that, be not afraid: Jesus, Prince of Peace, is the Judge at the Final Judgment).

This model of Christianity doesn't exclude the idea of the soul, but it isn't really needed: When we die, our decision making stops, defining our relationship to God, which is why it's important to get it right in this life; when it's time for the Resurrection, God has the knowledge and budget to put us back together.

That's right: according to the standard interpretation of the Bible as recorded in the Nicene creed, we're waiting in joyful hope for a bodily resurrection, not souls transported to a purely spiritual Heaven. So if there's no need for a soul in this picture, is there any room for it? What is the idea of the soul good for?

Well, quite a lot, as it turns out.

The theology I'm describing should be familiar to many Episcopalians, but it's more properly Catholic, and more specifically, "Thomistic" - teachings based on the writings of Saint Thomas Aquinas, a thirteenth-century friar who was recognized - both then and now - as one of the greatest Christian philosophers.

Aquinas was a brilliant man who attempted to reconcile Aristotle's philosophy with Church doctrine. The synthesis he produced was penetratingly brilliant, surprisingly deep, and, at least in part, is documented in books which are packed in boxes in my garage. So, at best, I'm going to riff on Thomas here.

Ultimately, that's for the best. Aquinas's writings predate the scientific revolution, using a scholastic style of argument which by its nature cannot be conclusive, and built on a foundation of topics about the world and human will which have been superseded by scientific findings on physics and psychology.

But the early date of Aquinas's writings affects his theology as well. For example (riffing as best I can without the reference book I want), Aquinas was convinced that the rational human soul necessarily had to be immaterial because it could represent abstract ideas, which are not physical objects.

But now we're good at representing abstract ideas in physical objects. In fact, the history of the past century and a half of mathematics, logic, computation and AI can be viewed as abstracting human thought processes and making them reliable enough to implement in physical machines.

Look, guys - I am not, for one minute, going to get cocky about how much we've actually cracked of the human intellect, much less the soul. Some areas, like cognitive skills acquisition, we've done quite well at; others, like consciousness, are yielding to insights; others, like emotion, are dauntingly intractable.

But it's no longer a logical necessity to posit an intangible basis for the soul, even if practically it turns out to be true. But digging even deeper into Aquinas's notion of a rational soul helps us understand what it is - and why the decisions we make in this life are so important, and even the importance of grace.

The idea of a "form" in Thomistic philosophy doesn't mean shape: riffing again, it means function. The form of a hammer is not its head and handle, but that it can hammer. This is very similar to the modern notion of functionalism in artificial intelligence - the idea that minds are defined by their computations.

Aquinas believed human beings were distinguished from animals by their rational souls, which were a combination of intellect and will. "Intellect" in this context might be described in artificial intelligence terms as supporting a generative knowledge level: the ability to represent essentially arbitrary concepts.

Will, in contrast, is selecting an ideal model of yourself and attempting to guide your actions to follow it. This is a more sophisticated form of decision making than typically used in artificial intelligence; one might describe it as a reinforcement learning agent guided by a self-generated normative model.
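In artificial intelligence terms - and this is a loose sketch of my own gloss, emphatically not anything from Aquinas - that notion of will might look like this in Python:

    # "Will" as an agent guided by a self-generated normative model:
    # score candidate actions by how well they match an ideal model of
    # yourself, then act accordingly. All values here are invented toys.
    def ideal_self(action):
        """The normative model: what would my best self do?"""
        return {"help": 1.0, "ignore": 0.0, "harm": -1.0}.get(action, 0.0)

    def will(candidate_actions):
        # Guide behavior toward the action the ideal self would take.
        return max(candidate_actions, key=ideal_self)

    print(will(["ignore", "help", "harm"]))  # 'help'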

What this means, in practice, is that the idea of believing in Jesus and choosing to follow Him isn't simply a good idea: it corresponds directly to the basic functions of the rational soul - intellect, forming an idea of Jesus as a (divinely) good role model, and will, attempting to follow in His footsteps in our choice of actions.

But the idea of the rational soul being the form of the body isn't just its instantaneous function at one point in time. God exists out of time - and all our thoughts and choices throughout our lives are visible to Him. Our souls are the sum of all of these - making the soul the form of the body over our entire lives.

This means the history of our choices lives in God's memory, whether it's helping someone across the street, failing to forgive an irritating relative, going to confession, or taking communion. Even sacraments like baptism that supposedly "leave an indelible spiritual character on the soul" fit in this model.

This model puts following Jesus - trying to do good and avoid evil, and partaking in the sacraments - in perspective. God knows what we sincerely believe in our hearts, whether we live up to it or not, and is willing to cut us slack through the mechanisms of worship and grace that add to our permanent record.

Whether souls have a spiritual nature or not - whether they come from the Guf, are joined to our bodies in life, and hang out in Hades after death awaiting reunion at the Resurrection, or whether they simply don't - their character is affected by what we believe, what we do, and how we worship here and now.

And that's why it's important to follow Jesus on this Earth, no matter what happens in the afterlife.

-the Centaur

Day 057

centaur 0
Turing Drawing

Alan Turing, rendered over my own roughs using several layers of tracing paper. I started with the rough below, in which I tried to pay careful attention to the layout of the face - note the use of the 'third eye' for spacing and curved contour lines - and the relationship of the body, the shoulders and so on.

Turing Rough 1

I then corrected that into the following drawing, trying to correct the position and angles of the eyes and mouth - since I knew from previous drawings that I tended to straighten things that were angled, I looked for those flaws and attempted to correct them. (Still screwed up the hair and some proportions.)

Turing Rough 2

This was close enough for me to get started on the rendering. In the end, I like how it came out, even though I flattened the curves of the hair and slightly squeezed the face and pointed the eyes slightly wrong, as you can see if you compare it to the following image from this New Yorker article:

Turing Photo

-the Centaur

Free Will and the Halting Problem

centaur 0
turing headshot

Lent is when Christians choose to give things up or take things on to reflect upon the death of Jesus. For Lent, I took on this self-referential series about Lent, arguing that Christianity is following Jesus, and that following role models is better than following rules, because all sets of rules are ultimately incomplete.

But how can we choose to follow Jesus? To many Christians, the answer is simple: "free will." At one Passion play (where I played Jesus, thanks to my long hair), the author put it this way: "You always choose, because no one can take your will away. You know that, don't you?"

Christians are highly attached to the idea of free will. However, I know a fair number of atheists and agnostics who seem equally attached to the idea that free will is a myth. I always find this bit of pseudoscience a bit surprising coming from scientifically minded folk, so it's worth asking the question.

Do we have free will, or not?

Well, it depends on what kind of free will we're talking about. Philosopher Daniel Dennett argues at book length that there are many definitions of "free will", only some varieties of which are worth having. I'm not going to use Dennett's breakdown of free will; I'll use mine, based on discussions with people who care.

The first kind of "free will" is undetermined will: the idea that "I", as consciousness or spirit, can make things happen, outside the control of physical law. Well, fine, if you want to believe that: the science of quantum mechanics allows that, since all observable events have unresolvable randomness.

But the science of quantum mechanics also suggests we could never prove that idea scientifically. To see why, look at entanglement: particles that are observed here are connected to particles over there. Say, if momentum is conserved, and two particles fly apart, if one goes left, the other must go right.

But each observed event is random. You can't predict one from the other; you can only extract it from the record by observing both particles and comparing the results. So if your soul is directing your body's choices, we could only tell by recording all the particles of your body and soul and comparing them.

Good luck with that.

The second kind of "free will" is instantaneous will: the idea that "I", at any instant of time, could have chosen to do something differently. It's unlikely we have this kind of free will. First, according to Einstein, simultaneity has no meaning for physically separated events - like the two hemispheres of your brain.

But, more importantly, the idea of an instant is just that - an idea. Humans are extended over time and space; the brain is fourteen hundred cubic centimeters of goo, making decisions over timescales ranging from a millisecond (a neuron fires) to a second and a half (something novel enters consciousness.)

But, even if you accept that we are physically and temporally extended beings, you may still cling to - or reject - an idea of free will: sovereign will, the idea that our decisions, while happening in our brains and bodies, are nevertheless our own. The evidence is fairly good that we have this kind of free will.

Our brains are physically isolated by our skulls and the blood-brain barrier. While we have reflexes, human decision making happens in the neocortex, which is largely decoupled from direct external responses. Even techniques like persuasion and hypnosis at best have weak, indirect effects.

But breaking our decision-making process down this way sometimes drives people away. It makes religious people cling to the hope of undetermined will; it makes scientific people erroneously think that we don't have free will at all, because our actions are not "ours", but are made by physical processes.

But arguing that "because my decisions are made by physical processes, therefore my decisions are not actually mine" requires the delicate dance of identifying yourself with those processes before the comma, then rejecting them afterwards. Either those decision making processes are part of you, or they are not.

If they're not, please go join the religious folks over in the circle marked "undetermined will."

If they are, then arguing that your decisions are not yours because they're made by ... um, the decision making part of you ... is a muddle of contradictions: a mix of equivocation (changing the meaning of terms) and a category error (mistaking your decision making as something separate from yourself).

But people committed to the non-existence of free will sometimes double down, claiming that even if we accept those decision making processes as part of us, our decisions are somehow not "ours" or not "free" because the outcome of our decision making process is still determined by physical laws.

To someone working on Markov decision processes - decision machines - this seems barely coherent.

The foundation of this idea is sometimes called Laplace's demon - the idea that a creature with perfect knowledge of all physical laws and particles and forces would be able to predict the entire history of the universe - and your decisions, so therefore, they're not your decisions, just the outcome of laws.

Too bad this is impossible. Not practically impossible - literally, mathematically impossible.

To see why, we need to understand the Halting Problem - the seemingly simple question of whether we can build a program to tell if any given computer program will halt given any particular input. As basic as this question sounds, Alan Turing proved in the 1930's that this is mathematically impossible.

The reason is simple: if you could build an analysis program which could solve this problem, you could feed it to itself - wrapped in a loop that runs forever if the analysis program says "halts," and halts if it says "runs forever." No matter what answer it produces, it leads to a contradiction. The program won't work.
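In Python, the whole argument fits in a few lines, assuming a hypothetical halts() oracle - which is exactly the thing Turing proved cannot exist:

    def halts(program, data):
        """Hypothetical oracle: True if program(data) eventually halts."""
        raise NotImplementedError("Turing proved no such function exists")

    def paradox(program):
        # Loop forever if the oracle says "halts"; halt if it says "runs forever".
        if halts(program, program):
            while True:
                pass
        return

    # Now feed the paradox to itself: if halts(paradox, paradox) returns True,
    # then paradox(paradox) loops forever; if it returns False, paradox(paradox)
    # halts. Either answer is wrong, so halts() cannot be written.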

This idea seems abstract, but its implications are deep. It applies to not just computer programs, but to a broad class of physical systems in a broad class of universes. And it has corollaries, the most important being: you cannot predict what any arbitrary given algorithm will do without letting the algorithm do it.

If you could, you could use it to predict whether a program would halt, and therefore, you could solve the Halting Problem. That's why Laplace's Demon, as nice a thought experiment as it is, is slain by Turing's Machine. To predict what you would actually do, part of the demon would have to be identical to you.

Nothing else in the universe - nothing else in a broad class of universes - can predict your decisions. Your decisions are made in your own head, not anyone else's, and even though they may be determined by physical processes, the physical processes that determine them are you. Only you can do you.

So, you have sovereign will. Use it wisely.

-the Centaur

Pictured: Alan Turing, of course.

Jesus and Gödel

centaur 1
kurt godel

Yesterday I claimed that Christianity was following Jesus - looking at him as a role model for thinking, judging, and doing, stepping away from rules and towards principles, choosing good outcomes over bad ones and treating others like we wanted to be treated, and ultimately emulating what Jesus would do.

But it's an entirely fair question to ask, why do we need a role model to follow? Why not have a set of rules that guide our behavior, or develop good principles to live by? Well, it turns out it's impossible - not hard, but literally mathematically impossible - to have perfect rules, and principles do not guide actions. So a role model is the best tool we have to help us build the cognitive skill of doing the right thing.

Let's back up a bit. I want to talk about what rules are, and how they differ from principles and models.

In the jargon of my field, artificial intelligence, rules are if-then statements: if this, then do that. They map a range of propositions to a domain of outcomes, which might be actions, new propositions, or edits to our thoughts. There's a lot of evidence that the lower levels of operation of our minds are rule-like.

Principles, in contrast, are descriptions of situations. They don't prescribe what to do; they evaluate what has been done. The venerable artificial intelligence technique of generate-and-test - throw stuff on the wall to see what sticks - depends on "principles" to evaluate whether the outcomes are good.

Models are neither if-then rules nor principles. Models predict the evolution of a situation. Every time you play a computer game, a model predicts how the world will react to your actions. Every time you think to yourself, "I know what my friend would say in response to this", you're using a model.
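To make those distinctions concrete, here's a minimal sketch in Python; the crossing-the-street framing and all the names are invented for illustration:

    # A rule maps a condition directly to an action: if this, then do that.
    def rule(state):
        return "walk" if state == "light_is_green" else "wait"

    # A principle doesn't generate actions; it only evaluates outcomes.
    def principle(outcome):
        return outcome != "got_hit"

    # A model predicts how a situation evolves in response to an action.
    def model(state, action):
        if action == "walk" and state == "light_is_red":
            return "got_hit"
        return "still_safe"

    # Generate-and-test: a model plus a principle can pick an action
    # even without a complete set of rules.
    def choose(state, actions=("walk", "wait")):
        return [a for a in actions if principle(model(state, a))]

    print(choose("light_is_red"))  # ['wait'] - the principle vetoes walking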

Rules, of a sort, may underlie our thinking, and some of our most important moral precepts are encoded in rules, like the Ten Commandments. But rules are fundamentally limited. No matter how attached you are to any given set of rules, eventually, those rules can fail you, and you can't know when.

The iron laws behind these fatal flaws are Gödel's incompleteness theorems. Back in the 1930's, Kurt Gödel showed any set of rules sophisticated enough to handle basic math would either fail to find things that were true, or would make mistakes - and, worse, could never prove that they were consistent.

Like so many seemingly abstract mathematical concepts, this has practical real-world implications. If you're dealing with anything at all complicated, and try to solve your problems with a set of rules, either those rules will fail to find the right answers, or will give the wrong answers, and you can't tell which.

That's why principles are better than rules: they make no pretension of being a complete set of if-then rules that can handle all of arithmetic and their own job besides. Because they evaluate propositions rather than generating them, they're not vulnerable to the incompleteness result in the same way.

How does this affect the moral teachings of religion? Well, think of it this way: God gave us the Ten Commandments (and much more) in the Old Testament, but these if-then rules needed to be elaborated and refined into a complete system. This was a cottage industry by the time Jesus came on the scene.

Breaking with the rule-based tradition, Jesus gave us principles, such as "love thy neighbor as thyself" and "forgive as you wish to be forgiven" which can be used to evaluate our actions. Sometimes, some thought is required to apply them, as in the case of "Is it lawful to do good or evil on the Sabbath?"

This is where principles fail: they don't generate actions, they merely evaluate them. Some other process needs to generate those actions. It could be a formal set of rules, but then we're back at square Gödel. It could be a random number generator, but an infinite set of monkeys will take forever to cross the street.

This is why Jesus's function as a role model - and the stories about Him in the Bible - are so important to Christianity. Humans generate mental models of other humans all the time. Once you've seen enough examples of someone's behavior, you can predict what they will do, and act and react accordingly.

The stories the Bible tells about Jesus facing moral questions, ethical challenges, physical suffering, and even temptation help us build a model of what Jesus would do. A good model of Jesus is more powerful than any rule and more useful than any principle: it is generative, easy to follow, and always applicable.

Even if you're not a Christian, this model of ethics can help you. No set of rules can be complete and consistent, or even fully checkable: rules lawyering is a dead end. Ethical growth requires moving beyond easy rules to broader principles which can be used to evaluate the outcomes of your choices.

But principles are not a guide to action. That's where role models come in: in a kind of imitation-based learning, they can help guide us by example until we've developed the cognitive skills to make good decisions automatically. Finding role models that you trust can help you grow, and not just morally.

Good role models can help you decide what to do in any situation. Not every question is relevant to the situations Jesus faced in ancient Galilee! For example, when faced with a conundrum, I sometimes ask three questions: "What would Jesus do? What would Richard Feynman do? What would Ayn Rand do?"

These role models seem far apart - Ayn Rand, in particular, tried to put herself on the opposite pole from Jesus. But each brings unique mental thought processes to the table - "Is this doing good or evil?" "You are the easiest person for yourself to fool" and "You cannot fake reality in any way whatsoever."

Jesus helps me focus on what choices are right. Feynman helps me challenge my assumptions and provides methods to test them. Rand is benevolent, but demands that we be honest about reality. If two or three of these role models agree on a course of action, it's probably a good choice.

Jesus was a real person in a distant part of history. We can only reach an understanding of who Jesus is and what He would do by reading the primary source materials about him - the Bible - and by analyses that help put these stories in context, like religious teachings, church tradition, and the use of reason.

But that can help us ask what Jesus would do. Learning the rules is important, and graduating beyond them to understand principles is even more important. But at the end of the day, we want to do the right thing, by following the lead of the man who asks, "Love thy neighbor as thyself."

-the Centaur

Pictured: Kurt Gödel, of course.

Renovation in Process

centaur 0
So you may have noticed the blog theme and settings changing recently; that's because I'm trying to get some kind of slider or visual image above the fold. I love the look of the blog with the big banner image, but I'm concerned that people just won't scroll down to see what's in the blog if there's nothing on the first page which says what I do. So I'll be experimenting. Stay tuned!

-the Centaur

Pictured: Yeah, this isn't the only renovation going on.

Robots in Montreal

centaur 1
A cool hotel in old Montreal.

"Robots in Montreal," eh? Sounds like the title of a Steven Moffat Doctor Who episode. But it's really ICRA 2019 - the IEEE Conference on Robotics and Automation, and, yes, there are quite a few robots!

Boston Dynamics quadruped robot with arm and another quadruped.

My team presented our work on evolutionary learning of rewards for deep reinforcement learning, AutoRL, on Monday. In an hour or so, I'll be giving a keynote on "Systematizing Robot Navigation with AutoRL":

Keynote: Dr. Anthony Francis
Systematizing Robot Navigation with AutoRL: Evolving Better Policies with Better Evaluation

Abstract: Rigorous scientific evaluation of robot control methods helps the field progress towards better solutions, but deploying methods on robots requires its own kind of rigor. A systematic approach to deployment can do more than just make robots safer, more reliable, and more debuggable; with appropriate machine learning support, it can also improve robot control algorithms themselves. In this talk, we describe our evolutionary reward learning framework AutoRL and our evaluation framework for navigation tasks, and show how improving evaluation of navigation systems can measurably improve the performance of both our evolutionary learner and the navigation policies that it produces. We hope that this starts a conversation about how robotic deployment and scientific advancement can become better mutually reinforcing partners.

Bio: Dr. Anthony G. Francis, Jr. is a Senior Software Engineer at Google Brain Robotics specializing in reinforcement learning for robot navigation. Previously, he worked on emotional long-term memory for robot pets at Georgia Tech's PEPE robot pet project, on models of human memory for information retrieval at Enkia Corporation, and on large-scale metadata search and 3D object visualization at Google. He earned his B.S. (1991), M.S. (1996) and Ph.D. (2000) in Computer Science from Georgia Tech, along with a Certificate in Cognitive Science (1999). He and his colleagues won the ICRA 2018 Best Paper Award for Service Robotics for their paper "PRM-RL: Long-range Robotic Navigation Tasks by Combining Reinforcement Learning and Sampling-based Planning". He's the author of over a dozen peer-reviewed publications and is an inventor on over a half-dozen patents. He's published over a dozen short stories and four novels, including the EPIC eBook Award-winning Frost Moon; his popular writing on robotics includes articles in the books Star Trek Psychology and Westworld Psychology, as well as a Google AI blog article titled Maybe your computer just needs a hug. He lives in San Jose with his wife and cats, but his heart will always belong in Atlanta. You can find out more about his writing at his website.

Looks like I'm on in 15 minutes! Wish me luck.

-the Centaur


I<tab-complete> welcome our new robot overlords.

centaur 0
Hoisted from a recent email exchange with my friend Gordon Shippey:
Re: Whassap?
Gordon: Sounds like a plan. (That was an actual GMail suggested response. Grumble-grumble AI takeover.)
Anthony: I<tab-complete> welcome our new robot overlords.
I am constantly amazed by the new autocomplete. While, anecdotally, spell-checking autocorrect is getting worse and worse (I blame the nearly-universal phenomenon of U-shaped development, where a system trying to learn new generalizations gets worse before it gets better), I have written near-complete emails to friends and colleagues with Gmail's suggested responses, and when writing texts to my wife, it knows our shorthand!

One way of doing this back in the day was Markov chain text models, where we learn predictions of what patterns are likely to follow each other; so if I write "love you too boo boo" to my wife enough times, it can predict "boo boo" will follow "love you too" and provide it as a completion. More modern systems use recurrent neural networks to learn richer sets of features, with stateful information carried down the chain, enabling them to capture subtler relationships and get better results, as described in the great article "The Unreasonable Effectiveness of Recurrent Neural Networks".

-the<tab-complete> Centaur
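P.S. For the curious, here's a toy sketch in Python of the Markov chain completion idea - nothing like Gmail's actual system, just the old-school version:

    import random
    from collections import defaultdict

    # Learn which word tends to follow each pair of words.
    def train(text):
        model = defaultdict(list)
        words = text.split()
        for a, b, c in zip(words, words[1:], words[2:]):
            model[(a, b)].append(c)
        return model

    # Complete a phrase by repeatedly sampling a likely next word.
    def complete(model, a, b, n=3):
        out = [a, b]
        for _ in range(n):
            followers = model.get((out[-2], out[-1]))
            if not followers:
                break
            out.append(random.choice(followers))
        return " ".join(out)

    model = train("love you too boo boo " * 20)
    print(complete(model, "love", "you"))  # love you too boo boo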

PRM-RL Won a Best Paper Award at ICRA!

centaur 2
So, this happened! Our team's paper on "PRM-RL" - a way to teach robots to navigate their worlds which combines human-designed algorithms that use roadmaps with deep-learned algorithms to control the robot itself - won a best paper award at the ICRA robotics conference!

I talked a little bit about how PRM-RL works in the post "Learning to Drive ... by Learning Where You Can Drive", so I won't go over the whole spiel here - but the basic idea is that we've gotten good at teaching robots to control themselves using a technique called deep reinforcement learning (the RL in PRM-RL) that trains them in simulation, but it's hard to extend this approach to long-range navigation problems in the real world. We overcome this barrier by using a more traditional robotic approach, probabilistic roadmaps (the PRM in PRM-RL), which build maps of where the robot can drive using point-to-point connections; we combine these maps with the robot simulator and, boom, we have a map of where the robot thinks it can successfully drive. We were cited not just for this technique, but for testing it extensively in simulation and on two different kinds of robots.

I want to thank everyone on the team - especially Sandra Faust for her background in PRMs and for taking point on the idea (and doing all the quadrotor work with Lydia Tapia), Oscar Ramirez and Marek Fiser for their work on our reinforcement learning framework and simulator, Kenneth Oslund for his heroic last-minute push to collect the indoor robot navigation data, and our manager James for his guidance, contributions to the paper and support of our navigation work.

Woohoo! Thanks again everyone!

-the Centaur
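P.S. For the curious, here's a toy sketch in Python of the PRM half of the idea: sample random points, keep the collision-free ones, and connect near neighbors you can drive between. In PRM-RL the connection check is the RL policy driving that leg in simulation; here it's just a straight-line test, and everything is invented for illustration:

    import math, random

    def straight_line_free(a, b, is_free, steps=20):
        # Stand-in connection check: sample points along the segment a->b.
        return all(
            is_free((a[0] + (b[0] - a[0]) * t / steps,
                     a[1] + (b[1] - a[1]) * t / steps))
            for t in range(steps + 1))

    def build_prm(is_free, n_samples=200, radius=0.2):
        nodes = [p for p in ((random.random(), random.random())
                             for _ in range(n_samples)) if is_free(p)]
        edges = [(a, b)
                 for i, a in enumerate(nodes) for b in nodes[i + 1:]
                 if math.dist(a, b) < radius and straight_line_free(a, b, is_free)]
        return nodes, edges

    # A toy world with a square obstacle in the middle.
    def is_free(p):
        return not (0.4 < p[0] < 0.6 and 0.4 < p[1] < 0.6)

    nodes, edges = build_prm(is_free)
    print(len(nodes), "nodes,", len(edges), "edges in the roadmap")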

Dave, We’re On Your Side

centaur 0
The biggest "current" in my mind is the person I am currently worried about, my good friend and great Game AI developer Dave Mark. Dave is the founder of the GDC AI Summit ... but was struck by a car leaving the last sessions at GDC, and still is in the hospital, seriously injured. Dave is a really special person. I've been going to GDC longer than Dave, but it was he (along with my friend Neil Kirby) who drew me out of my shell and got me to participate in the Game AI community, which is a super important part of my life even though I don't do Game AI for my day job. Dave's friends and family have set up a Go Fund Me to help cover his medical expenses and the travel and other expenses of his family while he remains in the hospital in the Bay Area. I encourage you all to help out - especially if you've ever played a game and found the AI especially clever. Dave, you're in our prayers ... -the Centaur Pictured: Dave (on the right) and friends.

Just Checking in on the Currents

centaur 0
SO! Hey! GDC and Clockwork Alchemy are over and I'm not dead! (A joke which I actually don't find that funny given the circumstances, which I'll dig into in just a moment.)

Strangely enough, hitting two back-to-back conferences, both of which you participate in super heavily, can take something out of your blog. Who knew? But I need to get better at blogging, so I thought I'd try something new: a "check-in" in which I try to hit all the same points each time - what am I currently writing, editing, programming, etc.? For example, I am currently:
  • Listening To: Tomb Raider soundtrack (the original).
  • Reading: Theoretical Neuroscience (book).
  • Writing: "Death is a Game for the Young", a novella in the Jeremiah Willstone multiverse.
  • Editing: SPECTRAL IRON, Dakota Frost #4.
  • Reviewing: SHATTERED SKY, Lunar Cycle #2 by David Colby.
  • Researching: Neural Approaches to Universal Subgoaling.
  • Programming: A toy DQN (Deep Q Network) to stretch my knowledge.
  • Drawing: Steampunk girls with goggles.
  • Planning: Camp Nanowrimo for April, ROOT USER, Cinnamon Frost #3.
  • Taking on: Giving up alcohol for Lent.
  • Dragging on: Doing my taxes.
  • Spring Cleaning: The side office.
  • Trying to Ignore: The huge pile of blogposts left over from GDC and CA.
  • Caring For: My cat Lenora, suffering from cancer.
  • Waiting For: My wife Sandi, returning from a business trip.
Whew, that's a lot, and I don't even think I got them all. Maybe I won't try to write all of the same "currents" every time, but it was a useful exercise in "find something to blog about without immediately turning it into a huge project."

But the biggest "current" in my mind is the person I am currently worried about, my good friend and great game AI developer Dave Mark. Dave is the founder of the GDC AI Summit ... but he was struck by a car leaving the last sessions at GDC, and is still in the hospital, seriously injured. More in a moment.

-the Centaur

Pictured: Butterysmooooth sashimi at Izakaya Ginji in San Mateo from a few days ago, along with my "Currently Reading" book Theoretical Neuroscience, open to the Linear Algebra appendix, from when I was "Currently Researching" some technical details of the vector notation of quadratic forms by going through stacks and stacks of books - a question which would have been answered more easily if I had started by looking at the entry for quadratic forms in Wolfram's MathWorld, had I only known at the start of my search that that was the name for math terms like x^T W x.

Enter Colaboratory (AKA “A Spoonful of the Tracking Soup”)

centaur 0
As an author, I'm interested in how well my books are doing: not only do I want people reading them, I also want to compare what my publisher and booksellers claim about my books with my actual sales. (Also, I want to know how close to retirement I am.)

In the past, I used to read a bunch of web pages on Amazon (and Barnes and Noble too, before they changed their format) and entered the numbers into an Excel spreadsheet called "Writing Popularity" (which just as easily could have been called "Writing Obscurity", yuk yuk yuk). That was fine when I had one book, but now I have four novels and an anthology out. This could take half an hour or more, which I needed for valuable writing time. I needed a better system.

I knew about tools for parsing web pages, like the parsing library Beautiful Soup, but it had been half a decade since I touched that library and I just never had the time to sit down and do it. But, recently, I've realized the value of a great force multiplier for exploratory software development (and I don't mean Stack Exchange): interactive programming notebooks.

Pioneered by Mathematica in 1988 and picked up by tools like IPython and its descendant Jupyter, an interactive programming notebook is like a mix of a command line - where you can dynamically enter commands and get answers - and literate programming, where code is woven into the documents that describe (and produce) it. But Mathematica isn't the best tool for either web parsing or for producing code that will one day become a library - it's written in the Wolfram Language, which is optimized for mathematical computations - and Jupyter notebooks require setting up a Jupyter server or otherwise jumping through hoops.

Enter Google's Colaboratory. Colab is a free service provided by Google that hosts Jupyter notebooks. It's got most of the standard libraries that you might need, it provides its own backends to run the code, and it saves copies of the notebooks to Google Drive, so you don't have to worry about acquiring software or running a server or even saving your data (but do please hit save). Because you can try code out and see the results right away, it's perfect for iterating on ideas: no need to restart a changed program, losing valuable seconds; if something doesn't work, you can tweak the code and try it right away. In this sense Colab has some of the force multiplier effects of a debugger, but it's far more powerful. Heck, in this version of the system you can ask a question on Stack Overflow right from the Help menu. How cool is that?

My prototyping session got a bit long, so rather than try to insert it inline here, I wrote this blog post in Colab! To read more, go take a look at the Colaboratory notebook itself, "A Sip of the Tracking Soup", available at: https://goo.gl/Mihf1n

-the Centaur
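P.S. For the curious, the core of the scraping approach is only a few lines of Python. This is a hedged sketch rather than the actual notebook code - the URL and CSS selector below are placeholders, and any real page needs its own selector:

    import requests
    from bs4 import BeautifulSoup

    def fetch_rank(url, selector):
        # Download the product page and parse it into a navigable tree.
        page = requests.get(url, headers={"User-Agent": "Mozilla/5.0"})
        soup = BeautifulSoup(page.text, "html.parser")
        # Pull out whatever element holds the sales rank on this page.
        element = soup.select_one(selector)
        return element.get_text(strip=True) if element else None

    # Hypothetical example - both arguments are placeholders.
    print(fetch_rank("https://example.com/frost-moon", "#salesRank"))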

Why I’m Solving Puzzles Right Now

centaur 0
When I was a kid (well, a teenager) I'd read puzzle books for pure enjoyment. I'd gotten started with Martin Gardner's mathematical recreation books, but the ones I really liked were Raymond Smullyan's books of logic puzzles. I'd go to Wendy's on my lunch break at Francis Produce, with a little notepad and a book, and chew my way through a few puzzles. I'll admit I often skipped ahead if they got too hard, but I did my best most of the time. I read more of these as an adult, moving back to the Martin Gardner books. But sometime, about twenty-five years ago (when I was in the thick of grad school) my reading needs completely overwhelmed my reading ability. I'd always carried huge stacks of books home from the library, never finishing all of them, frequently paying late fees, but there was one book in particular - The Emotions by Nico Frijda - which I finished but never followed up on. Over the intervening years, I did finish books, but read most of them scattershot, picking up what I needed for my creative writing or scientific research. Eventually I started using the tiny little notetabs you see in some books to mark the stuff that I'd written, a "levels of processing" trick to ensure that I was mindfully reading what I wrote. A few years ago, I admitted that wasn't enough, and consciously  began trying to read ahead of what I needed to for work. I chewed through C++ manuals and planning books and was always rewarded a few months later when I'd already read what I needed to to solve my problems. I began focusing on fewer books in depth, finishing more books than I had in years. Even that wasn't enough, and I began - at last - the re-reading project I'd hoped to do with The Emotions. Recently I did that with Dedekind's Essays on the Theory of Numbers, but now I'm doing it with the Deep Learning. But some of that math is frickin' beyond where I am now, man. Maybe one day I'll get it, but sometimes I've spent weeks tackling a problem I just couldn't get. Enter puzzles. As it turns out, it's really useful for a scientist to also be a science fiction writer who writes stories about a teenaged mathematical genius! I've had to simulate Cinnamon Frost's staggering intellect for the purpose of writing the Dakota Frost stories, but the further I go, the more I want her to be doing real math. How did I get into math? Puzzles! So I gave her puzzles. And I decided to return to my old puzzle books, some of the ones I got later but never fully finished, and to give them the deep reading treatment. It's going much slower than I like - I find myself falling victim to the "rule of threes" (you can do a third of what you want to do, often in three times as much time as you expect) - but then I noticed something interesting. Some of Smullyan's books in particular are thinly disguised math books. In some parts, they're even the same math I have to tackle in my own work. But unlike the other books, these problems are designed to be solved, rather than a reflection of some chunk of reality which may be stubborn; and unlike the other books, these have solutions along with each problem. So, I've been solving puzzles ... with careful note of how I have been failing to solve puzzles. I've hinted at this before, but understanding how you, personally, usually fail is a powerful technique for debugging your own stuck points. 
I get sloppy, I drop terms from equations, I misunderstand conditions, I overcomplicate solutions, I grind against problems where I should ask for help, I rabbithole on analytical exploration, and I always underestimate the time it will take for me to make the most basic progress. Know your weaknesses. Then you can work those weak mental muscles, or work around them to build complementary strengths - the way Richard Feynman would always check over an equation when he was done, looking for those places where he had flipped a sign.

Back to work!

-the Centaur

Pictured: my "stack" at a typical lunch. I'll usually get to one out of three of the things I bring for myself to do. Never can predict which one though.

Nailed It (Sorta)

centaur 0
Here's what was in the rabbit hole from last time (I had been almost there): I had way too much data to exploit, so I started to think about culling it, using the length of the "mumbers" to cut off all the items too big to care about. That led to the key missing insight: my method of mapping mumbers put the first digit of each item at the same angle - that is, 9, 90, 900 and 9000 all pointed the same direction, just further out. The distance from the center was already a logarithm of the number, but once I dropped my resistance to taking the logarithm twice...

... then I could create a transition plot function which worked for almost any mumber in the sets of mumbers I was playing with ...

Then I could easily visualize the small set of transitions - "mumbers" with 3 digits - that yielded the graph above; for reference, these are:

The actual samples I wanted to play with were larger, up to 4 digits:

This yields a still-visible graph:

And this, while it doesn't let me visualize the whole space I wanted, does provide the insight I was after. The "mumbers" up to 10,000 do indeed "produce" most of the space of the smaller "mumbers" (not surprising, as the "mumber" rule 2XYZ produces XYZ, and 52XY produces XYXY ... meaning most numbers in the first 10,000 will be produced by one in that first set). But it also shows that sequences of 52-rule transitions on the left produce a few very, very large mumbers - probably because 552552 produces 552552552552, which produces 552552552552552552552552552552552552, which quickly zooms away to the "mumberOverflow" value at the top of my chart.

And now the next lesson: finishing up this insight, which more or less closes out what I wanted to explore here, took 45 minutes. I had 15 allotted to do various computer tasks before leaving Aqui, and I'm already 30 minutes over that ... which suggests, again, that you be careful going down rabbit holes; unlike leprechaun trails, there isn't likely to be a pot of gold down there, and who knows how far down it can go?

-the Centaur

P.S. I am not suggesting this time spent was not worthwhile; I'm just trying to understand the opportunity cost of different problem-solving strategies so I can become more efficient.
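P.P.S. For the very curious, here's a minimal sketch in Python of the two pieces above, assuming production rules consistent with the examples in this post (2X produces X; 5X produces YY, where Y is whatever X produces - which gives you 52XY producing XYXY). The function names are mine for this post; my actual exploration lived in a Mathematica notebook, which this only approximates.

```python
import math

def produce(m: str) -> str | None:
    """One production step: 2X yields X; 5X yields YY, where Y is what X yields.
    Consistent with the examples above: produce("552552") == "552552552552"."""
    if len(m) > 1 and m[0] == "2":
        return m[1:]
    if len(m) > 1 and m[0] == "5":
        inner = produce(m[1:])
        return inner + inner if inner else None
    return None  # no rule applies

def polar_position(m: str) -> tuple[float, float]:
    """Map a mumber to (angle, radius): the first digit fixes the angle, and
    taking the logarithm twice keeps even enormous mumbers on the chart."""
    angle = 2 * math.pi * int(m[0]) / 10
    radius = math.log10(1 + math.log10(int(m)))
    return angle, radius

print(produce("552552"))         # -> 552552552552
print(polar_position("552552"))  # same angle as every mumber starting with 5
```

Drawing an arrow from polar_position(m) to polar_position(produce(m)) for each mumber in the sample is all the transition plot is; the double logarithm is what keeps the 36-digit monsters from flinging everything else into a dot at the center.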

Don’t Fall Into Rabbit Holes

centaur 2
SO! There I was, trying to solve the mysteries of the universe, learn about deep learning, and teach myself enough puzzle logic to create credible puzzles for the Cinnamon Frost books, and I found myself debugging the fine details of a visualization system I'd developed in Mathematica to analyze the distribution of problems in an odd middle chapter of Raymond Smullyan's The Lady or the Tiger.

I meant well! Really I did. I was going to write a post about how finding a solution is just a little bit harder than you normally think, and how insight sometimes comes after letting things sit. But the tools I was creating didn't do what I wanted, so I went deeper and deeper down the rabbit hole trying to get the visualizations right. The short answer seems to be that there's no "there" there, and that further pursuit of this sub-problem will take me further and further away from the real problem: writing great puzzles!

I learned a lot - about numbers, about how things can combinatorially explode, about Ulam spirals and how to code them algorithmically. I even learned something about how I, particularly, fail in these cases. But it didn't provide the insights I wanted. Feynman warned about this: he called it "the computer disease" - worrying about the formatting of the printout so much that you forget about the answer you're trying to produce - and it can strike anyone in my line of work.

Back to that work.

-the Centaur
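P.S. Since I mentioned coding Ulam spirals algorithmically: here's a minimal sketch of the coordinate walk, in Python rather than the Mathematica I was actually using. It starts 1 at the origin and winds outward counterclockwise in runs of 1, 1, 2, 2, 3, 3, ... steps; plot only the prime positions and the famous diagonal streaks appear.

```python
def is_prime(n: int) -> bool:
    """Trial division - plenty fast for spiral-sized numbers."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def ulam_coordinates(count: int):
    """Yield (n, x, y) along a square spiral: 1 sits at the origin, and the
    walk turns left after runs of 1, 1, 2, 2, 3, 3, ... steps."""
    x = y = 0
    dx, dy = 1, 0
    run, steps = 1, 0
    for n in range(1, count + 1):
        yield n, x, y
        x, y = x + dx, y + dy
        steps += 1
        if steps == run:
            steps = 0
            dx, dy = -dy, dx   # rotate the heading counterclockwise
            if dy == 0:        # after every second turn, the runs get longer
                run += 1

# Print (or better, plot) just the primes: the diagonals are the payoff.
for n, x, y in ulam_coordinates(50):
    if is_prime(n):
        print(n, (x, y))
```

Which, of course, is exactly the kind of thing the computer disease feeds on - so consider this the one souvenir I allowed myself from the rabbit hole.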