More sketches from Wizard - How To Draw: Basic Training. I was curious about what happened to Wizard, and it apparently imploded with the big move to the Internet - just like many Internet publications imploded with the move to regurgitated garbage hidden behind sociopathic paywalls. But I'm not bitter.
Super quick sketch of Cinnamon, as I was in food coma after Easter dinner, then had to write a long review for a journal - which I was already a day late on.
And, counting "a day late" as "missed a thing", I once again "missed a thing" because I was in a meeting which we decided to let run long. Which made the next meeting run long, and we extended it even longer. And because there wasn't a specific thing on my calendar for Saturday evening - it was just an item on my todo list - I said, "eh, let's let this go long and get this done."
And then something else didn't get done.
I've learned to watch out for this zealously, because for me, at least, going long on a meeting is a dangerous prescription for screwing up your next task. If you think you can go a bit longer ... what are you missing?
When you've got a lot to do, sometimes it's tempting to just "power through it" - for example, by extending a meeting time until all the agenda items are handled. But this is just another instance of what's called "hero programming" in the software world, and while sometimes it's necessary (say, the day of a launch) it isn't a sustainable long-term strategy, and will incur debts that you can't easily repay.
Case in point, for the Neurodiversiverse Anthology, my coeditor and I burned up our normally scheduled meeting discussing, um, scheduling with the broader Thinking Ink team, so we added a spot meeting to catch up. We finalized the author and artist contracts, we developed guidance for the acceptance and rejection letters, and did a whole bunch of other things. It felt very productive.
But, all in all, a one hour meeting became three and a half, and I ended up missing two scheduled meetings because of that. The meetings hadn't yet landed on the calendar - one because we were still discussing it via email, and the other because it was a standing meeting out of my control. But because our three and a half hour meeting extended over the time we were supposed to follow up and set the actual meeting time, we never set that time, and when I was playing catch up later that evening, I literally spaced on what day of the week it was, and didn't notice the other meeting had started until it was over.
All that's on me, of course - it's important to put stuff on the calendar as soon as possible, including standing meetings, even if the invite is only for you, and I have no-one else to blame for that broken link in the chain. And both I and my co-editor agreed to (and wanted to) keep "powering through it" so we didn't have to schedule a Saturday meeting. But, I wonder: did my co-editor also have cascading side effects due to this longer meeting? How was her schedule impacted by this?
Overall, this is an anthology, and book publishing has long and unexpectedly complex and tight schedules: if we don't push to get the editing done ASAP, we'll miss our August publishing window. But it's worth remembering that we need to be kind to ourselves and realistic about our capabilities, or we'll burn out and still miss our window.
That happened to me once in grad school - on what I recall was my first trip to the Bay Area, in fact. I hadn't gotten as much done on my previous internship, and started trying to "power through it" to get a lot done from the very first week, putting in super long hours. I started to burn out the very first weekend - I couldn't keep the pace. Nevertheless, I kept trying to push, and even took on new projects, like the first draft of the proposal for the Personal Pet (PEPE) robotic assistant project.
In one sense, that all worked out: my internship turned into a love of the Bay Area, where I lived for ~16 years of my life; the PEPE project led to another internship in Japan, to co-founding Enkia, to a job at Google, and ultimately to my new career in robotics.
But, in another sense, it didn't: I got RSI from a combination of typing every day for work, typing every night for the proposal, and blowing off steam by playing video games when done. I couldn't type for almost nine months - right in the middle of writing my PhD thesis, which I couldn't put on hold - and had to learn to write with my left hand. I was VERY lucky: I know some other people from grad school with permanent wrist damage.
"Powering through it" isn't sustainable, and while it can lead to short-term gains and open long-term doors, it can also lead to short-term gaffes and long-term (or even permanent) injuries. That's why it's super important to figure out how to succeed at what you're doing by working at a sustainable pace, so you can conserve your "powering through it" resources for the times when you're really in the clutch.
Because if you don't save your resources for when you need them, you can burn yourself out along the way, and still fail despite your hard work - perhaps walking away with a disability as a consolation prize.
-the Centaur
Pictured: Powering through taking a photograph doesn't work that well, does it?
Don't you hate it when you think of something clever to say, but forget to write it down? I do. My wife and I were having a discussion and I came up with some very clever statement of the form "if people do this, they don't end up doing that", but now I can't remember it, so please enjoy this picture of a cat sending an email.
Just a moment. Just a moment.
"If you haven't climbed a mountain before, thinking about what you'll do when you get there is a distraction from starting the journey towards it. Climbing a mountain seems hard, but they're only a few miles high, and perhaps ten times that wide; most of your journey towards it will be on the plain, and that deceptively level terrain is the hardest part. Speculating about what parka to wear on the upper slopes does nothing to get you walking towards that slope; set out on your journey, and you can buy a parka when you're closer."
This bit of armchair wisdom was designed to encapsulate why it's better to start work on your business than it is to speculate on how to grow it into a multibillion-dollar conglomerate. Sure, it's great to have a grand vision, but you don't need to worry about mergers and acquisitions before you've found any customers - if you've never built a business before, that is.
If you are someone who has built many businesses, it's okay to build on your experience to guide your steps - but most of us have not, and our grand dreams can actively get in the way of figuring out how to make our product, how to get it in front of our customers, and how to make it excel in their eyes so that they choose us over the alternatives.
Phew. Strangely enough, that first image was load-bearing: I picked a "random" recent picture for this blog, but it so turned out that our cat had been playing with his catnip laptop right around the time that Sandi and I had been discussing strategies for startups.
Feed your memory with enough cues, sometimes you get a retrieval.
Cogsci out.
-the Centaur
Pictured: Loki, sending emails on his catnip laptop, and resting on his laurels after a hard day at work.
No, this isn't a post about family, though it could easily be adapted to that topic. Nor is it a post about generic togetherness - that's why I said "each other" instead in the title. No, this is a post about how we're often stronger when we take advantage of the strengths of those around us.
Often at work we have our own perspective, and it can be easy to get caught up in making sure that our way is the way that's chosen, and our work is the work that is credited. But if we do, we may miss out on great suggestions from our coworkers, or on the opportunity to benefit from the work of others.
Just today at one of my contracting jobs, I had to present our work on the project so far. While most of the machine learning work on the project was mine, a lot of the foundational analysis on the data was done by one of my coworkers - and I called him out specifically when presenting his graphs.
Then, we came to the realization that collecting the amount of data we would ideally like to have to learn on would literally cost millions of dollars. I presented a few ways out of this dilemma - but then, one of our senior engineers spoke up, trying to brainstorm a simpler solution to the problem.
I'd been hoping that he would speak up - he had shown deep insight earlier in the project, and now, after a few minutes of brainstorming, he came up with a key idea which might enable us to use the software I've already written with the data we've already collected, saving us both time and money.
Afterwards, the coworker whose contributions I'd called out during the meeting hung on the call, trying to sketch out with me how to implement the ideas the senior engineer had contributed. Then, unprompted, he spent an hour or so sending me a sketch of an implementation and a few sample data files.
We got much farther working together and recognizing each others' contributions than we ever would have had we all been coming to the table just with what we brought on our own.
-the Centaur
Pictured: friends and family gathering over the holidays.
tl;dr: get to the point in the first line of your emails, and also in the subject.
"TL;DR" is an acronym meaning "Too Long; Didn't Read" which is used to introduce a quick summary of a longer document - as I did in the first line of this post.
Often when writing an email we are working out our own thoughts of what should be communicated or should happen - which means that the important point usually comes at the end.
But people don't often read to the end. So it's important, when you get to the end of your email, to move the most important point up to the top (which I typically do with the TL;DR tag).
And, even better, if you can put it in the subject line, do that too.
Your email is more likely to work that way.
-the Centaur
Pictured: our wedding dragon lamp, sitting on a side table with our wedding DVD, which is sort of a coincidence; and a very cool light bulb.
Discussed: a topic I swear I've written about in this blog, but I cannot find via searching past posts.
SO! After 17 years at the Google, my last day - finally, my actual last day - was yesterday, March 31st, 2023. They cut off my access January 20th, but out of respect for their employees (and the media, and the law) they gave us a generous +60 day notice period, which ran out yesterday.
I don't regret the time I spent at Google - well, at least not most of it. I learned so much and made so many friends and did so many things - and, frankly speaking, the pay, food and healthcare were quite good. On the one hand, I do think I probably should have taken that job as director of search at a startup back in ~2010; it would have forced me to grow and challenged my assumptions and given me a lot of leadership experience which would have helped my career. But, if I'd done that, I wouldn't have transitioned over to robotics, which is now my principal career; so perhaps it's good I didn't pull on the thread of that tapestry.
But I do regret not being able to code on my own. Virtually everything I could have worked on was technically owned by Google, and if I wanted to open source it, I would need to submit it for invention review - with the chance that they would say no. For a while, you couldn't even work on a game at all if you worked at Google, as Google saw this as a threat to their business model of, ya know, not making games; eventually they realized that was silly, but still, I couldn't take the risk of pouring my heart into something that then Google would claim ownership of.
So no code for you. Or me either.
I know people who built successful businesses as side hustles. While that's efficient, it isn't effective: it leaves you vulnerable to being sued by your employer, or fired by your employer, or both. You can do it, of course, but you're reducing your chance of success in exchange for speed; whereas I like to maximize the chance of success - which requires speed, of course, but not so much you're taking on unnecessary risk. So, for maximum cleanliness, it's best to do things fresh from first principles after you leave.
Which is what I'm going to do now. I don't precisely know what I am going to do, but I do think one useful exercise would be to download all the social navigation benchmarks I've been researching for the Principles and Guidelines benchmark paper, and see how they work and what they can do. Some of the software has ... ahem ... gone stale, but this will be a good exercise for me to test my debugging chops, honed at Google, on external software outside of the "Google3" environment.
Wish me luck!
-the Centaur
Pictured: Fixing a missing install for the package gym-collision-avoidance; given that I'd done a lot of command line development recently for a Stanford class, I think the issue here might have been some missing setup step when I moved to my new laptop, as I'm sure this would have come up before.
Recently, collaborating on a paper, I was convinced that there was a problem in the algorithm we were presenting, and got together with a colleague to discuss it. He saw some of the problems, but had a different take on others, and kept coming back to a minor point about our use of a method in one step.
As we talked, we slowly realized the problem I was raising and the comment he was making, while seemingly unrelated, were actually two sides of the same coin. A minor tweak in the use of a published algorithm, seemingly made just out of necessity to make a demo work, was actually a key, load-bearing innovation that made everything downstream in the algorithm work.
We made the change, and suddenly everything in the paper started to fall into place.
But we'd never have gotten there if we hadn't taken the time to listen to each other.
-the Centaur
Pictured: Nola, the night of another great conversation with a friend.
Because he took Twitter private. Look, I'm not against private companies per se: I'm part of one (Thinking Ink Press) and have started another (Logical Robotics). And I'm not against Elon Musk per se either: I have some criticisms of how he's running Twitter, but those criticisms are not material to my point, and, hey, he has made me a great deal of money over the years as a Tesla and Twitter shareholder, so, perhaps he knows what he's doing in this case (though, based on how it's going, I seriously doubt it.)
No, my issue is, it's not a public company anymore. I strongly believe most large companies should be public, and that I would not work for a large private corporation if I could possibly help it. Private corporations exist to serve their shareholders; public corporations exist to serve the public. We structure them for the benefit of shareholders to encourage people to create companies and improve the economy, but going public places the company under increased oversight to ensure it is serving the public interest.
Public corporations place structure between the shareholders and the business: shareholders elect a board, which selects a CEO, who selects the employees of the company and directs its business. So at a public corporation, both the lowliest employee and the CEO work for the company, not the shareholders.
This insulation creates a great equalizer. In the end, everyone at the company, from the CEO to the mail room temp, are all responsible for serving the company. At a public company, you don't work for your manager; you both work for the company, and you both should act in its best interests.
At a healthy company - a public company - you have the moral right to say, "No, sir, that doesn't work that way," or "No, ma'am, I won't do that; that's harmful to the company." Admittedly, this can get you fired, but you still have the moral right to do it.
At Twitter, however, it's Elon's show. And he has the right to run it the way that he wants - he certainly paid enough for it. So, if I worked at Twitter ... I think I would have to have taken the severance, if offered, because while I will work for a public company, I won't work in a feudal kingdom.
The King can boost his own tweets.
-the Centaur
Pictured: More graffiti, from an undisclosed location.
Another neat little place in downtown Palo Alto. It's amazing how special downtown Palo Alto is: for being part of a vast megalopolis, it's a charming, surprisingly connected downtown with a small-town feel. I ran into at least six people I knew in the short time I was down there tonight, and got an introduction to a robotics group at MIT just by sitting in a chair and talking to some friends.
This is kind of the experience I had when I first came out to the Bay as an intern, 25 years ago (more or less); I had just arrived, was hungry, but restaurants were busy, so I took a seat at a restaurant bar, the only space available ... but no sooner had I sat down than I got offered a job.
Well, technically, I sat down and cracked open a very technical book, and the person sitting next to me didn't offer me a job, but did give me their card and let me know their startup was hiring.
Congratulations, Sir Richard Branson, on your successful space flight! (Yes, yes, I *know* it's technically just upper atmosphere, I *know* there's no path to orbit (yet) but can we give the man some credit for an awesome achievement?) And I look forward to Jeff Bezos making a similar flight later this month.
Now, I stand by my earlier statement: the way you guys are doing this, a race, is going to get someone killed, perhaps one of you guys. A rocketship is not a racecar, and moves into realms of physics where we do not have good human intuition. Please, all y'all, take it easy, and get it right.
That being said, congratulations on being the first human being to put themselves into space as part of a rocket program that they themselves set in motion. That's an amazing achievement, no-one can ever take that away from you, and maybe that's why you look so damn happy. Enjoy it!
-the Centaur
P.S. And day 198, though I'll do an analysis of the drawing at a later time.
You know, Jeff Bezos isn’t likely to die when he flies July 20th. And Richard Branson isn’t likely to die when he takes off at 9am July 11th (tomorrow morning, as I write this). But the irresponsible race these fools have placed them in will eventually get somebody killed, as surely as Elon Musk’s attempt to build self-driving cars with cameras rather than lidar was doomed to (a) kill someone and (b) fail. It’s just, this time, I want to be caught on record saying I think this is hugely dangerous, rather than grumbling about it to my machine learning brethren.
Whether or not a spacecraft is ready to launch is not a matter of will; it's a matter of natural fact. This is actually the same as many other business ventures: whether we're deciding to create a multibillion-dollar battery factory or simply open a Starbucks, our determination to make it succeed has far less to do with its success than the realities of the market - and its physical situation. Either the market is there to support it and the machinery will work, or they aren't and it won't.
But with normal business ventures, we’ve got a lot of intuition, and a lot of cushion. Even if you aren’t Elon Musk, you kind of instinctively know that you can’t build a battery factory before your engineering team has decided what kind of battery you need to build, and even if your factory goes bust, you can re-sell the land or the building. Even if you aren't Howard Schultz, you instinctively know it's smarter to build a Starbucks on a busy corner rather than the middle of nowhere, and even if your Starbucks goes under, it won't explode and take you out with it.
But if your rocket explodes, you can't re-sell the broken parts, and it might very well take you out with it. Our intuitions do not serve us well when building rockets or airships, because they're not simple things operating in human-scaled regions of physics, and we don't have a lot of cushion with rockets or self-driving cars, because they're machinery that can kill you, even if you've convinced yourself otherwise.
The reasons behind the likelihood of failure are manifold here, and worth digging into in greater depth; but briefly, they include:
The Paradox of the Director's Foot, where a leader's authority over safety personnel - and their personal willingness to take on risk - ends up short-circuiting safety protocols and causing accidents. This actually happened to me personally when two directors in a row had a robot run over their foot at a demonstration, and my eagle-eyed manager recognized that both of them had stepped into the safety enclosure to question the demonstrating engineer, forcing the safety engineer to take over audience questions - and all three took their eyes off the robot. Shoe leather degradation then ensued, for both directors. (And for me too, as I recall).
The Inexpensive Magnesium Coffin, where a leader's aesthetic desire to have a feature - like Steve Jobs's desire for a magnesium case on the NeXT machines - led them to ignore feedback from engineers that the case would be much more expensive. Steve overrode his engineers ... and made the NeXT more expensive, just as they said it would be, because wanting the case didn't make it cheaper. That extra cost led to the product's demise - that's why I call it a coffin. Elon Musk's insistence on using cameras rather than lidar on his self-driving cars is another Magnesium Coffin - an instance of ego and aesthetics overcoming engineering and common sense, which has already led to real deaths. I work in this precise area - teaching robots to navigate with lidar and vision - and vision-only navigation is just not going to work in the near term. (Deploy lidar and vision, and you can drop lidar within the decade with the ground-truth data you gather; try going vision alone, and you're adding another decade.)
Egotistical Idiot's Relay Race (AKA Lord Thomson's Suicide by Airship). Finally, the biggest reason for failure is the egotistical idiot's relay race. I wanted to come up with some nice, catchy parable name to describe why the Challenger astronauts died, or why the USS Macon crashed, but the best example is a slightly older one, the R101 disaster, which is notable because the man who started the R101 airship program - Lord Thomson - also rushed the program so he could make a PR trip to India, with the consequence that the airship was certified for flight without completing its endurance and speed trials. As a result, on that trip to India - its first long distance flight - the R101 crashed, killing 48 of the 54 passengers - Lord Thomson included. Just to be crystal clear here, it's Richard Branson who moved up his schedule to beat Jeff Bezos' announced flight, so it's Sir Richard Branson who is most likely up for a Lord Thomson's Suicide Award.
I don't know if Richard Branson is going to die on his planned spaceflight tomorrow, and I don't know that Jeff Bezos is going to die on his planned flight on the 20th. I do know that both are in an Egotistical Idiot's Relay Race for even trying, and the fact that they're willing to go up themselves, rather than sending test pilots, safety engineers or paying customers, makes the problem worse, as they're vulnerable to the Paradox of the Director's Foot; and with all due respect to my entire dot-com tech-bro industry, I'd be willing to bet the way they're trying to go to space is an oversized Inexpensive Magnesium Coffin.
-the Centaur
P.S. On the other hand, when SpaceX opens for consumer flights, I'll happily step into one, as Musk and his team seem to be doing everything more or less right there, as opposed to Branson and Bezos.
P.P.S. Pictured: Allegedly, Jeff Bezos, quick Sharpie sketch with a little Photoshop post-processing.
... came up as my wife and I were discussing the "creative hangers-on form" of Stigler's Law. The original Stigler's Law, discovered by Robert Merton and popularized by Stephen Stigler, is the idea that in science, no discovery is named after its original discoverer.
In creative circles, it comes up when someone who had little or nothing to do with a creative process takes credit for it. A few of my wife's friends were like this, dropping by to visit her while she was in the middle of a creative project, describing out loud what she was doing, then claiming, "I told her to do that."
In the words of Finn from The Rise of Skywalker: "You did not!"
In computing circles, the old joke referred to the Java programming language. I've heard several variants, but the distilled version is "He thinks he invented Java because he was in the room when someone made coffee." Apparently this is a good description of how Java itself was named: at least one person claims to have come up with the name Java, while others dispute that - some even suggest that they opposed it - and credit someone else in the room instead, while that person in turn rejects the idea, noting only that there was some coffee in the room from Peet's.
What, did you think I was not going to do Drawing Every Day just because I did a Photoshop graphic for the Lent entry?
So, today's exercise was something very difficult for me: abandoning a failed rough and starting over.
You see, many artists that I know will get sucked into perfecting a drawing that has some core flaw in its bones - this is something I ran into with my Batman cover page. I know one artist who has worked over a handful of difficult paintings for literally 2-3 years ... but who can produce dozens of new paintings for a show at the drop of a hat. But it's hard emotionally to let go of the investment in a partially finished piece.
This is tied up with the Sunken Cost Fallacy Fallacy, the false idea that if you've decided a venture has failed you should cut your losses despite your prior investment in it. This is based on the very real idea of sunk costs - costs expended that cannot be recovered - which should not be factored into rational decisions the way that prospective costs - costs that can be avoided by taking action - should be. The "Sunken Cost Fallacy" comes in when people don't cut their losses in a failed venture.
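To put numbers on the sunk-versus-prospective distinction, here's a toy calculation; the should_continue helper and the dollar figures are invented purely for illustration:

```python
def should_continue(expected_value, prospective_cost, sunk_cost=0):
    # A rational go/no-go decision compares what the venture will
    # return against what it will still cost to finish. The sunk_cost
    # parameter is accepted only to emphasize that it plays no role
    # in the comparison.
    return expected_value > prospective_cost

# A painting expected to sell for $500 that needs $100 more in
# materials is worth finishing, whether $50 or $5000 has already
# been spent on it.
assert should_continue(500, 100, sunk_cost=5000)

# But past spending can't rescue a piece that still costs more to
# finish than it will ever return.
assert not should_continue(500, 800, sunk_cost=5000)
```

The point of the asymmetry: the sunk figure changes how the decision *feels*, but only the prospective numbers change what the decision *should be*.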
The "Fallacy Fallacy" part kicks in because in the real world costs do not become sunk as a result of your decisions. When a self-proclaimed "decider"(1) chooses to proclaim that a project is a failure, the value invested in the project doesn't magically become nonrecoverable based on that decision and the classical Sunken Cost Fallacy does not apply. I have seen a private company literally throw away a two million dollar investment for a dollar because the owner didn't want to deal with it anymore.
Fortunately, most artists are better businessmen than that. Deep down, they know any painting could be the ONE that gets them seen; deeper down, each painting is an expression of their creativity. Even if a painting has flaws, one never knows whether the piece might be fixable, or even ultimately excel. I have seen paintings go through years of work and many difficulties, only to finally turn out amazing. Drawings, paintings and novels are like investments in that way, always tantalizing us with their future potential.
But, deep down, I feel like it's possible to do better than that - that by painting or drawing more, and being more ruthless earlier in the process, it's possible to recognize wrong turns and truly sunken costs and to start over. Once a huge canvas has been covered with paint over many months, or a large manuscript has been filled with words over an equal period of time, it represents an investment in images and ideas that can potentially be salvaged ... but a sketch or outline, now, that you can throw out straightaway.
You may not get the thirty minutes doing the sketch back, but at least you'll be starting in a better place.
In my case, I was starting here, with the cover of Steampunk Gear, Gadgets and Gizmos, which I had lying about:
I started what I intended to be a quick sketch, and got partway into the roughs ...
... when I decided that the shape of the face was off - and the proportions of the arm were even further off. I started to fix it - you can see a few doubled features like eyes and lips in there - but I decided - ha, decided - no, stop, STOP Anthony, this rough is too far gone.
Start over, and look more closely at what you see this time.
That led to the drawing at the top of the entry. There were still problems with the finished piece - I am continuing to have trouble with tilting heads the wrong way, and something went wrong with the shape of the arm, leading to a too-narrow, too-long wrist - but the bones of the sketch were so much better than the first attempt that it was easy to finish the drawing.
And thus, keep up drawing every day.
-the Centaur
(1) I'm not bitter.
So, this happened! Our team's paper on "PRM-RL" - a way to teach robots to navigate their worlds which combines human-designed algorithms that use roadmaps with deep-learned algorithms to control the robot itself - won a best paper award at the ICRA robotics conference!
I talked a little bit about how PRM-RL works in the post "Learning to Drive ... by Learning Where You Can Drive", so I won't go over the whole spiel here. The basic idea is that we've gotten good at teaching robots to control themselves using a technique called deep reinforcement learning (the RL in PRM-RL), which trains them in simulation, but it's hard to extend this approach to long-range navigation problems in the real world. We overcome this barrier by using a more traditional robotic approach, probabilistic roadmaps (the PRM in PRM-RL), which build maps of where the robot can drive using point-to-point connections. We combine these maps with the robot simulator and, boom, we have a map of where the robot thinks it can successfully drive.
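To make the division of labor concrete, here's a toy sketch of the roadmap-building idea in Python. Everything in it is a hypothetical illustration, not the actual PRM-RL code: in particular, policy_can_reach stands in for rolling out the learned policy in the robot simulator, which here is faked with a simple distance check.

```python
import math
import random
from collections import deque

def policy_can_reach(a, b, max_range=0.3):
    # Stand-in for a policy rollout: pretend the learned point-to-point
    # policy reliably covers short hops. In real PRM-RL this check is
    # an actual simulation of the policy driving from a to b.
    return math.dist(a, b) <= max_range

def build_roadmap(n_samples=200, seed=0):
    # The "PRM" part: sample random configurations in a unit square...
    rng = random.Random(seed)
    nodes = [(rng.random(), rng.random()) for _ in range(n_samples)]
    # ...and add an edge between two nodes only if the policy check
    # succeeds, so the roadmap encodes where the robot can drive.
    edges = {i: [] for i in range(n_samples)}
    for i in range(n_samples):
        for j in range(i + 1, n_samples):
            if policy_can_reach(nodes[i], nodes[j]):
                edges[i].append(j)
                edges[j].append(i)
    return nodes, edges

def roadmap_path(edges, start, goal):
    # Long-range navigation becomes graph search over the roadmap;
    # plain breadth-first search suffices for this sketch.
    frontier, parent = deque([start]), {start: None}
    while frontier:
        node = frontier.popleft()
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for nxt in edges[node]:
            if nxt not in parent:
                parent[nxt] = node
                frontier.append(nxt)
    return None  # goal unreachable from start on this roadmap
```

The key point the sketch captures: the learned policy is trusted only for the short hops it was trained on, while classical graph search handles the long-range structure - which is what makes the long-range problem tractable.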
We were cited not just for this technique, but for testing it extensively in simulation and on two different kinds of robots. I want to thank everyone on the team - especially Sandra Faust for her background in PRMs and for taking point on the idea (and doing all the quadrotor work with Lydia Tapia), Oscar Ramirez and Marek Fiser for their work on our reinforcement learning framework and simulator, Kenneth Oslund for his heroic last-minute push to collect the indoor robot navigation data, and our manager James for his guidance, contributions to the paper and support of our navigation work.
Woohoo! Thanks again everyone!
-the Centaur