Press "Enter" to skip to content

The Centaur’s Guide to the Game Developers Conference


gdc2013logo.png

Once again it’s time for GDC, the Game Developers Conference. This annual kickstart to my computational creativity is held in the Moscone Center in San Francisco, CA and attracts roughly twenty thousand developers from all over the world.

I’m interested primarily in artificial intelligence for computer games – “Game AI” – and in the past few years GDC has had an AI Summit where game AI programmers can get together to hear neat talks about progress in the field.

Coming from an Academic AI background, what I like about Game AI is that it can’t not work. The AI for a game must work, come hell or high water. It doesn’t need to be principled. It doesn’t need to be real. It can be a random number generator. But it needs to appear to work—it has to affect gameplay, and users have to notice it.

gdc2013aisummit.png

That having been said, there are an enormous number of things becoming standard in game artificial intelligence – agents and their properties, actions and decision algorithms, pathfinding and visibility, multiple agent interactions, animation and intent communication, and so forth – and they’re getting better all the time.

I know this is what I’m interested in, so I go to the AI Summit on Monday and Tuesday, some subset of the AI Roundtables, other programming, animation, and tooling talks, and if I can make it, the AI Programmer’s Dinner on Friday night. But if game AI isn’t your bag, what should you do? What should you see?

gdc2013people.png

If you haven’t been before, GDC can be overwhelming. Obviously, try to go to talks that you like, but how do you navigate this enormous complex in downtown San Francisco? I’ve blogged about this before, but it’s worth a refresher. Here are a few tips that I’ve found improve my experience.

Get your stuff done before you arrive. There is a LOT to see at GDC, and every year it seems that a last-minute videoconference bleeds over into some talk that I want to see, or some programming or writing task bumps the timeslot I set aside for a blogpost. Try to get that stuff done before you arrive.

Build a schedule before the conference. You’ll change your mind the day of, but GDC has a great schedule builder that lets you quickly and easily find candidate talks. Use it, email yourself a copy, print one out, save a PDF, whatever. It will help you know where you need to go.

Get a nearby hotel. The 5th and Minna Garage near GDC is very convenient, but driving there, even just in the City, is a pain. The GDC hotel blocks sell out several months in advance, but if you hunt on Expedia or your favorite aggregator you might find something. Read the reviews carefully and double-check with Yelp so you don’t get bedbugs or mugged.

Check in the day before. Stuff starts really early, so if you want to get to early talks, don’t even bother to fly in the same day. I know this seems obvious, but this isn’t a conference that starts at 5pm on the first day with a reception. The first content-filled talks start at 10am on Monday. Challenge mode: you can check in Sunday if you arrive early enough.

mozcafe.png

Leave early, find breakfast. Some people don’t care about food, and there are snacks onsite. Grab a croissant and cola, or banana and coffee, or whatever. But if you power up via a good hot breakfast, there are a number of great places to eat nearby – the splendiferous Mo’z Café and the greasy spoon Mel’s leap to mind, but hey, Yelp. A sea of GDC people will be there, and even if you don’t find someone to strike up a conversation with, you’ll have the opportunity to network, people-watch, and go through your schedule again.

Ask people who’ve been before what they recommend. This post got started when I left early, got breakfast at Mo’z, and then let some random dude sit down at the table opposite me because the place was too crowded. He didn’t want to disturb my reading, but we talked anyway, and he admitted: “I’ve never been before. What do I do?” Well, I gave him some advice … and then packaged it up into this blogpost. (And this one.)

Network, network, network. Bring business cards. (I am so bad at this!) Take business cards. Introduce yourself to people (but don’t be pushy). Ask what they’re up to. Even if you are looking for a job, you’re not looking for a job: you want people to get to know you first before you stick your hand out. Even if you’re not really looking for a job, you are really looking for a job, three, five or ten years later. I got hired into the Search Engine that Starts with a G from GDC … and I wasn’t even looking.

Learn, learn, learn. Find talks that look like they may answer questions related to problems that you have in your job. Find talks that look directly related to your job. Find talks that look vaguely related to your job. Comb the Expo floor looking for booths that have information even remotely related to your job. Scour the GDC Bookstore for books on anything interesting – but while you’re here: learn, learn, learn.

gdc2013expofloor.png

Leave early if you want lunch or dinner. If you don’t care about a quiet lunch, or you’ve got a group of friends you want to hang with, or colleagues you need to meet with, or have found some people you want to talk to, go with the flow, and feel comfortable using your 30-minute wait to network. But if you’re a harried, slightly antisocial writer with not enough hours in the day needing to work on his or her writing projects aaa aaa they’re chasing me, then leave about 10 minutes before the lunch or dinner rush to find food. Nearby places just off the beaten path like the enormous Chevy’s or the slightly farther ’wichcraft are your friends.

Find groups or parties or events to go to. I usually have an already booked schedule, but there are many evening parties. Roundtables break up with people heading to lunch or dinner. There may be guilds or groups or clubs or societies relating to your particular area; find them, and find out where they meet or dine or party or booze. And then network.

gdc2013roundtables.png

Hit Roundtables in person; hit the GDC Vault for conflicts. There are too many talks to go to. Really. You’ll have to make sacrifices. Postmortems on classic games are great talks to go to, but pro tip: the GDC Roundtables, where seasoned pros jam with novices trying to answer their questions, are not generally recorded. Almost all other talks end up on the GDC Vault, a collection of online recordings of past sessions, which is expensive unless you…

Get an All Access Pass. Yes, it is expensive. Maybe your company will pay for it; maybe it won’t. But if you really are interested in game development, it’s totally worth it. Bonus: if you come back from year to year, you can get an Alumni discount if you order early. Double bonus: it comes with a GDC Vault subscription.

gdc2013chevys.png

Don’t Commit to Every Talk. There are too many talks to go to. Really. You’ll have to make sacrifices. Make sure you hit the Expo floor. Make sure you meet with friends. Make sure you make an effort to find some new friends. Make time to see some of San Francisco. Don’t wear yourself out: go to as much as you can, then soak the rest of it in. Give yourself a breather. Give yourself an extra ten minutes between talks. Heck, leave a talk if it isn’t panning out, and find a more interesting one.

Get out of your comfort zone. If you’re a programmer, go to a design talk. If you’re a designer, go to a programming talk. Both of you could probably benefit from sitting in on an audio or animation talk, or from getting more details about production. What did I say about learn, learn, learn?

Most importantly, have fun. Games are about fun. Producing them can be hard work, but GDC should not feel like work. It should feel like a grand adventure, where you explore parts of the game development experience you haven’t before, an experience of discovery where you recharge your batteries, reconnect with your field, and return home eager to start coding games once again.

-the Centaur

Pictured: The GDC North Hall staircase, with the mammoth holographic projected GDC logo hovering over it. Note: there is no mammoth holographic projected logo. After that, breakfast at Mo'z, the Expo floor, the Roundtables, and lunch at Chevy's.

An open letter to people who do presentations


presentations.png

I’ve seen many presentations that work: presentations with a few slides, with many slides, with no slides. Presentations with text-heavy slides, with image-heavy slides, with a few bullet points, even with hand-scrawled ones. Presentations done almost entirely as a sequence of demos; presentations given off the cuff sans microphone.

But there are a lot of things that don’t work in presentations, and I think it comes down to one root problem: presenters don’t realize they are not their audience. You should know, as a presenter, that you aren’t your audience: you’re presenting, they’re listening, you know what you’re going to say, they don’t.

But recently, I’ve seen evidence otherwise. Presenters who seem to think you know what they’re thinking. Presenters who seem to think you have access to their slides. Presenters who seem to think you are in on every private joke they tell. Presenters who seem to think the audience is not only standing on the podium with them, but is like them in every way – and likes them as well.

Look, let’s be honest. Everyone is unique, and as a presenter, you’re more unique than everyone else. [unique |yo͞oˈnēk| adj, def (2): distinctive, remarkable, special, or unusual: a person unique enough to give him a microphone for forty-five minutes]. So your audience is not like you — or they wouldn’t have given you a podium. The room before that podium is filled with people all different from you.

How are they different?

  • First off, they don’t have your slides. Fine, you can show them to them. But they haven’t read your slides. They don’t know what’s on your slides. They can’t read them as fast as you can flip through them. Heck, you can’t read them as fast as you can flip through them. You have to give the audience time to read your slides.

  • Second, they don’t know what you know. They can’t read slides which are elliptical and don’t get to the point. They can’t read details printed only in your slide notes. They can’t read details only on your web site. The only thing they get is what you say and show. If you don’t say it or show it, the audience won’t know it.
  • Third, they probably don’t know you. But that’s not an excuse to pour your heart and soul into your presentation, and it’s especially not a reason to pour your heart and soul into your bio slide. Your audience does not want to get to know you. They want to know what you know. That’s a reason to pour into your talk what they came to hear.
  • Fourth, your audience may not even like you. That’s not your fault: they probably don’t know you. But that’s not an excuse to sacrifice content for long, drawn-out jokes. Your audience isn’t there to be entertained by you. We call that standup. Humor is an important part of presentations, but only as a balanced part. We don’t call a pile of sugar a meal; we call it an invitation to hyperglycemic shock.
  • Fifth, your audience came to see other people besides you. You showed up to give your presentation; they came to see a sequence of them. So, after following a too-fast presentation where the previous too-fast presenter popped up a link to his slide notes, please, for the love of G*d, don’t hop up on stage and immediately slap up your detailed bio slide before we’ve had time to write down the tiny URL.

Look, I don’t want to throw a lot of rules at you. I know some people say “no more than 3 bullets per slide, no more than 1 slide per 2 minutes” but I’ve seen Scott McCloud give a talk with maybe triple that density, and his daughter Sky McCloud is even faster and better. There are no rules. Just use common sense.

  • Don’t jam a 45-minute talk into 25 minutes. Cut something out.
  • Don’t show a 10-minute funny video at a technical conference. Cut it in half.
  • Don’t leap up on stage to show your bio slide before the previous presenter is done talking. Wait for people to write down the URL for the previous talk’s slides.
  • Don’t “let the audience drive the talk with questions.” They came to hear your efforts to distill your wisdom, not your off-the-cuff answers to irrelevant questions from the audience.
  • Don’t end without leaving time for questions. Who knows, you may have made a mistake.

Ok. That’s off my chest.

Now to dive back into the fray…

-the Centaur

Pictured: A slide from ... axually a pretty good talk at GDC, not one of the ones that prompted the letter above.

Back to the Future with the Old Reader


theoldreader.png

As I mentioned in a previous post, Google Reader is going away. If you don't use RSS feeds, this service may be mystifying to you, but think of it this way: imagine if, instead of getting a bunch of randomized micro-posts on Facebook, Google+ or Twitter, you could get a steady stream of high-quality articles just from the people you like and admire. Yeah. RSS. It's like that.

So anyway, the Reader shutdown. I have a lot of thoughts about that, as do many other people, but the first one is: what the heck do I do? I use Reader on average about seven times a day. I'm certainly not going to hope Google changes their mind, and even if they do, my trust is gone. Fortunately, there are a number of alternatives, which people have blogged about here and here.

The one I want to report on today is The Old Reader, the first one I tried. AWESOME. In more detail, this is what I found:

  • It has most, though not all, features of Google Reader. It's got creaky corners that sometimes make it look like features are broken, but as I've dug into it, almost everything is there and works pretty great.
  • It was able to import all the feeds I exported via Google Takeout. Their servers are pretty slow, so it actually took a few days, and they did it in two passes. But they sent me an email when it was done, and they got everything.
  • The team is insanely responsive. They're just three guys - but when I found a problem with the Add Subscription button, they fixed it in just a couple of days. Amazing. More responsive than other companies I know.

There are drawbacks, most notably: they don't yet have an equivalent for Google Takeout's OPML export. But, they are only three guys. They just started taking money, which is a good sign that they might stay around. Here's hoping they are able to build a business on this, and that they have the same commitment to openness that Google had.
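One mitigation while they catch up: the feed list you get out of Google Takeout is just an OPML file, so you can always inspect or rebuild your subscription list yourself. Here's a minimal Python sketch, assuming the usual subscriptions.xml filename from Takeout:

    # Minimal sketch: list the feeds in an OPML export from Google Takeout.
    # Assumes the usual "subscriptions.xml" filename; adjust the path to taste.
    import xml.etree.ElementTree as ET

    tree = ET.parse("subscriptions.xml")
    for outline in tree.iter("outline"):
        url = outline.get("xmlUrl")  # feed entries carry xmlUrl; folder outlines don't
        if url:
            print(outline.get("title") or outline.get("text"), "->", url)

That's also a cheap insurance policy: keep the OPML file around, and you can walk into any feed reader that survives the apocalypse.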

I plan to try other feed readers, as I can't be trapped into one product as I was before, but kudos to The Old Reader team for quickly and painlessly rescuing me from the First Great Internet Apocalypse of 2013. I feel like I'm just using Reader, except now I have a warm fuzzy that my beloved service isn't going to get neglected until it withers away.

-the Centaur

Context-Directed Spreading Activation

netsphere.png

Let me be completely up front about my motivation for writing this post: recently, I came across a paper which was similar to the work in my PhD thesis, but applied to a different area. The paper didn’t cite my work – in fact, its survey of related work in the area seemed to indicate that no prior work along the lines of mine existed – and when I alerted the authors to the omission, they informed me they’d cited all relevant work, and claimed “my obscure dissertation probably wasn’t relevant.” Clearly, I haven’t done a good enough job articulating or promoting my work, so I thought I should take a moment to explain what I did for my doctoral dissertation.

My research improved computer memory by modeling it after human memory. People remember different things in different contexts based on how different pieces of information are connected to one another. Even a word as simple as ‘ford’ can call different things to mind depending on whether you’ve bought a popular brand of car, watched the credits of an Indiana Jones movie, or tried to cross the shallow part of a river. Based on that human phenomenon, I built a memory retrieval engine that used context to remember relevant things more quickly.

My approach was based on a technique I called context-directed spreading activation, which I argued was an advance over so-called “traditional” spreading activation. Spreading activation is a technique for finding information in a kind of computer memory called a semantic network, which models relationships in the human mind. A semantic network represents knowledge as a graph, with concepts as nodes and relationships between concepts as links, and traditional spreading activation finds information in that network by starting with a set of “query” nodes and propagating “activation” out on the links, like current in an electric circuit. The current that hits each node in the network determines how highly ranked the node is for a query. (If you understand circuits and spreading activation, and this description caused you to catch on fire, my apologies. I’ll be more precise in future blogposts. Roll with it.)

The problem is, as semantic networks grow large, there’s a heck of a lot of activation to propagate. My approach, context-directed spreading activation (CDSA), cuts this cost dramatically by making activation propagate over fewer types of links. In CDSA, each link has a type, each type has a node, and activation propagates only over links whose type nodes are active (to a very rough first approximation, although in my evaluations I tested about every variant of this under the sun).

Propagating over active links isn’t just cheaper than spreading activation over every link; it’s smarter: the same “query” nodes can activate different parts of the network, depending on which “context” nodes are active. So, if you design your network right, Harrison Ford is never going to occur to you if you’ve been thinking about cars.
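To make the difference concrete, here’s a toy sketch in Python. This is not the code from my thesis: the names and numbers are made up for illustration, the “context” here is boiled down to a set of active link-type nodes, and a real implementation worries about decay, fan-out, and repeated propagation steps.

    # Toy semantic network: each link has a type, and in CDSA each link type
    # has its own node. For illustration, the "context" below is just the set
    # of link-type nodes that are currently active.
    links = [  # (source, link type, target)
        ("ford", "brand-of", "car"),
        ("ford", "surname-of", "harrison-ford"),
        ("ford", "crossing-of", "river"),
    ]

    def spread(query, context=None):
        """One step of spreading activation out from the query nodes.

        With context=None this is traditional spreading activation: activation
        flows over every link. With a context, activation flows only over links
        whose type node is active, which is the heart of CDSA.
        """
        activation = {node: 1.0 for node in query}
        for source, link_type, target in links:
            if source not in query:
                continue
            if context is not None and link_type not in context:
                continue  # CDSA: this link's type node is inactive, so skip it
            activation[target] = activation.get(target, 0.0) + 0.5
        return activation

    print(spread({"ford"}))                # all three senses of "ford" light up
    print(spread({"ford"}, {"brand-of"}))  # thinking about cars: only the car sense

Even in this toy, the same query node ranks different targets depending on which context is active, and the context-directed version touches fewer links as the network grows.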
I was a typical graduate student, and I thought my approach was so good, it was good for everything—so I built an entire cognitive architecture around the idea. (Cognitive architectures are general reasoning systems, normally built by teams of researchers, and building even a small one is part of the reason my PhD thesis took ten years, but I digress.) My cognitive architecture was called context-sensitive asynchronous memory (CSAM), and it automatically collected context while the system was thinking, fed it into the context-directed spreading activation system, and incorporated dynamically remembered information into its ongoing thought processes using patch programs called integration mechanisms. CSAM wasn’t just an idea: I built it out into a computer program called Nicole, and even published a workshop paper on it in 1997 called “Can Your Architecture Do This? A Proposal for Impasse-Driven Asynchronous Memory Retrieval and Integration.”

But to get a PhD in artificial intelligence, you need more than a clever idea you’ve written up in a paper or implemented in a computer program. You need to use the program you’ve written to answer a scientific question. You need to show that your system works in the domains you claim it works in, that it can solve the problems that you claim it can solve, and that it’s better than other approaches, if other approaches exist.

So I tested Nicole on computer planning systems and showed that integration mechanisms worked. Then a colleague and I tested Nicole on a natural language understanding program and showed that memory retrieval worked. But the most important part was showing that CDSA, the heart of the theory, didn’t just work, but was better than the alternatives. I did a detailed analysis of the theory of CDSA and showed it was better than traditional spreading activation in several ways—but that rightly wasn’t enough for my committee. They wanted an example. There were alternatives to my approach, and they wanted to see that my approach was better than the alternatives for real problems.

So I turned Nicole into an information retrieval system called IRIA—the Information Retrieval Intelligent Assistant. By this time, the dot-com boom was in full swing, and my thesis advisor invited me and another graduate student to join him in starting a company called Enkia. We tried many different concepts to start with, but the further we went, the more IRIA seemed to have legs. We showed she could recommend useful information to people while browsing the Internet. We showed several people could use her at the same time and get useful feedback. And critically, we showed that by using context-directed spreading activation, IRIA could retrieve better information faster than traditional spreading activation approaches.

The first publication on IRIA came out in 2000, shortly before I finished my PhD thesis, and at the company things were going gangbusters. We found customers for the idea, and my more experienced colleagues and I turned the IRIA program from a typical graduate student mess into a more disciplined and efficient system called the Enkion, a process we documented in a paper in early 2001. We even launched a search site called Search Orbit—and then the whole dot-com disaster happened, and the company essentially imploded.

Actually, that’s not fair: the company continued for many years after I left—but I essentially imploded, and if you want to know more about that, read “Approaching 33, Seen from 44.” Regardless, the upshot is that I didn’t follow up on my thesis work after I finished my PhD. That happens to a lot of PhD students, but for me in particular I felt that it would have been betraying the trust of my colleagues to go publish a sequence of papers on the innards of a program they were trying to use to run their business. Eventually, they moved on to new software, but by that time, so had I.
Fast forward to 2012, and while researching an unrelated problem for The Search Engine That Starts With A G, I came across the 2006 paper “Recommending in context: A spreading activation model that is independent of the type of recommender system and its contents” by Alexander Kovács and Haruki Ueno. At Enkia, we’d thought of doing recommender systems on top of the Enkion, and had even started to build a prototype for Emory University, but the idea never took off and we never generated any publications, so at first, I was pleased to see someone doing spreading activation work in recommender systems.

Then I was unnerved to see that this approach also involved spreading activation, over a typed network, with nodes representing the types of links, and activation in the type nodes changing the way activation propagated over the links. Then I was unsettled to see that my work, which is based on a similar idea and predates their publication by almost a decade, was not cited in the paper. Then I was actually disturbed when I read: “The details of spreading activation networks in the literature differ considerably. However, they’re all equal with respect to how they handle context … context nodes do not modulate links at all…”

If you were to take that at face value, the work that I did over ten years of my life—work which produced four papers, a PhD thesis, and at one point helped employ thirty people—did not exist.

Now, I was also surprised by some spooky similarities between their systems and mine—their system is built on a context-directed spreading activation model, mine is a context-directed spreading activation model; theirs is called CASAN, mine is embedded in a system called CSAM—but as far as I can see there’s NO evidence that their work was derivative of mine. As Chris Atkinson said to a friend of mine (paraphrased): “The great beam of intelligence is more like a shotgun: good ideas land on lots of people all over the world—not just on you.”

In fact, I’d argue that their work is a real advance to the field. Their model is similar, not identical, and their mathematical formalism uses more contemporary matrix algebra, making the relationship to related approaches like PageRank more clear (see Google’s PageRank and Beyond). Plus, they apparently got their approach to work on recommender systems, which we did not; IRIA did more straight-up recommendation of information in traditional information retrieval, which is a similar but not identical problem.

So Kovács and Ueno’s “Recommending in Context” paper is a great paper and you should read it if you’re into this kind of stuff. But, to set the record straight, and maybe to be a little bit petty: there are a number of spreading activation systems that do use context to modulate links in the network … most notably mine.

-the Centaur

Pictured: a tiny chunk of the WordNet online dictionary, which I’m using as a proxy of a semantic network. Data processing by me in Python, graph representation by the GraphViz suite’s dot program, and postprocessing by me in Adobe Photoshop.

A Ray of Hoops


rayofhope.png

So, after my scare over almost losing 150+ files on Google Drive, I've made some progress on integrating Google Drive and Dropbox using cloudHQ. The reason it wasn't completely seamless is that I use both Google Drive and Dropbox on my primary personal laptop, and cannot afford to have two copies of all files on this one machine. The other half of this problem is that if you only set up partial sync of certain folders, then any new files added to the top folder of Google Drive or Dropbox won't get replicated - and believe it or not, that's already happened to me. So I need a "reliable scheme" I can count on.

The solution? Set up a master folder on Google Drive called "Replicated", in which everything that I want to keep - all my Google Docs, in particular - will get copied to a folder of the same name called "Replicated" in Dropbox. For good measure, set up another replication pair for the Shared folder of Google Drive. The remaining files, all the Pictures I've stored because of Google Drive's great bang for the buck storage deal, don't need to be replicated here.

The reason this works is that if you obey the simple anal-retentive policy of creating all your Google Docs within a named folder, and you put all your named folders under Replicated, then they all automatically get copied to Dropbox as documents. I've even seen it in action, as I edit Google Docs and Dropbox informs me that new copies of documents in Microsoft Word .docx format are appearing in my drive. Success!
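Since the weak point of any scheme like this is files that land outside the replicated pair - or quietly fail to sync - a dumb cross-check helps. Here's a minimal Python sketch, with paths that assume a typical Mac setup; it compares names without extensions, since Google Docs show up on the Dropbox side as .docx:

    # Dumb cross-check: list files under Google Drive's "Replicated" folder
    # that haven't appeared under Dropbox's "Replicated" folder yet.
    # The paths are assumptions for a typical Mac setup; adjust as needed.
    import os

    DRIVE = os.path.expanduser("~/Google Drive/Replicated")
    DROPBOX = os.path.expanduser("~/Dropbox/Replicated")

    def base_names(root):
        """Relative paths minus extensions, so foo.gdoc matches foo.docx."""
        names = set()
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                rel = os.path.relpath(os.path.join(dirpath, name), root)
                names.add(os.path.splitext(rel)[0])
        return names

    for path in sorted(base_names(DRIVE) - base_names(DROPBOX)):
        print("not yet in Dropbox:", path)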

At last, I've found a way to reliably use the Google Drive cloud. Google doesn't always support the features you want, or the patterns of usage that you want, but they're deeply committed to open APIs, to data liberation, and to the creation of third-party applications that enable you to fill the gaps in Google's services so that you aren't locked in to one solution.

Breaking News: Google Reader canceled. G*d dammit, Google…

Next up: after my scare of losing Google Reader, a report on my progress using The Old Reader to rescue my feeds...

-the Centaur

Pictured: A table candle at Cascal's in Mountain View, Ca...

Rescuing Google Drive?


IMG_20121029_235958.jpg

Ok, the above is a rescue cat, but the point remains. In an earlier post I understandably got a bit miffed when moving a folder within Google Drive - an operation I've done before, many times - mysteriously deleted over a hundred and fifty files. I was able to rescue them, but I felt like I couldn't trust Google Drive - a feeling confirmed when the very next time I used it to collect some quick notes, the application crashed.

But I love the workflow of Google Drive - the home page of Google Drive can show you, very very quickly, either your hierarchy of folders, your recently accessed files, or a search of all your files, and once you've found a file it opens far quicker than it would in most normal applications like Microsoft Word, Microsoft Excel, or Photoshop. Word, Excel and Photoshop kick Google Drive's ass on specialized uses, but many documents don't need that, and Google Drive is a great alternative.

But what about files disappearing? That's a non-starter. However, there are ways around the problem.

Google Drive of course has the ability to export files. You can even export an entire directory in this fashion. If you really want to get serious, you can use Google Takeout, a data migration tool by Google which enables you to export all your Google Drive data, part of Google's Data Liberation Front.

But all those rely on one time manual operations. I want something that works automatically, so for my money it's the Google Drive API that really comes to the rescue. That enables developers to create applications like cloudHQ, which syncs between Google Drive, Dropbox and several other services. I've tried out cloudHQ experimentally and it works on a single folder.
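To give a flavor of what the API makes possible, here's a hedged sketch of listing your files with the official Python client. The OAuth plumbing is elided (assume authorized_http is an httplib2.Http you've already run through the authorization flow), and the names come from the v2 API as of this writing, so treat it as a sketch, not gospel:

    # Sketch: enumerate non-trashed Drive files via the Drive v2 API - the same
    # capability that third-party tools like cloudHQ build on. Assumes you
    # already have an authorized httplib2.Http object from the OAuth flow.
    from apiclient.discovery import build  # pip install google-api-python-client

    def list_drive_files(authorized_http):
        service = build("drive", "v2", http=authorized_http)
        request = service.files().list(q="trashed=false")
        while request is not None:
            response = request.execute()
            for item in response.get("items", []):
                print(item["title"], item["mimeType"])
            # list_next handles the paging token for us
            request = service.files().list_next(request, response)

Once you can enumerate everything, you can copy everything - which is exactly the gap that third-party sync services fill.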

Next I'm going to try it on a larger scale, though it will require a little re-sorting of how I've got Dropbox and Google Drive working. Most likely, I'm going to need to either uninstall Google Drive from my primary computer and sync all its files into Dropbox by cloudHQ, or else manually unsync certain folders so I don't get double-storage on this machine.

Regardless, there is a silver lining. Now let's see if it's also a silver bullet.

-the Centaur

Pictured: Me holding Loki, our outdoor rescue cat. He's large marge, let me tell you.

The End for Google Drive


Screen Shot 2013-03-03 at 1.13.54 PM.png

Recently I was doing some task and needed to track down some information. I couldn't find the document I wanted at first in my Google Drive, but once I did, I realized I had several documents, all on the same topic, so I did the same thing I'd done many times before: I went to the Google Drive folder and reorganized the files.

Big mistake.

Quickly red "x's" started appearing in my folders. More and more "unsyncable" files started showing up in the Google Drive status list. And then a status message popped up: "The files you have deleted are now in Google Drive's Trash."

Uh-oh.

Understand: I had deleted no files or folders. I simply moved them around - and I've done this before. A lot. On Google Drive, not just Dropbox. But something apparently happened in the sync, and Google Drive thought I'd deleted the folders.

So it trashed all those files.

Understand, Google Drive "documents" on your hard drive aren't "documents"; they're little text files with pointers to a location in Google Drive, like this (where UNREADABLE_IDENTIFIER is a string of alphanumeric gobbledegook):

{"url": "https://docs.google.com/document/d/UNREADABLE_IDENTIFIER/edit", "resource_id": "document:UNREADABLE_IDENTIFIER"}

This pathetic little bit of nonsense is all I would have had left of a 200-word start to an essay - if I hadn't acted quickly. I started to look online, and found this alarming bit of information:

https://support.google.com/drive/bin/answer.py?hl=en&answer=2375102

Declutter your Google Drive by removing unwanted and outdated files, folders, and Google Docs from your Google Drive. Anything that you own and remove from Google Drive will be in the trash until you permanently delete or restore them.

Moving Google Docs files out of your Google Drive folder will cause their counterpart files on the web to be moved to the trash. If you then purge the trash, those files will become permanently inaccessible. Because the Docs files in your Drive folder are essentially links to files that exist online, moving these files back into your Drive folder after purging the trash online will not restore the files, as their online counterparts will have been deleted.

OMG! The contents of my documents may be lost forever if I purge the trash. But it gets worse...

http://support.google.com/drive/bin/answer.py?hl=en&answer=2494934

If something in Google Drive is moved to the trash, you'll see a warning and you may lose access to it at any time. Read one of the following sections to learn how to restore it to your Google Drive from the trash. When you restore something, it'll be recovered in Google Drive on the web, to the Google Drive folder on your computer, and to your mobile devices.

If the item is in a folder, you’ll need to restore the entire folder to recover any individual items inside of it.

So I quickly returned to Google Drive. Everything you see above with a little red X was gone, all those files and 150 more. I hunted down the Trash (which was harder than you might think, as there was some persistent search in my Google Drive window that was removing the Trash folder from my view) and restored EVERYTHING that I had never deleted in the first place.

Now, this shouldn't have been a surprise. I always knew this could happen, ever since I gladly installed Google Drive on my Mac in the hope that it would data-liberate the Google Documents I had, only to find to my horror that Google Drive wasn't a syncing system, like Dropbox, but a cloud system, which is useless.

In case anyone misses the point: If you use Google Drive to store documents and also have the Google Drive client stored on a machine, Google Drive can get tricked into thinking you've deleted files, at which point it will move them to the Trash, at which point, unlike things you've deliberately trashed, it can delete them at any time - and you'll never get them back.

After some thought, I'm calling a hard stop on all use of Google Documents, except those I'm using to collaborate with others, where the collaboration features of the Google Doc outweigh the potential risk. I can always save those files to a hard backup as a Word document or an Excel spreadsheet.

But I work for a living as a writer. And I can't work with a system that can arbitrarily trash hundreds of files and thousands upon thousands of words of documents with no hope of recovery just because I moved a folder … correctly.

As with Ecto, I have to rethink my use of these online tools - rethink them in a way that ensures that for every significant thing I use in some convenient online system, I have a saved copy in an archivable backup.
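As a first step toward that kind of backup, you can at least inventory the pointer files, so there's a manifest of every cloud document's URL even if the online copies vanish. A quick Python sketch, assuming the stubs use the .gdoc extension and the JSON format shown above:

    # Walk the local Google Drive folder and print the URL hidden in each
    # Google Docs pointer file. Assumes the ".gdoc" extension and the JSON
    # stub format shown above; adjust ROOT for your machine.
    import json
    import os

    ROOT = os.path.expanduser("~/Google Drive")

    for dirpath, _dirnames, filenames in os.walk(ROOT):
        for name in filenames:
            if name.endswith(".gdoc"):
                with open(os.path.join(dirpath, name)) as stub:
                    data = json.load(stub)
                print(name, "->", data.get("url", "(no url)"))

It's not a backup of the contents - only real exports give you that - but at least nothing can disappear without a trace.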

More updates as I develop a new system.

-the Centaur

Feral Animal Update


theskunk.png

He is very cute, but we are not adopting this one.

And we'll be a little more careful about leaving the food bowls out when Loki is done from now on. We sure don't want a skunk-soaked cat - my wife has already dealt with that once earlier in her life and it is not a way of having the fun.

-the Centaur

Approaching 33, Seen from 44


33-to-44.png

I operate with a long-range planning horizon – I have lists of what I want to do in a day, a week, a month, a year, five years, and even my life. Not all my goals are fulfilled, of course, but I believe in the philosophy “People overestimate what they can do in a year, but underestimate what they can do in a decade.”

Recently, I’ve had that proven to me.

I’m an enormous packrat, and keep a huge variety of old papers and materials. Some people deal with clutter by adopting the philosophy “if you haven’t touched it in six months, throw it away.” Clearly, these people don’t write for a living.

So, in an old notebook, uncovered on one of my periodic archaeological expeditions in my library, I found an essay – a diary entry, really – written just before my 33rd birthday, entitled “Approaching 33” – and I find its perspective fascinating, especially when you compare what I was worried about then with where I am now.

“Approaching 33” was written on the fifth of November, 2001. That’s about five years after I split with my ex-fiancee, but a year before I met my future wife. It’s about a year after I finished my nearly decade-long slog to get my PhD, but ten years before I got a job that truly used my degree. It’s about seven months after I reluctantly quit the dot-com I helped found to care for my dying father, but only about six months after my Dad actually died. And it’s about 2 months after 9/11, and about a month after disagreements over 9/11 caused huge rifts among my friends.

In that context, this is what I wrote on the fifth of November, 2001:

Approaching 33, your life seems seriously off-track. Your chances of following up on the PhD program are minimal – you will not get a good faculty job. And you are starting too late to tackle software development; you are behind the curve. Nor are you on track for being a writer.

The PhD program was a complete mistake. You wasted ten years of your life on a PhD and on your ex-fiancee. What a loser.

Now you approach middle fucking age – 38 – and are not on the career track, are not on the runway. You are stalled, lacking the crucial management, leadership and discipline skills you need to truly succeed.

Waste not time with useful affirmations – first understand the problem, set goals, fix things and move on. It is possible, only if you face clearly the challenges which are ahead of you.

You need to pick and embrace a career and a secondary vocation – your main path and your entertainment – in order to advance at either.

Without focus, you will not achieve. Or perhaps you are FULL OF SHIT.

Think Nixon. He had major successes before 33, but major defeats and did not run for office until your age. You can take the positive elements of his example – learn how to manage now, learn discipline now, learn leadership now, by whatever means are morally acceptable.

Then get a move on your career – it is possible. Do what you gotta do and move on with your life!

It appears I was bitter.

Apparently I couldn’t emotionally imagine I could succeed, but recognized, intellectually, that if I focused on what was wrong, and worked at it, then maybe, just maybe, I could fix it. And in the eleven years that have passed … I mostly have.

Eleven years ago, I was enormously bitter, and regretted getting my PhD. It took five years, but that PhD and my work at my search-engine dot-com helped land me a great job, and after five more years of work I ended up at a job within that job that used every facet of my degree, from artificial intelligence to information retrieval to robotics to even computer graphics. My career took a serious left turn, but I never gave up trying, and eventually, I succeeded as a direct result of trying.

Eleven years ago, I felt enormously alone, having wasted a lot of time on a one-sided relationship that should have ended naturally after its first year, and having wasted many years after that either alone or hanging on to other relationships that were doomed not to work. But I never stopped looking, and hoping, and it took another couple of years before I found my best friend, and later married her.

Eleven years ago, I felt enormously unsure of my abilities as a software developer. At the dot-com I willingly stepped back from a software lead role when I was asked to deliver on an impossible schedule, a decision that was proved right almost immediately, and later took a quarter’s leave to finish my PhD, a decision that took ten years to prove itself. But even though both of those decisions were right, they started a downward spiral of self-confidence, as we sought out and brought in faster, more experienced developers to take over when I stepped back. While my predictions about the schedule were right, my colleagues nevertheless got more done, more quickly, ultimately culling out almost all of the code I wrote for the company. After a while, I felt I was contributing no more and, at the same time, needed to care for my dying father, so I left. But my father died shortly thereafter, six months before we expected.

I found myself unable not to work, thinking it irresponsible even though I had savings, so I found a job at a software company whose technical lead was an old friend who had been the fastest programmer I’d ever worked with in college, and who now had a decade of experience programming in industry – which is far more rigorous than programming in academia. On top of that, I was still recuperating from an RSI scare I’d had four years earlier, when I’d barely been able to write for six months, much less type. So I wrote those bitter words above when I was quite uncertain about whether I’d be able to cut it as a software developer.

Eleven years later — well, I still wish I could code faster. I’m surrounded by both younger and older programmers who are faster and snappier than I am, and I frequently feel like the dumbest person in the room. But I’ve worked hard to improve, and on top of that, slowly, I’ve come to recognize that I have indeed learned a few things – usually, the hard way, when I let someone talk me out of what I’m sure I know, and am later proved right – and have indeed picked up a few skills – synthetic and organizational skills, subtle and hard to measure, which aren’t needed for a small chunk of code but which are vital as projects grow larger in size and design docs and GANTT charts are needed to keep everything on track. I’d still love to code faster, to get up to speed faster, to be able to juggle more projects at once. But I’m learning, and I’ve launched things as a result of what I’ve learned.

But the most important thing is that I’ve been writing. A year after I wrote that note, I gave National Novel Writing Month a try for the first time. I spent years trying to perfect my craft after that, ultimately finding a writing group focused just on writing and not on critique. Five years later, I gave National Novel Writing Month another try, and wrote FROST MOON, which went on to both win some minor awards and to peak high on a few minor bestseller lists. Five years after that, I’ve finished four novels, have starts to four more, and am still writing.

I have picked my vocation and avocation – I’m a computer programmer, and a writer. I actually think of it as having two jobs, a day job and a night job. At one point I thought I was going to transition to writing full time, and I still plan to, but then my job at work became tremendously exciting. Ten years from now, I hope to be a full time writer (and I already have my next “second job” picked out) but I’m in no rush to leave my current position; I’m going to see where it takes me. I learned that long ago when I had a chance to knuckle down and finish my PhD, or join an unrelated but exciting side project to build a robot pet. The choice to work on the emotion model for that pet indirectly landed me a job at two different search engines, even though it was the skills I learned in my PhD that I was ultimately hired for. The choice to keep working on that emotion model directly led to my current dream job, which is one of the few jobs in the world that required the combined skills of my PhD and side project. Now I’m going to do the same thing: follow the excitement.

Who knows where it will lead? Maybe it will help me develop the leadership skills that I complained about in “Approaching 33.” Maybe it will help me re-awaken my research interests and lead to that faculty job I wanted in “Approaching 33.” Maybe it will just help me build a nest egg so when I finally switch to writing full time, I can pursue it with gusto. Or maybe, just maybe, it’s helping me learn things I can’t even yet imagine how I’ll be using … when I turn 55.

After I sign off this blogpost, I’m going to write “Passing 44.” Most of that’s going to be private, but I can anticipate it. I’ll complain about problems I want to fix with my writing – I want it to be more clear, more compelling, more accessible. I’ll complain about problems I want to fix at work – I want to work faster, to ramp up more quickly, and to juggle more projects well while learning when to say no. And I’ll complain about martial arts and athletics – I want to ramp up working out, to return to running, and to resume my quest for a black belt. And there are more things I want to achieve – wanting to be a better husband, friend, pet owner, person – a lot of which I’m going to keep private until I write “Passing 44, seen from 55.”

I’m going to set bigger goals for the next ten years. Some of them might not come to pass, of course. I bet a year from now, I’ll have only seen the barest movement along some of those axes. But ten years from now … the sky’s the limit.

-the Centaur

Pictured: Me at 33 on the left, me at 44 on the right, over a backdrop shot at my home at 44, including a piece of art by my wife entitled "Petrified Coral".

Ecto, Strike Two


Ecto just ate a HUGE post. Second time this has happened.

Time for a new blogging client?

-the Centaur

Caught Up


lokiyawns.jpeg

For one brief moment, I'm caught up.

For the DOORWAYS TO EXTRA TIME anthology, I knew I was diving off the deep end as I'd never edited an anthology before. So, I recruited a more experienced editor, Trisha Wooldridge, who despite being insanely busy, always managed to stay ahead of me on the schedule of getting edits out to our authors.

Well, for the past few days, Trisha was at Boskone, busily talking up our book, whereas I, in contrast, needed to stay at or near home the whole weekend. The whole long, three-day weekend, in which I managed to get all the edits on my plate out to authors, and then to review the correspondence with all our authors to ensure there was nothing left on my plate.

I've "tossed everything over the cube wall" - and for one brief moment, am caught up.

Back to Dakota Frost.

-the Centaur

Pictured: Loki, our outdoor cat, expressing his enjoyment of food coma.

A Note on the Galaxy Note II


galaxy-note-battery-life.png  

Offered without further comment as a testament to the Galaxy Note II's battery life.

-the Centaur

P.S. Actually, because of a problem in my Google profile, I had to disable Browser and Internet sync, which caused the battery to run out over more like 10 hours. This isn't a problem particular to the Galaxy Note II - it punished my Galaxy Nexus, causing it to run out of juice sometimes by 3pm - but I know two friends who have this ginormous phablet and neither one has battery life issues - one actually got his phone to run for 2 days and still had 50% power left.

On this note, it was also these guys that got me to consider the Galaxy Note II. If you haven't seen someone use one, you might think it huge and unusable with its 5.5 inch screen - one person who saw it asked me if it was a phone or a laptop. But what happens when the phone lands in your hand is that you change the angle at which you're holding it, and it feels natural and light. Finding a belt clip to hold it is an issue, but it easily fit in the pocket of my blazer like it had grown there.

Highly recommended.

Blogging is like a job. One I’m bad at.


lokirests.png

One of the things I've always felt about myself is that I'm slow. I have ideas for fiction, but before I ever develop them, I see them brought to completion by someone else. When I was a child, I had a wonderful story involving spacecraft made to look like sailing ships, only to turn on my television to find that it had been done in Doctor Who.

Next I read Drexler's Engines of Creation shortly after it came out and planned a series of nanotech stories, before I'd ever read another science fiction author dealing with the theme. I was in college, still trying to finish my first novel, which I'd updated to include nanotechnology, when Michael Flynn published The Nanotech Chronicles.

Now in the blogoverse, things have gotten worse.

It's bad enough that my evil twin Warren Ellis, a man only one year older than me, has propelled himself to the pinnacle of the writing profession using only whisky and a cane while still blogging more than anyone could believe. But Ellis has his own ideas, and I don't feel like we're competing in the same headspace.

No, it's my nemesis John Scalzi, who has not only beaten me to the punch on the serialized novel The Human Division - I'm pretty sure my own designed-for-serialization novel THE CLOCKWORK TIME MACHINE predates it, but my novel is still in beta draft while his is like, you know, released to accolades and stuff - but also somehow seems to have plugged into my brain, beating my blog to the punch on his Hobbit at 48 Frames Per Second impressions and his attempts to tame a feral cat. I mean, come on! Everyone saw The Hobbit, but even if Scalzi has a direct pipeline to my brain, how does one arrange to have a feral cat fortuitously run by one's door so one can tame it right when someone else does? Is there a service for such things? Synchronicity Unlimited?

Now dark mental wizard Caitlin Kiernan has beaten me to the punch by blogging about the correct pronunciation of kudzu.

Sigh.

Alright, thanks, Caitlin, for breaking the ice on one of my pet peeves. For the record: if you are recording an audiobook and have a Southern character speaking or thinking, they will pronounce the Borg-like pest vine kudzu "CUD-zoo." A character who lives in another part of the country can call it "kood-zoo" all they want, but in my 38 years in The South I never heard it pronounced that, nor, after nine months of research, have I been able to find anyone from The South who calls it anything other than "CUD-zoo," nor have any of those people ever heard anyone from anywhere call it anything other than "CUD-zoo". (And Wikipedia backs me - it claims the pronunciation is /ˈkʊdzuː/, with the first u pronounced as the u in full and the second pronounced as the oo in food).

It wasn't so hard to say that, was it? Why didn't I say that earlier, nine months ago, when I first heard it in an audiobook (I think in The Magnolia League, but it might have been Fallen)? I know I've been busy, but how hard was it? But, according to the timestamp on the image I downloaded of Loki at the start of this blogpost, I've been at this "little" blogpost for about an hour.

What I'm saying is, blogging is like a job. You find things, reflect on them, and post about them; it takes time to do it right. But I already work two jobs: I've got a slightly-more-than-full-time job at The Search Engine That Starts With A G, and I'm also a slightly-less-than-full-time writer. So this, my third job, has to come behind hanging out with my wife, friends and cats. I'm taking time out from editing an anthology to write this, and that's taking time away from Dakota Frost #3 and THE CLOCKWORK TIME MACHINE.

So: yes, I know. Lots to say, lots to do. Gun control. The Hobbit. Meteors falling from the sky and a drill making its way to a creepy buried lake in Antarctica. I'm working on it, I'm working on it - but two editors have claim on my writing first, and the provider of the paycheck that pays for this laptop has first claim on my time before that.

So if the freshness date on these blogposts is not always the greatest, well, sorry, but I'm typing as fast as I can.

-the Centaur

Pictured: Loki, our non-feral outdoor cat, who has grown very fat but not very sassy, given lots of love and canned food.

The Dark Labyrinths of Lovecraft and Borges

Borges and Lovecraft v2.png

In many ways, Howard Phillips Lovecraft and Jorge Luis Borges are different. Howard Phillips Lovecraft wrote dark, atmospheric American horror at the dawn of the twentieth century. Jorge Luis Borges, born ten years later, wrote learned, ethnic Argentinian magic realism.

Lovecraft toiled in obscurity, writing for pulps; Borges was crowned with every prize the literary world has to offer short of the Nobel. Lovecraft was a high school dropout; Borges was a renowned professor of literature.

But in many ways, Howard Phillips Lovecraft and Jorge Luis Borges are similar.

There’s the obvious: both the dropout and the professor were masters of erudition, capable of bringing a vast number of literary techniques to their stories. Both focused largely on stories that were deeply regional, steeped in the lore of the cultures that they loved. And both were obsessed with odd details: for Borges, the labyrinth, the knife, and the tango; for Lovecraft, tangled streets, dark forests, and fishy odors.

But the important similarities between Lovecraft and Borges run far deeper.

Borges plays games with the infinite, constructing labyrinths of time and symbols that dig at the foundations of our concepts of thought and identity. His most famous story, “The Library of Babel,” imagines an infinite library filled with useless books, whose meaning might only be discerned by the all-seeing eye of a god—a story that plays with ideas of faith in a random universe.

Lovecraft plays games with the cosmos, constructing vistas of time and space that threaten the foundations of our concepts of safety and knowledge. His most famous story, “The Call of Cthulhu,” imagines an undersea city inhabited by an enormous monster, whose existence threatens the sanity of humanity—a story that plays with ideas of fear and cosmic insignificance.

Borges and Lovecraft are similar, but not identical.

In Borges, the supernatural rarely breaks into the natural world openly, and when it does, it happens in dreams and visions or subtle events. The supernatural is subtle, but the meaning is not: Borges often tells us his aim directly in his stories, frequently writing them like essays that explore their own morals, or examining their meaning in conversations with himself. Borges plumbs the depths of human thought through stories that show us the vast scale of conceptual space. Throughout his work is a taste of nihilism: humans seeking meaning in a meaningless cosmos.

In Lovecraft, the supernatural manifests in dreams and visions and subtle events, but it always breaks into the natural world openly. The supernatural is not subtle, but the meaning is: Lovecraft rarely tells us his aim directly in his stories, instead writing essays that explain their morals, or examining their meaning in letters to friends. He explores the cosmic through metaphor. Lovecraft plumbs the depths of human insignificance through stories that show us the vast scale of physical space. Throughout his work is a taste of nihilism: humans seeking sanity in an inhuman cosmos.

Lovecraft and Borges are two sides of the same coin.

They write about the same terrors. In Borges, the monsters swim beneath the surface, their shapes only dimly suggested by the churning existential confusion left in their wakes. In Lovecraft, the monsters break the surface, turn their dripping, shaggy visages towards the horrified faces of his protagonists, and show us that if we could truly see what Borges only hints at, we would surely go mad.

-the Centaur

Credits: public domain images of Lovecraft and Borges both from Wikimedia Commons; composition by me.

Going Gonzo


IMG_20130126_140326.jpg

It would be hard to adequately describe the story I'm working on now in the gaps between finishing up the anthology Doorways to Extra Time, but from the reading list I have above, you can fairly assume it's going to be gonzo.

Of course, everything that has Jeremiah Willstone in it is a bit gonzo.

-the Centaur

Practically Vegan Scallops and Grits


IMG_20130123_222505.jpg

I'm not vegan - I'm a carnivore. But there are lots of reasons not to eat meat: many different schools of dietary science recommend restricting your meat intake, meat preparation often involves unnecessary cruelty to animals, raising animals for meat causes more environmental damage than plant harvesting, and it's more expensive.

Plus, my wife is almost vegetarian, so avoiding meat when we are together makes it easier for us to share.

But one of the Southern foods I love, shrimp and grits, is not vegetarian. I love it for the flavor, and for the science: shrimp and scallops are both small food items, so they cool rapidly; embedding them in a bowl of hot grits both keeps them warm and imparts flavor to the grits. And besides, man, come on: cheesy grits and hot sautéed shrimp or scallops. How can you go wrong?

But I'm always thinking of how to adapt dishes so my wife and I can eat together. And it struck me: one of the things we love to eat is baked cauliflower. My wife chops cauliflower up into small florets, brushes them with a little olive oil and a little seasoning to taste (paprika, seafood seasoning, even wild and crazy things like allspice or nutmeg might work) and cooks them until they're turning crispy.

So why can't that be put on grits in place of shrimp?

We talked about it and agreed to the idea. A simple salad - organic greens, mango, walnuts, inspired by a salad from Aqui - grits, and vegan attempts at scallops and shrimp. After we agreed on the menu, I researched recipes online, found a mushroom-based scallop recipe, and called back to confirm with her what she wanted. We cut out a few things from the recipes she didn't want (the cheese, etc.) and I picked it all up.

What can I say? It turned out awesome. We both went back for seconds, and were so taken by the shrimp and grits that we forgot to eat the bread we'd prepared, and she never actually got to her salad before she was full. (I ate mine, though. :-) Here are the pieces of what we did, and then I'll tell you how we put them together into a meal. All of the below served two people, and we were overly full.

Mango Walnut Salad

We always eat salad; I in particular need it with almost every meal, or it doesn't go down well. Even in full-on carnivore mode, I'd rather have steak and salad than steak and potatoes. To partially recreate my favorite mango walnut salad from Aqui … the ingredients:

  • 1 package organic greens (preferably washed, but hey…)
  • 1 mango
  • 1/2 pint strawberries
  • Chopped walnuts
  • Dried cranberries
  • Sweet dressing (preferably mango, but what floats your boat…)

Wash or otherwise prepare the organic greens. Dice the mango. Wash and slice the strawberries. Assemble the salad by adding greens, walnuts, cranberries, mango and strawberries to taste. (Obviously, we did not use all the ingredients; my wife will be eating from the above for several days, as is her habit.) Add dressing to taste or make the dressing available on the table (actually, my wife and I often eat the salad dry, but that's an idiosyncrasy from training ourselves to use very little dressing).

French Bread Toast

One good addition to shrimp and grits is toast. We don't even use cheesy toast or garlic bread, just good French bread. Ingredients:

  • 1 loaf French bread

Slice off several pieces; do so diagonally if you want more surface area. Toast until your favorite degree of brown. Not hard. Don't forget to eat this as you are scarfing down grits later, or you will be sad that you have missed part of the experience.

Crispy Cauliflower Vegan Shrimp

These don't taste that much like shrimp, really, but they serve the same role in the dish and they're awesome. Ingredients:

  • 1 head cauliflower
  • Olive oil
  • Seasonings to taste (paprika, seafood seasoning, dill, even allspice or nutmeg)

Preheat an oven to … uh, I dunno, my wife cooked this. Preheat it to something or other that's really hot. (UPDATE: my wife says to heat it to 430 to 450.) Chop up or break up the cauliflower into small florets, a bit larger than a shrimp. Let it dry before you put the oil on or it will be soggy (though that doesn't taste bad either). Put the cauliflower in a bowl and add enough olive oil to coat it without making it soggy - just kind of drizzle it on and stir it around real good. Add spices - use seafood spices if you want a shrimpy taste. The great thing about cauliflower is you can use almost any seasoning you want. Paprika oddly doesn't have much flavor, my wife claims; dill is better, but the ones listed above are her favorites. Spread the cauliflower on a tray and place it in the oven. Leave it for about an hour, removing it when the cauliflower starts to crinkle up. You have a LOT of leeway on this, as roasted cauliflower is edible and delicious all the way from almost purely raw to shriveled and almost burnt.

King Oyster Mushroom Vegan Scallops

These are what make the dish. I adapted the recipe King Oyster Mushroom Vegan Scallops to make this, and it was awesome. We used only four oyster mushrooms and were sad; we should have used eight. Ingredients:

  • 8 "king" size oyster mushrooms
  • 1 Shallot or other oniony thing
  • Ground pepper
  • Soy sauce, tamari sauce, or an equivalent salty marinade
  • Honey, brown sugar syrup, or other sweet taste
  • High heat safflower oil

Lightly wash and pat dry the oyster mushrooms; they need to be very dry to absorb the marinade. Slice the thick stems into scallop-sized discs. Shallots as we used them are orangey, garlicky-looking things; chop up one shallot clove, or half of one, into very tiny bits. Add the mushroom pieces and shallots to a bowl and drench them with enough soy sauce to either cover them, or mostly cover them so you can repeatedly drizzle the sauce over them with a spoon (we did the latter). Add a small amount of honey, or more if you want a sweeter taste. (The original poster recommended other optional things like liquid smoke, which are out of my cooking league at this time.) Add ground pepper. Let it marinate for 15-30 minutes or so, until the oyster mushrooms are picking up the color of the marinade. Then we're ready to cook, though we want to time it to finish up with the grits and other stuff.

IMG_20130123_221657.jpg

Take a pan and heat it to high heat with a dollop of high heat safflower oil. Add the mushroom discs, flat side down; don't stir around too much or you will "disturb the sear". After a minute, flip the mushrooms and let them sear again. Then add the rest of the sauce. This is the point where big boys and girls who read Cook's Illustrated can add a splash of white wine or liquid smoke to get a more complex flavor.

"Next is the most important part to good scallop mushrooms" says Kathy of the King Oyster Mushroom Vegan Scallops recipe, and she ain't lying. Turn the heat down to medium high and tilt the pan so the juices and 'shrooms pool together and use a spoon to lift and pour the juices over the mushrooms. The heat will continue to evaporate the sauce; I guess this is what big boys and girls who read Cook's Illustrated call a reduction, but I just call it thirty one flavors of delicious. Keep doing this until the sauce is almost gone and the "scallops" are nice and dark and cooked.

Oh, on timing, you want to do this whole step almost last. I'll get to that in the next sections.

Grits

Get a package of grits, boil water, add grits. Follow the package. Butter and such are not necessary; you'll have topping. Oh wait, thanks to a blog post crash I can now ask my wife what she did. Ingredients:

  • 1 cup white corn grits
  • 3 cups water

We did old-fashioned grits rather than instant grits, probably because I was at Whole Foods, but it turned out to work really well for us. Follow the directions, but basically: boil the water, add the grits, turn the heat down. It depends on the grits you get.

Topping for Vegan Shrimp and Grits

When I have shrimp and grits at a place like Nola, the grits are usually drizzled in some kind of barbecue sauce and/or the shrimp and grits are drizzled in some combination of barbecue sauce and a salsa-like topping. When I looked online, I found a recipe which seemed similar, Vegan "Shrimp" and Grits, but it was tofu-based. From it and my leftover ingredients I improvised the following topping. Ingredients:

  • the rest of that shallot you used earlier, or a similar oniony thing
  • 2 cloves garlic
  • 1/4 white onion
  • Olive oil
  • Soy sauce
  • Salsa

Chop up the shallot really fine. Do the same thing to the garlic. Dice the onion. In a saucepan, add olive oil at medium-high heat, then add your onion mixture. Cook until the onions are translucent or even golden brown. Then add the soy sauce and flinch back from the spray of oil. The boiling liquid will cook the onions the rest of the way (seriously, I'm not joking around about the soy and the flinching; I do this for other dishes too, and it's a perfectly legitimate if messy cooking technique). Add the salsa once the mixture has started to reduce. We went heavy on the salsa and reduced it until it was thick, but in hindsight, when I order this dish out, the sauce is often runny enough to pour over the grits and give them a good flavor. You might achieve this with more soy or Worcestershire sauce.

On timing, I actually did the king oyster marinade first, then used the shallot while making this, then came back to finish off the sautéing of the "scallops." So now's a good time to talk about how to put this all together.

Practically Vegan Scallops and Grits

Prepare your salad first and set it aside. Cut your bread and set it aside, preparing to toast it at the last minute. If you're making vodka mango smoothies to go along with your meal (recipe not shown), do that in advance too. Start the cauliflower. Chop and prepare the marinade. Start the grits. Chop and prepare the topping. Around this time(ish), take the grits off. Take the finished topping off the heat. Sauté your mushrooms. When they're almost done, start toasting your bread. Take the mushrooms off. Scoop grits into your bowl. Scoop scallops on top of the grits. Scoop cauliflower on top of the grits. Add your topping to taste. Add your toast pieces. Make sure you have forks, knives, salt, pepper, and optionally Tabasco.

Bring out your salad bowls along with your bowls of vegan grits, serve, eat, and bliss out.

The yums. Definitely doing this again.

-the Centaur

Afterword: Why "Practically" Vegan?

Well, my wife feels like she's about done with animal products, except for eggs whose source she can verify from local farms, or the occasional cheese or dairy product as part of a vegetarian meal at our favorite local restaurant. But I'm not vegan, and I didn't take any special steps to make sure the ingredients for these recipes were vegan.

So the practical upshot is, I can't guarantee these recipes are vegan. The source recipes are vegan and we're pretty sure all the ingredients were plant products, but I am not a vegan, and I can't guarantee that some non-vegan items didn't slip their way in there.

After-afterword: What's up with this blogpost?

If you read it earlier, and it was weirdly truncated, it was because of some weird interaction between WordPress, Ecto, and me slamming my laptop lid when I thought Ecto had finished uploading my blog entry. The downside is I had to rewrite half of it and it's much less funny when I'm not typing at 500 words a minute trying to finish before Coupa Cafe closes. The upside is my wife was behind me while I typed, cooking another iteration of this meal for her late-night dinner, and she filled in a lot of the things that I missed and corrected some things I got wrong.

Humans are Good Enough to Live

centaur 0

goodness.png
I'm a big fan of Ayn Rand and her philosophy of Objectivism. Even though there are many elements of her philosophy which are naive, or oversimplified, or just plain ignorant, the foundation of her thought is good: we live in exactly one shared world which has a definitive nature, and the good is defined by things which promote the life of human individuals.

It's hard to overestimate the importance of this move, this Randian answer to the age-old question of how to get from "is" to "ought" - how to go from what we know to be true about the world to deciding what we should do. In Rand's world, ethical judgments are judgments made by humans about human actions - so the ethical good must be things that promote human life.

This may seem like a trivial philosophical point, but there are many theoretically possible definitions of ethics, from the logically absurd "all actions taken on Tuesday are good" to the logically indefensible "things are good because some authority said so." Rand's formulation of ethics echoes Jesus's claim that goodness is not found in the foods you eat, but in the actions you do.

But sometimes it seems like the world's a very depressing place. Jesus taught that everyone is capable of evil. Rand herself thought that nothing is given to humans automatically, that they must choose their values, and that the average human, who never thinks about values, is pretty much a mess of contradictory assumptions, doing good only through luck.

But I realized Rand's wrong about that, because her assumption - that nothing is given to humans automatically - is wrong. She's a philosopher, not a scientist, and she wasn't aware of the great strides that have been made in the understanding of how we think - because some of those strides were made in technical fields near the very end of her life.

Rand rails against philosophers like Kant, who proposes, among many other things, that humans perceive reality unavoidably distorted by filters built into the human conceptual and perceptual apparatus. Rand admitted that human perception and cognition have a nature, but she believed humans could nonetheless perceive reality objectively. Well, in a sense, they're both wrong.

Modern studies of bias in machine learning show that it's impossible - mathematically impossible - to learn any abstract concept without some kind of bias. In brief, if you want to predict something you've never seen before, you have to take some stance towards the data you've seen already - a bias - but there is no logical way to pick a correct bias. Any one you pick may be wrong.
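To make that concrete, here's a toy sketch of inductive bias in action - my own hypothetical example in Python with numpy, not drawn from any particular study. Two learners see the same five data points, but their different biases lead them to wildly different predictions about a point they've never seen:

    # Toy illustration of inductive bias (a hypothetical example).
    # Two learners fit the same five data points; their differing
    # biases only become visible when they extrapolate.
    import numpy as np

    x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
    y = np.array([0.1, 1.1, 1.9, 3.2, 3.9])  # roughly y = x, plus noise

    linear = np.polyfit(x, y, deg=1)   # bias: "the world is a line"
    wiggly = np.polyfit(x, y, deg=4)   # bias: "the world is a curve"

    x_new = 6.0                        # a point neither learner has seen
    print(np.polyval(linear, x_new))   # extrapolates to roughly 6
    print(np.polyval(wiggly, x_new))   # swings far away, below -9

Both learners are consistent with everything they've observed; only their biases differ, and there's no logical way to tell, from the data alone, which one picked correctly.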

So, like Kant suggested, our human conceptual processes impose unavoidable biases on the kind of concepts we learn, and unlike Rand wanted, those biases may prove distorting. However, we are capable of virtual levels of processing, which means that even if our base reasoning is flawed, we can build a more formal one, like mathematics, that avoids those problems.

But, I realized, there's an even stronger reason to believe that things aren't as bad as Kant or Rand feared, a reason founded in Rand's ideas of ethics. Even human communities that lack a formalized philosophy are nonetheless capable of building and maintaining systems that last for generations - which means the human default bias leads to concepts that are Randian goods.

In a way, this isn't surprising. From an evolutionary perspective, if any creature inherited a set of bad biases, it would learn bad concepts, and be unable to reproduce. From a cognitive science perspective, the human mind is constantly attempting to understand the world and to cache the results as automatic responses - what Rand would call building a philosophy.

So, if we are descendants of creatures that survived, we must have a basic bias for learning that promotes our life; and if we live as rational creatures constantly attempting to understand the world, persisting in communities that have lasted for generations, we must have a basic bias towards a philosophy which is just good enough to prevent our destruction.

That's not to say that the average human being, on their own, without self-examination, will develop a philosophy that Rand or Jesus would approve of. And it's not to say that individual human beings aren't capable of great evil - or that human communities aren't capable of greater evil towards their members.

But it does mean that humans are good enough to live on this Earth.

Just our continued existence shows that even though it seems like we live in a cold and cruel universe, the cards are stacked just enough in humanity's favor for it to be possible for at least some people to thrive. It also shows that while humans are capable of great evil, the bias of humanity is stacked just enough in our favor for human existence to continue.

Rising above the average, of course, is up to you.

-the Centaur

My New Year’s Gift To You: A Mulligan

centaur 0

mulligan-01-v1.png

If you're not one of those people who gives yourself too much to do, this post may not be for you.

For the rest of us, with goals and dreams and drive, do you ever feel like you've got too much to do? I'm not talking about wanting more hours in the day, which we all do, but about simply having too many things to do ... period. That sense that, even if you had a magic genie willing to give you endless hours, you'd never get everything you wanted to do done.

todolist.png

To keep track of stuff, I use a Hipster PDA, enterprise edition - 8.5x11 sheets of paper, folded on their long axis, with TODO items written on them and bills and such carried within the folder. Each todo has a little box next to it that I can check off, and periodically I copy items from a half-filled sheet to a new sheet, reprioritizing as I go.

But I'm a pack rat, so I keep a lot of my old TODO lists, organized in a file. Sometimes the TODO sheets get saved for other reasons - for example, the sheets are good headers for stacks of papers and notes related to a project. As projects get completed, I come across these old sheets, and have the opportunity to review what I once thought I had to do.

And you know what? Most of the things that you think you need to do are completely worthless. They're ideas that had relevance at the time, that may have seemed pressing at the time, but were really cover-your-ass responses to possibilities that never came to pass. The situation loomed, came, and then passed you by ... and should take your TODOs with it.

gabbysleeps.png

I'm not saying you shouldn't have things on your TODO list. I'm planning my 2013 right now. And I'm not saying you should give yourself a pass on obligations you've incurred to others. But I am saying you don't need to maintain every commitment you've ever made to yourself, especially those that came in the form of a TODO list item or a personal challenge.

As an example, a thing I do is take pictures of food and post them to my Google+ stream. Originally I was doing this as preparation for writing restaurant reviews, but I found I actually like the images of food more than I want to spend the time writing reviews, especially since I have so much more writing to do. But when I get busy, I'll take more pictures than I post. I get a backlog.

So how much effort should I take going back to post the pictures? None is one good answer, but that raises the question: why are you taking the pictures in the first place? Periodically is another good answer, but it's actually difficult to figure out what I've posted and what I haven't, so hunting through my image feeds can become its own form of archaeology.

vegetarianplate.png

But you know what? The world won't come to an end if I don't post every picture I've ever taken of one of my favorite dishes at my favorite restaurants. If you're not obsessive-compulsive, you may not understand this, but the thought of something you said you were going to do that isn't getting done is an awful torment to those of us who are.

That's where a mulligan comes in. In the competitive collectible card game Magic: The Gathering, players compose decks of cards which they use in duels with other players - but no matter how well a player has prepared his or her deck, success depends on a good initial hand of cards. The best deck in the world can be useless if you draw seven "lands" - or none.

So the game allows you to "mulligan" - to discard that initial hand and re-draw with one less card. That's a slight disadvantage, but a hand with no "lands" is useless - you can't do anything on the first round, and your opponent will clean your clock. Better to have a balanced hand of six cards than seven you can't do anything with at all. Better to have at least a chance to win.
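Just how bad are the odds? Here's a back-of-the-envelope check - my numbers, assuming a hypothetical but typical 60-card deck running 24 lands, not anything official - using the hypergeometric distribution:

    # Odds of a dead opening hand in Magic: The Gathering, assuming a
    # hypothetical but typical 60-card deck that runs 24 lands.
    from math import comb

    def hand_odds(lands_drawn, deck=60, lands=24, hand=7):
        """Hypergeometric chance of exactly `lands_drawn` lands in hand."""
        return (comb(lands, lands_drawn)
                * comb(deck - lands, hand - lands_drawn)
                / comb(deck, hand))

    print(f"seven lands: {hand_odds(7):.2%}")  # about 0.09%
    print(f"no lands:    {hand_odds(0):.2%}")  # about 2.2%

Roughly one opening hand in forty-five is dead on arrival one way or the other - rare enough to feel unfair, common enough that over a night of play you'll be glad the mulligan exists.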

anewpath.png

So that's my gift to you all this New Year's Eve: declare yourself a mulligan. Maybe the turn of the seasons is just a notch on the clock, but use this passage as a point of inspiration. It's a new year, a new day, the starting point of a new path. Remind yourself of your real goals, and throw away any out-of-date TODOs and collected personal obligations that are holding you back.

Hug your wife, pay your bills, feed your cats. Write the software that pays the bills, and the books that you plan to write.

But don't let yourself get held back by something you wrote a year ago on a piece of paper.

Not for one minute.


PANO_20121024_152819.jpg

If you let yourself, the sky is your limit.

-the Centaur

Why Bipartisanship is Dead

centaur 0

fadedflag2.png

Ever feel like bipartisanship is dead and the two parties can't agree on anything? Well, there's a reason for that: even if they agree, they can't pass anything. The House of Representatives has a rule which says the only bills that can be brought to the House floor are ones approved by a majority of the majority party:

http://www.npr.org/blogs/itsallpolitics/2012/12/30/168309508/fiscal-cliff-debate-why-the-very-few-rule-the-many-in-congress

    But having enough votes is not enough. In fact, it is likely the package will not even be brought to the floor for debate and a vote. How can this be? Even if a majority of the whole House (Republicans and Democrats) were prepared to swallow the Senate deal, they won't get a chance unless Speaker John Boehner brings it to the floor. And Boehner probably won't. He has adopted a rule that no measure will be voted on unless it is supported by a majority of the majority party — that is, his party, the Republicans.

Now, I understand that there are many people, particularly on the right, who believe the job of politics is not to get good things done, but to prevent the government from doing bad things. So this kind of stalemate may seem appropriate. But for people on the left and right who just want to get to consensus, find a solution and move on, it seems crazy.

Even if John Boehner, Speaker of the House, came to agreement with President Barack Obama about the latest crisis, even if an overwhelming majority of the House and the Senate agreed with him, a minority of House representatives could prevent a deal from being reached. The Senate is in the same state: if a single senator filibusters a bill, it takes a supermajority of senators to break it - essentially, again letting a minority block the country's progress.

I strongly believe in the rights of the minority. I used to say "the majority is always wrong". But I've come to understand that partisans, who put allegiance to their party over the good of the country, are almost always more wrong than the majority. Three procedural rules make partisans a grave danger to our republic: closed political primaries (so only partisans can be nominated by their parties), the House majority-of-the-majority rule, and Senate filibusters.

Time to end all three of these, so we can move forward on things a majority of the country can agree on.

-the Centaur

A Really Good Question

centaur 0

layout.png

Recently I was driving to work and thinking about an essay by a statistician on "dropping the stick." The metaphor was about a game of pick-up hockey, where an inattentive player would be asked to "drop the stick" and skate for a while until they got their head in the game. In the statistical context, this became the practice of stopping people who were asking for help with a specific statistical task and asking what problem they wanted to solve, because solving the actual problem is often very different from fixing their technical issue and may require a completely different approach.

That gets annoying sometimes when you ask a question of a mailing list and someone asks what you're trying to solve rather than addressing the issue you've raised, but it's a good reflex to have: first ask, "What's the problem?"

Then I realized something even more important about projects that succeeded or failed in my life – successes at radical, off-the-wall projects like the emotional robot pet project, the cell phone robots with personalities project, and the 3D object visualization project, and failures at seemingly simpler problems like a tweak to a planner at Carnegie Mellon, a test domain for my thesis project, and the failed search improvement I worked on during my third year at the Search Engine that Starts with a G. One of the things I noticed about the successes is that before I got started, I did a hard-core, intensive research effort to understand the problem space before I tackled the problem proper; then I chose a method of approach; and then I planned out a solution. Paraphrasing Eisenhower: even though the plan often had to change once we started execution, the planning was indispensable. The day-to-day immersion in the problem that you need for planning provides the mental context you need to make the right decisions as the situation inevitably changes.

In failed projects, I found one or more of those things – the hard-core research or the planning – wasn't present, but that wasn't all that was missing. In the failure cases, I often didn't know what a solution would look like. I recently saw this from the outside when I conducted a job interview, and found that the interviewee clearly didn't understand what would constitute an answer to my question. He had knowledge, and he was trying, but his suggested moves were only analogically correct - they sounded like elements of a solution, but didn't connect to the actual features of the problem.

Thinking back, a case that leapt to mind from my own experience was a project all the way back in grade school, where we had an urban planning exercise to create an ideal city. My job was to create the map of the city, and I took the problem very literally, starting with a topographical map of the city's center, river and hills. Now, it's true that the geography of a city is important - for an ideal city, you'd want a source of water, easy transport, a relatively flat area for many buildings, and at least one high point for scenic vistas. But there was one big problem with my city plan: there were no buildings, neighborhoods, or districts on it! No people! It was just the land!

Ok, so I was in grade school, and this was one of my first projects, so perhaps I could be excused for not knowing what I was doing. But the educators who set up this project knew what they were doing, and they brought on board an actual city planner to talk to us about our project. When he saw my maps, he pointed out this wasn't a city plan and sat down with all of us to brainstorm what we'd actually want in a city - neighborhoods, power plants, a city center, museums, libraries, hospitals, food distribution and industrial regions. At the time, I was saddened that my hard work was abandoned, and now in hindsight I'm saddened that the city planner didn't take a minute or two to talk about how geography affects cities before beginning his brainstorming exercise. But what struck me most about this in hindsight is that I really didn't know what constituted an answer to the problem.

suddenclarity.png  

So, I asked myself, “What counts as a solution to this problem?” – and that, I realized, is a very good question.

-the Centaur

Pictured: an overhead shot of a diorama of the control room of the ENIAC computer as seen at the Computer History Museum, and of course our friend Clarence having his sudden moment of clarity.