
Visualizing Cellular Automata



cellular-automata-v1.png

So, why's an urban fantasy author digging into the guts of Mathematica trying to reverse-engineer how Stephen Wolfram drew the diagrams of cellular automata in his book A New Kind of Science? Well, one of my favorite characters to write about is the precocious teenage weretiger Cinnamon Frost, who at first glance was a dirty little street cat until she blossomed into a mathematical genius when watered with just the right amount of motherly love. My training as a writer was in hard science fiction, so even if I'm writing about implausible fictions like teenage weretigers, I want the things that are real - like the mathematics she develops - to be right. So I'm working on a new kind of math behind the discoveries of my little fictional genius, but I'm not the youngest winner of the Hilbert Prize, so I need tools to help simulate her thought process.

And my thought process relies on visualizations, so I thought, hey, why don't I build on whatever Stephen Wolfram did in his groundbreaking tome A New Kind of Science, which is filled to its horse-choking brim with handsome diagrams of cellular automata, their rules, and the pictures generated by their evolution? After all, it only took him something like ten years to write the book ... how hard could it be?

Deconstructing the Code from A New Kind of Science, Chapter 2

Fortunately Stephen Wolfram provides at least some of the code that he used for creating the diagrams in A New Kind of Science. He's got the code available for download on the book's website, wolframscience.com, but a large subset is in the extensive endnotes for his book (which, densely printed and almost 350 pages long, could probably constitute a book in their own right). I'm going to reproduce that code here, as I assume it's short enough to fall under fair use, and for the half-dozen functions we've got here any attempt to reverse-engineer it would end up just recreating essentially the same functions with slightly different names.

Cellular automata are systems that take patterns and evolve them according to simple rules. The most basic cellular automata operate on lists of bits - strings of cells which can be "on" or "off" or alternately "live" or "dead," "true" and "false," or just "1" and "0" - and it's easiest to show off how they behave if you start with a long string of cells which are "off" with the very center cell being "on," so you can easily see how a single live cell evolves. And Wolfram's first function gives us just that, a list filled with dead cells represented by 0 with a live cell represented by 1 in its very center:

In[1]:= CenterList[n_Integer] := ReplacePart[Table[0, {n}], 1, Ceiling[n/2]]


In[2]:= CenterList[10]
Out[2]= {0, 0, 0, 0, 1, 0, 0, 0, 0, 0}


One could imagine a cellular automaton which updated each cell based only on its own contents, but that would be really boring, as each cell would be effectively independent. So Wolfram looks at what he calls "elementary automata," which update each cell based on its neighbors. Counting the cell itself, that's a row of three cells, and there are eight possible combinations of live and dead cells across those three positions - and only two possible values that can be set for each new cell, live or dead. Wolfram's brain flash was to list the eight possible combinations in the same order every time, so all a rule needs is its list of eight output values of "live" or "dead" - or 1's and 0's - and since a list of 1's and 0's is just a binary number, that enabled Wolfram to represent each elementary rule as a number:

In[3]:= ElementaryRule[num_Integer] := IntegerDigits[num, 2, 8]

In[4]:= ElementaryRule[30]
Out[4]= {0, 0, 0, 1, 1, 1, 1, 0}
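
To make the encoding concrete, here's a small check of my own (it's not in Wolfram's notes): pairing the eight neighborhoods, listed from 111 down to 000, with the digits of rule 30 spells out the rule's transitions. Assuming I've read the encoding right, it should evaluate to something like this:

Thread[(IntegerDigits[#, 2, 3] & /@ Range[7, 0, -1]) -> ElementaryRule[30]]
(* expected: {{1,1,1} -> 0, {1,1,0} -> 0, {1,0,1} -> 0, {1,0,0} -> 1,
   {0,1,1} -> 1, {0,1,0} -> 1, {0,0,1} -> 1, {0,0,0} -> 0} *)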


Once you have that number, building code to apply the rule is easy. The input data is already a list of 1's and 0's, so Wolfram's rule for updating a list of cells basically involves shifting ("rotating") the list left and right, adding up the values of the three neighbors according to base 2 notation, and then looking up the value in the rule. Wolfram created Mathematica in part to help him research cellular automata, so the code to do this is deceptively simple…

In[5]:= CAStep[rule_List, a_List] :=
rule[[8 - (RotateLeft[a] + 2 (a + 2 RotateRight[a]))]]


... a “RotateLeft” and a “RotateRight” with some addition and multiplication to get the base 2 index into the rule. The code to apply this again and again to a list to get the history of a cellular automaton over time is also simple:

In[6]:= CAEvolveList[rule_, init_List, t_Integer] :=
NestList[CAStep[rule, #] &, init, t]
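
As a quick sanity check (mine, not Wolfram's), a single step of rule 30 on a short row with one live cell should light up the cells around it, and a couple of steps of CAEvolveList should start growing the familiar widening triangle - at least, that's what I'd expect these calls to return:

CAStep[ElementaryRule[30], {0, 0, 1, 0, 0}]
(* expected: {0, 1, 1, 1, 0} *)

CAEvolveList[ElementaryRule[30], CenterList[7], 2]
(* expected: {{0, 0, 0, 1, 0, 0, 0}, {0, 0, 1, 1, 1, 0, 0}, {0, 1, 1, 0, 0, 1, 0}} *)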


Now we're ready to create the graphics for the evolution of Wolfram's "rule 30," the very simple rule which shows highly complex and irregular behavior, a discovery which Wolfram calls "the single most surprising scientific discovery [he has] ever made." Wow. Let's take it for a whirl and see what we get!

In[7]:= CAGraphics[history_List] :=
Graphics[Raster[1 - Reverse[history]], AspectRatio -> Automatic]


In[8]:= Show[CAGraphics[CAEvolveList[ElementaryRule[30], CenterList[103], 50]]]
Out[8]=

rule-30-evolution.png



Uh-oh. The "Raster" code that Wolfram provides is the code to create the large images of cellular automata, not the sexy graphics that show the detailed evolution of the rules. And reading between the lines of Wolfram's endnotes, he started his work in FrameMaker before Mathematica was ready to be his full publishing platform, with a complex build process producing the output - so there's no guarantee that clean, simple Mathematica code even exists for some of those early diagrams.

Guess we'll have to create our own.

Visualizing Cellular Automata in the Small

The cellular automata diagrams that Wolfram uses have boxes with thin lines, rather than just a raster image with 1's and 0's represented by borderless boxes. They're particularly appealing because the lines are white between black boxes and black between white boxes, which makes the structures very easy to see. After some digging, I found that, naturally, a Mathematica function to create those box diagrams does exist, and it's called ArrayPlot, with the Mesh option set to True:

In[9]:= ArrayPlot[Table[Mod[i + j, 2], {i, 0, 3}, {j, 0, 3}], Mesh -> True]
Out[9]=

checkerboard.png


While we could just use ArrayPlot, it's important when developing software to encapsulate our knowledge as much as possible, so we'll create a function CAMeshGraphics (following the way Wolfram named his functions) that encapsulates the knowledge of setting the Mesh option to True. If later we decide there's a better representation, we can just update CAMeshGraphics, rather than hunting down every use of ArrayPlot. This function gives us this:

In[10]:= CAMeshGraphics[matrix_List] :=
ArrayPlot[matrix, Mesh -> True, ImageSize -> Large]


In[11]:= CAMeshGraphics[{CenterList[10], CenterList[10]}]
Out[11]=

lines-of-boxes.png


Now, Wolfram has these great diagrams to help visualize cellular automata rules, which show the neighbors up top and the output value at bottom, with a space between them. GraphicsGrid does what we want here, except that by its nature it resizes all the graphics to fill each available box. I'm sure there's a clever way to fix this, but I don't know Mathematica well enough to find it, so I'm going to go back on what I just said earlier, break out the options on ArrayPlot, and tell the boxes to be the size I want:

In[20]:= CATransitionGraphics[rule_List] :=
GraphicsGrid[
Transpose[{Map[
   ArrayPlot[{#}, Mesh -> True, ImageSize -> {20 Length[#], 20}] &, rule]}]]


That works reasonably well; here's an example rule, where three live neighbors in a row kill the center cell:

In[21]:= CATransitionGraphics[{{1, 1, 1}, {0}}]
Out[21]=

Screenshot 2016-01-03 14.19.21.png  

Now we need the pattern of digits that Wolfram uses to represent his neighbor patterns. Looking at the diagrams and after some digging in the code, it seems like these digits are simply listed in reverse counting order - that is, for 3 cells, we count down from 2^3 - 1 to 0, represented as binary digits.

In[22]:= CANeighborPattern[num_Integer] :=
Table[IntegerDigits[i, 2, num], {i, 2^num - 1, 0, -1}]


In[23]:= CANeighborPattern[3]
Out[23]= {{1, 1, 1}, {1, 1, 0}, {1, 0, 1}, {1, 0, 0}, {0, 1, 1}, {0, 1, 0}, {0, 0,
1}, {0, 0, 0}}


Stay with me - that only gets us the first row of the CATransitionGraphics; to get the next row, we need to apply a rule to that pattern and take the center cell:

In[24]:= CARuleCenterElement[rule_List, pattern_List] :=
CAStep[rule, pattern][[Ceiling[Length[pattern]/2]]]


In[25]:= CARuleCenterElement[ElementaryRule[30], {0, 1, 0}]
Out[25]= 1


With all this, we can now generate the pattern of 1's and 0's that represents the transitions for a single rule:

In[26]:= CARulePattern[rule_List] :=
Map[{#, {CARuleCenterElement[rule, #]}} &, CANeighborPattern[3]]

In[27]:= CARulePattern[ElementaryRule[30]]
Out[27]= {{{1, 1, 1}, {0}}, {{1, 1, 0}, {0}}, {{1, 0, 1}, {0}}, {{1, 0, 0}, {1}}, {{0,
   1, 1}, {1}}, {{0, 1, 0}, {1}}, {{0, 0, 1}, {1}}, {{0, 0, 0}, {0}}}
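
One more check along the same lines (assuming CARuleCenterElement really does pick out the center cell of each three-cell pattern): the output cells in that list should simply read back the rule's own digits in order:

Flatten[Map[Last, CARulePattern[ElementaryRule[30]]]]
(* expected: {0, 0, 0, 1, 1, 1, 1, 0} - the same digits ElementaryRule[30] returns *)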


Now we can turn it into graphics, putting it into another GraphicsGrid, this time with a Frame.

In[28]:= CARuleGraphics[rule_List] :=
GraphicsGrid[{Map[CATransitionGraphics[#] &, CARulePattern[rule]]},
Frame -> All]


In[29]:= CARuleGraphics[ElementaryRule[30]]
Out[29]=

Screenshot 2016-01-03 14.13.52.png

At last! We've got the beautiful transition diagrams that Wolfram has in his book. Now we want to apply the rule to a row with a single live cell:

In[30]:= CAMeshGraphics[{CenterList[43]}]
Out[30]=

Screenshot 2016-01-03 14.13.59.png

What does that look like? Well, we once again take our CAEvolveList function from before, but rather than formatting it with Raster, we format it with our CAMeshGraphics:

In[31]:= CAMeshGraphics[CAEvolveList[ElementaryRule[30], CenterList[43], 20]]
Out[31]=

Screenshot 2016-01-03 14.14.26.png

And now we've got all the parts of the graphics which appear in the initial diagram of this page. Just to work it out a bit further, let's write a single function to put all the graphics together, and try it out on rule 110, the rule which Wolfram discovered could simulate any possible program, making it effectively a universal computer:

In[22]:= CAApplicationGraphics[rule_Integer, size_Integer] := Column[
{CAMeshGraphics[{CenterList[size]}],
   CARuleGraphics[ElementaryRule[rule]],
   CAMeshGraphics[
CAEvolveList[ElementaryRule[rule], CenterList[size],
   Floor[size/2] - 1]]},
Center]

In[23]:= CAApplicationGraphics[110, 43]
Out[23]=


Screenshot 2016-01-03 14.14.47.png

It doesn't come out quite the way it did in Photoshop, but we're getting close. Further learning of the rules of Mathematica graphics will probably help me, but that's neither here nor there. We've got a set of tools for displaying diagrams, which we can craft into what we need.

Which happens to be a non-standard number system unfolding itself into hyperbolic space, God help me.

Wish me luck.

-the Centaur

P.S. While I'm going to do a standard blogpost on this, I'm also going to try creating a Mathematica Computable Document Format (.cdf) for your perusal. Wish me luck again - it's my first one of these things.

P.P.S. I think it's worthwhile to point out that while the tools I just built help visualize the application of a rule in the small …

In[24]:= CAApplicationGraphics[105, 53]
Out[24]=

Screenshot 2016-01-03 14.14.58.png

... the tools Wolfram built help visualize rules in the very, very large:

In[25]:= Show[CAGraphics[CAEvolveList[ElementaryRule[105], CenterList[10003], 5000]]]

Out[25]=

rule-105-a-lot.png

That's 10,000 times bigger - 100 times bigger in each direction - and Mathematica executes and displays it flawlessly.

The Future of Books is Bright


20150528_182121.jpg

Some time ago my good friend Jim Davies said, "If I was a traditional publisher or bookstore owner, I'd be very worried about my business with the rise of ebooks" - and he's right. While the demise of the bookstore Borders may be more properly laid at the feet of Walmart and Costco than Kindle and Kobo, ebooks have disrupted the traditional publishing industry. Once you had to, like, go to a place and shell out money to get a thick tome; now you can pull books out of the air into a wedge of magic in your pocket, sometimes for free. If I owned a publishing company or bookstore, I'd be worried: the number of people who buy traditional books is dropping, and from Borders to Borderlands to Bookbuyers to Keplers, bookstores are in trouble.

But are books? At the time I interpreted what Jim said as indicating the demise of books, but he didn't say that at all: he just pointed out the existential threat a business faces if two thirds or half or even just a third of its customer base disappears. A ten percent drop in a business's sales might mean the difference between smiles and Christmas bonuses all around and a death spiral that five years later closes the business's doors as prices inexorably rise and profit margins plummet. My fear was, as ebook readers got better and better and physical book purchasers got fewer and fewer, that the economies of scale would not favor book publishing. I had imagined that as fewer and fewer people bought books, the unit cost would go up, it would no longer be profitable to print books, and both books and bookstores would go away.

Now that I've helped found a small press, I've learned the economics don't work that way.

Once I thought that Barnes and Noble and similar stores would shift to an on-demand model, with shelves filled with single copies of books and with book printing machines behind the counter, running your order for your chosen edition while you got a cappuccino in the bookstore's Starbucks, and, hey, maybe that will happen. But one thing I didn't anticipate was the ability for print on demand distributors to create an effective and useful FedEx-like just in time model, where books are printed essentially as they're needed, rather than enormous stocks being kept on hand - and the other thing I didn't anticipate was applying paper arts to book production to create a new category of books as art, encouraging a bite-sized reading model and a love of the physically printed word. Now, I don't know the details of Amazon's or Barnes and Noble's warehousing model. I do know that most of the books you see above were printed just in time for a recent event, and all of them represent departures from the traditional publishing model.

Some people have argued that we’ve hit the bottom of the bookstore market and it is getting better; it isn’t clear whether Barnes and Noble will survive, but local bookstores are having a comeback - but it’s not hard to look at the march of technology and to assume that things are going to HAVE to change. We no longer print books on scrolls, or parchment; the printing press disrupted the illustrated books model, and online news sources have dealt a serious blow to the newspaper industry - I wish I had a picture of all the newspaper boxes in Mountain View; there are a dozen of them at two or three places, and they don’t have any real newspapers in them anymore, just free magazines. This industry has collapsed radically within the last few years, and it’s hard not to think the same thing will happen to books as e-readers get better and better.

But technological updates are not always replacements. Phone screens are not a replacement for watching TV, and TV is not a replacement for movie theaters. I’d argue that more movies are watched on cell phones than at any time in history, and yet the most recent Star Wars movie has made something like a billion dollars from people going to an actual darkened room to watch the movie with friends and a bucket of popcorn. Similarly, movie theaters are not a replacement for actual theaters, plays performed with real humans in front of a live audience: even though movies have largely displaced plays, they haven’t displaced them completely. Perhaps one day they will, if only in the sense of exposing a wider audience to the experience of a play; but the experience you have watching a real human playing a role right in front of you is completely different than the experience of film.

The same thing is true of books. Sorry, e-reader folks: your interfaces are a joke. The contrast is poor, scrolling is slow, you can’t easily make notes or create bookmarks or - oh, I’m sorry, are you about to say that your bad low resolution stylus and awkward commenting interface and hard-to-discover notes and general lameness are somehow a replacement for flipping through a book, tossing in a piece of paper, and writing a brief note? Oh, go on, try it. I’ll write an essay before you’re done figuring out how to leave a comment. The point isn’t that this problem is technologically impossible to solve - it’s that right now, the people who make e-books aren’t even trying. They’re trying to increase contrast and resolution and battery life and page refresh rates and e-book distribution. The things I want out of books - that tactile sense, rapid note taking, rapid access, discoverability, the ability to stack a set of them in a pile as a reminder - are literally twenty or thirty years away. E-readers are, technologically, at the days of vector graphics, when real books provide you a tactile feel and a random access interface that’s superior to the best 3D TV.

One day they’ll get there. And one might assume that those awesome e-readers of the future, with all the books in history on them, in sharp color, with a fast random access - I imagine something that looks actually like a large paperback book, with a hundred or so flexible pages, all in glorious color that you can flip through, mark up, whatever, except you only have to carry one book - will kill traditional bookstores. But then I go into Barnes and Noble and see a section of vinyl records and go what the hell? There’s no way that you could have told me ten years ago that we’d be in a world where we’re not just likely to move past CD’s, but to move past iPods with local storage in favor of streaming, but that at the same time vinyl is having a resurgence. Supposedly this is because DJ’s like to scratch records, and audiophiles prefer the analog sound. Who knew?

And yet, at the same time, the production of books themselves is getting better and better. They’re being printed on better paper, with better typography, better book design, color covers, printed and embossed covers, the whole nine yards. As a publisher, I’ve been going around collecting new examples of awesomely printed books and just in the ten or so years I’ve been looking at this really closely the entire production process of books has become stellar and awesome. Sometimes I’m sad when I get an old book on a topic I like and open it up to find pages that look like they’re typed up on a typewriter. Back in the late 70’s, when Douglas Hofstadter published Gödel, Escher, Bach, it was possible to produce awesome books with awesome typesetting, but it was an epic struggle; Donald Knuth reportedly spent eight years developing TeX to help him produce The Art of Computer Programming. Now these tools are available to everyone with a computer - I’m a Word junkie, but even I recently downloaded MacTex to my computer while sitting in an internet cafe. Now anyone can produce something that’s truly awesome and get it printed on demand.

So I can’t see the future of books being anything but bright. Physical books are going to be around forever, at least as a niche product, and possibly more; they’re getting better all the time - but if they get replaced, it’s going to be by something even better, and even if they do get replaced en masse by something awesome, there will always be people who will love and preserve the printed medium forever, bibliophiles motivated by the same love as theatergoers, audiophiles, and lovers of fine art.

-the Centaur

An Outrage, But Hardly a Surprise


NhXQqR54nB8FNqCbC3aR7y-tzcU2OALTjtp-I6dVXt7m=w1137-h640-no.jpg

Recently one of my friends in the Treehouse Writers' group alerted me to the article "Sexism in publishing: my novel wasn't the problem, it was me, Catherine" in the Guardian. You should read it, but here's the punchline:

In an essay for Jezebel, Nichols reveals how after she sent out her novel to 50 agents, she received just two manuscript requests. But when she set up a new email address under a male name, and submitted the same covering letter and pages to 50 agents, it was requested 17 times.

“He is eight and a half times better than me at writing the same book. Fully a third of the agents who saw his query wanted to see more, where my numbers never did shift from one in 25,” writes Nichols. “The judgments about my work that had seemed as solid as the walls of my house had turned out to be meaningless. My novel wasn’t the problem, it was me – Catherine.”

Catherine Nichols' original article is up at Jezebel under the title Homme de Plume - go check it out - but the point of raising the article was to gather people's opinions. The exchange went something like this: "Opinions?" "Outrage?"

Yes, it's outrageous, but hardly a surprise. I've heard stories like this again and again from many women writers. (Amusingly, or perhaps horrifyingly, the program I'm writing this in, Ecto, just spell-corrected "women writers" to "some writers," so perhaps the problem is more pervasive than I thought). Science fiction authors Andre Norton, James Tiptree, Jr., C.J. Cherryh, Paul Ashwell and CL Moore all hid their genders behind male and neutral pseudonyms to help sell their work. Behind the scenes, prejudice against women authors is pervasive - and I'm not even referring to the disparaging opinions of the conscious misogynists who'll freely tell you they don't like fiction written by women, or the discriminatory actions of the unconsciously prejudiced who simply don't buy fiction written by women, but instead to the calculated discrimination, sometimes on the part of women authors, editors and publishers themselves, who feel the need to hide their gender to make sure their stories sell.

I am a guy, so I've never been faced with the problem of having to choose between acknowledging or suppressing my own gender in the face of the prejudices of those who would disparage my existence. (Though I have gotten a slight amount of flak for being a male paranormal romance author, we got around that by calling my work "urban fantasy," which my editor thought was a better description anyway). As a business decision, I respect any woman (or man) who chooses a pseudonym that will better market their work. My friend Trisha Wooldridge edits under Trisha Wooldridge, but writes under T. J. Wooldridge, not because publishers won't buy it, but because her publisher believes some of the young boys to whom her YA is aimed are less likely to read books by female authors. The counterexample might be J. K. Rowling, but even she is listed as J. K. Rowling and not Joanne because her publishers were worried young boys wouldn't buy her books. She's made something like a kabillion dollars under the name J. K. Rowling, so that wasn't a poor business decision (interestingly, Ecto just spell-corrected "decision" to "deception") but we'll never know how well she would have done had the Harry Potter series been published under the name "Joanne Rowling".

And because we'll never know, I feel it's high time that female authors became known for writing under their own names.

Now, intellectual honesty demands I unload a bit of critical thinking that's nagging at me. In this day and age, when we can't trust anything on the Internet, when real ongoing tragedies are muddled by people writing and publishing fake stories to push what would be otherwise legitimate agendas for which there's already enough real horrific evidence - I'm looking at you, Rolling Stone - we should always get a nagging feeling about this kind of story: a story where someone complains that the system is stacked against them. For example, in Bait and Switch Barbara Ehrenreich tried to expose the perils of job hunting … by lying about her resume, and then writing a book about how surprised she was she didn't get hired by any of the people she was lying to. (Hint, liars, just because it's not socially acceptable to call someone a liar doesn't mean we're not totally on to you - and yes, I mean you, you personally, the individual(s) who are lying to me and thinking they're getting away with it because I smile and nod politely.)

In particular, whenever someone complains that they're having difficulty getting published, there is always (or should be) this nagging suspicion in the back of your mind that the problem might be with the material, not the process - according to legend, one SF author who was having trouble getting published once called up Harlan Ellison (yes, THAT Harlan Ellison) and asked why he was having trouble getting published, to which Harlan responded, "Okay, write this down. You ready? You aren't getting published because your stories suck. Got it? Peace out." Actually, Harlan probably didn't say "peace out," and there may have been more curse words or HARSH TONAL INFLECTIONS that I simply can't represent typographically without violating the Peace Treaty of Unicode. So there's this gut reaction that makes us want to say, "so what if someone couldn't get published?"

But, taking her story at face value, what happened with Catherine Nichols was the precise opposite of what happened to Barbara Ehrenreich. When she started lying about her name, which in theory should have made things harder for her … she instead started getting more responses, which makes the prejudice against her seem even stronger. Even the initial situation she was in - getting rejections from over 50 publishers and agents - is something that happens over and over again in the history of publishing … but sooner or later, even the most patient stone is worn away. Legendary writing teacher John Gardner had a similar thought: "The writer sends out, and sends again, and again and again, and the rejections keep coming, whether printed slips or letters, and so at last the moment comes when many a promising writer folds his wings and drops." Or, in Nichols' own words:

To some degree, I was being conditioned like a lab animal against ambition. My book was getting at least a few of those rejections because it was big, not because it was bad. George [her pseudonym], I imagine, would have been getting his “clever”s all along and would be writing something enormous now. In theory, the results of my experiment are vindicating, but I feel furious at having spent so much time in that ridiculous little cage, where so many people with the wrong kind of name are burning out their energy and intelligence. My name—Catherine—sounds as white and as relatively authoritative as any distinctly feminine name could, so I can only assume that changing other ethnic and class markers would have even more striking effects.

So we're crushing women writers … or worse, pre-judging their works. The Jezebel article quotes Norman Mailer:

In 1998, Prose had dubbed bias against women’s writing “gynobibliophobia”, citing Norman Mailer’s comment that “I can only say that the sniffs I get from the ink of the women are always fey, old-hat, Quaintsy Goysy, tiny, too dykily psychotic, crippled, creepish, fashionable, frigid, outer-Baroque, maquillé in mannequin’s whimsy, or else bright and stillborn”.

Now, I don't know what Mailer was sniffing, but now that the quote is free floating, let me just say that if he can cram the ink from Gertrude Stein, Ayn Rand, Virginia Woolf, Jane Austen, Emily Dickinson, Patricia Briggs, Donna Tartt, Agatha Christie, J. K. Rowling and Laurell Hamilton into the same bundle of fey, old-hat smells, he must have a hell of a nose.

But Mailer's quote, which bins an enormous number of disparate reactions into a single judgment, looks like a textbook example of unconscious bias. As Malcolm Gladwell details in Blink, psychological priming prior to an event can literally change our experience of it: if I give you a drink in a Pepsi can instead of a Coke can, your taste experience will be literally different even if it's the same soda. This seems a bit crazy, unless you change the game a bit further and make the labels Vanilla Pepsi and Coke Zero: you can start to see how the same soda could seem flat if it lacks an expected flavor, or too sweet if you are expecting an artificial sweetener. These unconscious expectations can lead to a halo effect, where if you already think someone's a genius, you're more likely to credit them with more genius, when in someone else the same behavior might seem like eccentricity or arrogance. The only solution to this kind of unconscious bias, according to Gladwell, is to expose yourself to more and more of the unfamiliar stimulus, so that it seems natural, rather than foreign.

So I feel it's high time not only that female authors should feel free to write under their own names, but also that the rest of us should feel free to start reading them.

I'm never going to tell someone not to use a pseudonym. There are a dozen reasons to do it, from business decisions to personal privacy to exploring different personas. There's something weirdly thrilling about Catherine Nichols' description of her male pseudonym, her "homme de plume," whom she imagined “as a sort of reptilian Michael Fassbender-looking guy, drinking whiskey and walking around train yards at night while I did the work.”

But no-one should have to hide their gender just to get published. No-one, man or woman; but since women are having most of the trouble, that's where our society needs to do most of its work. Or, to give (almost) the last word to Catherine:

The agents themselves were both men and women, which is not surprising because bias would hardly have a chance to damage people if it weren’t pervasive. It’s not something a few people do to everyone else. It goes through all the ways we think of ourselves and each other.

So it's something we should all work on. That's your homework, folks: step out of your circle and read something different.

-the Centaur

Pictured: Some art by my wife, Sandi Billingsley, who thinks a lot about male and female personas and the cages we're put in.

Send Out Your Work


Screenshot 2015-06-07 15.41.25.png

Robert Heinlein famously had five rules for writing:

  1. Write.
  2. Finish what you start.
  3. Refrain from rewriting except to editorial order.
  4. Put your story on the market.
  5. Keep it on the market until sold.

with Robert Sawyer's addendum, #6: "Start working on something else."

Now, like all writing rules, these have limits. Take #3. Some authors write near-finished pieces on a first draft, but most don't. I've done that with a very few short pieces, but most of my pieces are complex enough to require several rewrites. As you get better and better at writing, it becomes easier and easier to produce an acceptable story right off the bat … so see rule #6.

Actually, there's a lot between rules #2 and #4. I revise a story until I feel it is ready to send to an editor … then I send it to beta readers instead, trusted confidants who can deliver honest but constructive criticism. When I feel like I've addressed the comments enough that I want to send it back to the betas, I don't; I send the story out to market instead.

Regardless, some stories won't ever sell. Many writers have a "sock drawer" of their early work (and many markets ask you not to send them socks). Trying to read my first Lovecraft pastiche, "Coinage of Cthulhu," causes me a jolt of almost physical pain. Other stories may be of an unusual length or type, and for a long odd-genre story it is indeed possible to exhaust all possible markets.

So what should you do with your odd socks? Some authors, like Harlan Ellison, are bold enough to share their very early work; other authors, like Ernest Hemingway, threw away ninety nine pages for each one published. Gertrude Stein reportedly shared her notebooks almost raw; Ayn Rand reportedly rewrote each page of Atlas Shrugged five or six times. So there's no right answer.

But again, it isn't that simple. I recently have been reviewing my work, and while I do have a few stories likely destined for the sock drawer, and a few stories which definitely need revision, there are others that I have never sent out, especially after a low point during graduate school when I got some particularly unhelpful criticism.

Many writers are creatures with delicate, butterfly-like egos … yet you need to develop an elephant's hide. Hemingway once said talking too much about the writer's craft could destroy it, literally like brushing the scales off a butterfly's wing; John Gardner said he'd seen far too many promising writers crushed by one too many rejections.

When a good editor (*cough* Debra Dixon, ℅ Bell Bridge Books) hits you with hard criticism on a story, she's not trying to crush your ego: she's trying to tell you that this character isn't fleshed out, or the logic breaks down, or the story is dragging - or moving too fast. But not everyone's a good editor. Not everyone's even a good critic.

I've encountered far too many critics who can't critique constructively: critics who try to be clever by turning legitimate comments into deadly bon mots; critics who try to change the story by questioning your purpose, genre or style; critics who have their own ax to grind, including one who sent me a diatribe about why I should throw out my television.

And there are friendly critics, critics who never say anything bad about your story. Some people would say you should ignore them, but I disagree. First, you need a cheerleader to feed that delicate ego you're sheltering within that elephant's hide; second, if even your ever chipper cheerleader doesn't like a particular story, you better sit up and take notice.

But the stories in my low point weren't like that. Many of them got good internal reviews, and I was happy with them, but they were long, or slipstream, and I couldn't find markets for them. Or I was too tied up with the idea of high-paying SFWA markets. Or, more honestly, I just got busy and short shrifted them. But that opens up the question: how deep into my backlog do I go?

For me, answering these questions usually involves creating an Excel spreadsheet :-) which you see above. Clearly there was a low point in the data where I wasn't submitting anything, and I was going to spin a story of how I got discouraged … but a closer analysis tells a different story.

Story Writing.png

The dates are approximate here, but mapping a sliding window over cumulative submissions, we can see a pattern where I started writing shorts, then had a first sale, followed by a burst of creativity on the heels of that encouragement. After a while, I got more and more discouraged, hitting rock bottom when I stopped sending shorts out at all … but this is only short story data.

Actually, I was working on a novel as well.

Before my first sale, "Sibling Rivalry", I'd written a novel, HOMO CENTAURIS. That burst of creativity of shorts came in graduate school, when I deliberately didn't want to take on another novel-length project. I did get discouraged, but at the same time, I started a novel, DELIVERANCE, and finished another two novels, FROST MOON and BLOOD ROCK.

FROST MOON sold right when my short story writing was picking up again. It feels like I quit, but the evidence shows that I slowly and steadily sold stories both to open markets and to invited anthologies until very recently - and that there are as many stories circulating now as I was selling earlier.

So, maybe some of these will make it. Maybe they won't. But the data shows that feeling discouraged is pointless - my biggest sales came after my longest stretch of doggedly sending stories out. My karate teacher once said that most of your learning is on the plateau - you feel stuck, but in reality you're learning. The data seems to bear that out.

So if I had to redo Heinlein's rules, they'd go something like this:

  1. Write.
  2. Keep writing.
  3. Finish what you start.
  4. Circulate your work to get feedback.
  5. Edit your work to respond to that feedback.
  6. Send your edited work out to the markets.
  7. Don't wait to hear back … start writing something else right away.
  8. Keep circulating your work until sold, or you've exhausted all the markets.
  9. No matter what happens, keep writing.
  10. And never, never, never give up.

Time to practice what I preach …

Screenshot 2015-06-12 20.35.38.png

...and put more stories out on the market.

Screenshot 2015-06-12 22.00.38.png

-the Centaur

P.S. Axually, I'm doing a step not listed above … responding to editorial feedback on CLOCKWORK. Responding to feedback is explicit on Heinlein's list as #3, but an implicit consequence of #8 on mine. If you sell something, listen to your editor, but keep a firm grip on your own vision. That's hard enough it needs its own article.

TWELVE HOURS LATER


Twelve Hours Later-Flierv3.png

I'm super stoked to announce that Jeremiah Willstone, my favorite steampunk heroine and protagonist of my forthcoming novel THE CLOCKWORK TIME MACHINE, will be appearing in two stories in the TWELVE HOURS LATER anthology!

Created by the wonderful folks at the Clockwork Alchemy writer's track, this anthology features twenty four short stories each focusing on a single hour of the day. My two stories are 3AM - "The Hour of the Wolf" - and 3PM - "The Time of Ghosts".

Here's a taste of what happened on Halloween of 1897 … at 3AM, the hour of the wolf:

Jeremiah Willstone ran full tilt down the alley, the clockwork wolf nipping at her heels.


Her weekend had started pleasantly enough: an evening’s liberty from the cloisters of Liberation Academy, a rattling ride into the city on a battered old mechanical caterpillar—and eluding the proctors for a walking tour of Edinburgh with a dish of an underclassman.


Late that night—or, more properly early Halloween morning—the couple had thrown themselves down on the lawn of the park, and his sweet-talk had promised far more than this ersatz picnic of woven candies and braided sweets; but before they’d found a better use for their Victoria blanket … Jeremiah’s eyes got them in trouble.


“Whatever is that?” she asked, sighting a glint running along the edge of the park.


“Just a rat,” Erskine said, proffering her another twisted cinnamon scone.


“Of brass?” Jeremiah asked, sitting up. “With glowing eyes, I note—”

Uh-oh! What have our heroes found? And what will happen later … at 3PM, the time of ghosts?

Half a mile under Edinburgh Castle, lost in a damp warren of ancient masonry lit only by his guttering candle, Navid Singhal-Croft, Dean of Applied Philosophy at Liberation Academy, wished he’d paid more attention to the ghost stories his cadets whispered about the tunnels.


Of course, that was his own fault: he led the college of sciences at the premiere military academy in the Liberated Territories of Victoriana, and he’d always thought it his duty to drum ghost stories out of the young men and women who were his charges, not to memorize them.


Now was the time, but where was the place? A scream echoed in the dark, very close—and eerily familiar. Shielding his candle with one hand, Navid ran through crumbling brick and flickering light, desperate to find his father before the “ghost” claimed another victim.


If he couldn’t rescue his father … Navid might never be born.

DUN DUN DUNNN! What's going to happen? You'll have to buy the anthology to find out!

Stay tuned to find out where to purchase it! I'm assuming that will be "everywhere".

Prevail, Victoriana!

-Anthony

Talent, Incompetence and Other Excuses


lenora at rest in the library with the excelsior

The company I work at is a pretty great place, and it's attracted some pretty great people - so if your name isn't yet on the list of "the Greats" it can sometimes be a little intimidating. There's a running joke that half the people at the firm have Impostor Syndrome, a pernicious condition in which people become convinced they are frauds, despite objective evidence of their competence.

I definitely get that from time to time - not just at the Search Engine That Starts with a G, but previously in my career. In fact, just about as far back as people have been paying me money to do what I do, I've had a tape loop of negative thoughts running through my head, saying, "incompetent … you're incompetent" over and over again.

Until today, when, as I was walking down the hall, I thought of Impostor Syndrome, thought of what my many very smart friends would say if I told them that, and thought of the response they would immediately give: not "you're wrong," which they of course might say, but instead "well, what do you think you need to do to do a good job?"

Then, in a brain flash, I realized incompetence is just another excuse people use to justify their own inaction.

Now, I admit there are differences in competence in individuals: some people are better at doing things than others, either because of experience, aptitude, or innate talent (more on that bugbear later). But unless the job is actually overwhelming - unless simply performing the task at all taxes normal human competence, and only the best of the best can succeed - being "incompetent" is simply an excuse not to examine the job, to identify the things that need doing, and to make a plan to do them.

Most people, in my experience, just want to do the things that they want to do - and they want to do their jobs the way they want to do them. If your job is well tuned towards your aptitudes, this is great: you can design a nice, comfortable life.

But often the job you want to do requires more of you than doing things the way you want to do them. I'm a night owl, I enjoy working late, and I often tool in just before my first midmorning meeting - but tomorrow, for a launch review of a product, I'll be showing up at work a couple hours early to make sure that everything is working before the meeting begins. No late night coffee for you.

Doing what's necessary to show up early seems trivial, and obvious, to most people who aren't night owls, but it isn't trivial, or obvious, to most people that they aren't doing what's necessary in many other areas of their lives. The true successes I know, in contrast, do whatever it takes: switching careers, changing their dress, learning new skills - even picking out the right shirts, if they have to meet with people, or spending hours shaving thirty seconds off their compile times, if they have to code software.

Forget individual differences. If you think you're "incompetent" at something, ask yourself: what would a "competent" person do? What does it really take to do that job? If it involves a mental or physical skill you don't have, like rapid mental arithmetic or a ninety-eight mile-per-hour fastball, then cut yourself some slack; but otherwise, figure out what would lead to success in the job, and make sure you do that.

You don't have to do those things, of course: you don't have to put on a business suit and do presentations. But that doesn't mean you're incompetent at giving presentations: it means you weren't willing to go to a business wear store to find the right suit or dress, and it means you weren't willing to go to Toastmasters until you learned to crack your fear of public speaking. With enough effort, you can do those things - if you want to. There's no shame in not wanting to. Just be honest about why.

That goes back to that other bugbear, talent.

When people find out I'm a writer, they often say "oh, it must take so much talent to do that." When I protest that it's really a learned skill, they usually say something a little more honest, "no, no, you're wrong: I don't have the talent to do that." What they really mean, though they may not know it, is that they don't want to put in the ten thousand hours worth of practice to become an expert.

Talent does affect performance. And from a very early age, I had a talent with words: I was reading soon after I started to walk. But, I assure you, if you read the stuff I wrote at an early age, you'd think I didn't have the talent to be a writer. What I did have was a desire to write, which translated into a heck of a lot of practice, which developed, slowly and painfully, into skill.

Talent does affect performance. Those of us who work at something for decades are always envious of those people who seem to take to something in a flash. I've seen it happen in writing, in computer programming, and in music: an experienced toiler is passed by a newbie with a shitload of talent. But even the talented can't go straight from raw talent to expert performance: it still takes hundreds or thousands of hours of practice to turn that talent into a marketable skill.

When people say they don't have talent, they really mean they don't have the desire to do the work. And that's OK. When people say they aren't competent to do a job, they really mean they don't want to think through what it takes to get the job done, or having done so, don't want to do those things. And that's OK too.

Not everyone has to sit in a coffeehouse for thousands of hours working on stories only to find that their best doesn't yet cut it. Not everyone needs to strum on that guitar for thousands of hours working on riffs only to find that their performance falls flat on the stage. Not everyone needs to put on that suit and polish that smile for thousands of hours working on sales only to find that they've lost yet another contract. No-one is making you do those things if you don't want to.

But if you are willing to put those hours in, you have a shot at the best selling story, the tight performance, the killer sale.

And a shot at it is all you get.

-the Centaur

Pictured: Lenora, my cat, in front of a stack of writing notebooks and writing materials, and a model of the Excelsior that I painted by hand. It's actually a pretty shitty paint job. Not because I don't have talent - but because I didn't want to put hundreds of hours in learning how to paint straight lines on a model. I had writing to do.

The Centaur’s Guide to the Game Developers Conference


gdc2013logo.png

Once again it’s time for GDC, the Game Developers Conference. This annual kickstart to my computational creativity is held in the Moscone Center in San Francisco, CA and attracts roughly twenty thousand developers from all over the world.

I’m interested primarily in artificial intelligence for computer games– “Game AI” – and in the past few years they’ve had an AI Summit where game AI programmers can get together to hear neat talks about progress in the field.

Coming from an Academic AI background, what I like about Game AI is that it can’t not work. The AI for a game must work, come hell or high water. It doesn’t need to be principled. It doesn’t need to be real. It can be a random number generator. But it needs to appear to work—it has to affect gameplay, and users have to notice it.

gdc2013aisummit.png

That having been said, there are an enormous number of things getting standard in game artificial intelligence – agents and their properties, actions and decision algorithms, pathfinding and visibility, multiple agent interactions, animation and intent communication, and so forth – and they’re getting better all the time.

I know this is what I’m interested in, so I go to the AI Summit on Monday and Tuesday, some subset of the AI Roundtables, other programming, animation, and tooling talks, and if I can make it, the AI Programmer’s Dinner on Friday night. But if game AI isn’t your bag, what should you do? What should you see?

gdc2013people.png

If you haven’t been before, GDC can be overwhelming. Obviously, try to go to talks that you like, but how do you navigate this enormous complex in downtown San Francisco? I’ve blogged about this before, but it’s worth a refresher. Here are a few tips that I’ve found improve my experience.

Get your stuff done before you arrive. There is a LOT to see at GDC, and every year it seems that a last-minute videoconference bleeds over into some talk that I want to see, or some programming task bumps the timeslot I set aside for a blogpost, or a writing task does the same. Try to get this stuff done before you arrive.

Build a schedule before the conference. You’ll change your mind the day of, but GDC has a great schedule builder that lets you quickly and easily find candidate talks. Use it, email yourself a copy, print one out, save a PDF, whatever. It will help you know where you need to go.

Get a nearby hotel. The 5th and Minna Garage near GDC is very convenient, but driving there, even just in the City, is a pain. GDC hotels are booked up several months in advance, but if you hunt on Expedia or your favorite aggregator you might find something. Read the reviews carefully and double-check with Yelp so you don’t get bedbugs or mugged.

Check in the day before. Stuff starts really early, so if you want to get to early talks, don’t even bother to fly in the same day. I know this seems obvious, but this isn’t a conference that starts at 5pm on the first day with a reception. The first content-filled talks start at 10am on Monday. Challenge mode: you can check in Sunday if you arrive early enough.

mozcafe.png

Leave early, find breakfast. Some people don’t care about food, and there are snacks onsite. Grab a croissant and cola, or banana and coffee, or whatever. But if you power up via a good hot breakfast, there are a number of great places to eat nearby – the splendiferous Mo’z Café and the greasy spoon Mel’s leap to mind, but hey, Yelp. A sea of GDC people will be there, and you’ll have the opportunity to network, peoplewatch, and go through your schedule again, even if you don’t find someone to strike up a conversation with.

Ask people who’ve been before what they recommend. This post got started when I left early, got breakfast at Mo’z, and then let some random dude sit down on the table opposite me because the place was too crowded. He didn’t want to disturb my reading, but we talked anyway, and he admitted: “I’ve never been before? What do I do?” Well, I gave him some advice … and then packaged it up into this blogpost. (And this one.)

Network, network, network. Bring business cards. (I am so bad at this!) Take business cards. Introduce yourself to people (but don’t be pushy). Ask what they’re up to. Even if you are looking for a job, you’re not looking for a job: you want people to get to know you first before you stick your hand out. Even if you’re not really looking for a job, you are really looking for a job, three, five or ten years later. I got hired into the Search Engine that Starts with a G from GDC … and I wasn’t even looking.

Learn, learn, learn. Find talks that look like they may answer questions related to problems that you have in your job. Find talks that look directly related to your job. Find talks that look vaguely related to your job. Comb the Expo floor looking for booths that have information even remotely related to your job. Scour the GDC Bookstore for books on anything interesting – but while you’re here: learn, learn, learn.

gdc2013expofloor.png

Leave early if you want lunch or dinner. If you don’t care about a quiet lunch, or you’ve got a group of friends you want to hang with, or colleagues you need to meet with, or have found some people you want to talk to, go with the flow, and feel comfortable using your 30 minute wait to network. But if you’re a harried, slightly antisocial writer with not enough hours in the day needing to work on his or her writing projects aaa aaa they’re chasing me, then leave about 10 minutes before the lunch or dinner rush to find dinner. Nearby places just off the beaten path like the enormous Chevy’s or the slightly farther ’wichcraft are your friends.

Find groups or parties or events to go to. I usually have an already booked schedule, but there are many evening parties. Roundtables break up with people heading to lunch or dinner. There may be guilds or groups or clubs or societies relating to your particular area; find them, and find out where they meet or dine or party or booze. And then network.

gdc2013roundtables.png

Hit Roundtables in person; hit the GDC Vault for conflicts. There are too many talks to go to. Really. You’ll have to make sacrifices. Postmortems on classic games are great talks to go to, but pro tip: the GDC Roundtables, where seasoned pros jam with novices trying to answer their questions, are not generally recorded. All other talks usually end up on the GDC Vault, a collection of online recordings of all past sessions, which is expensive unless you…

Get an All Access Pass. Yes, it is expensive. Maybe your company will pay for it; maybe it won’t. But if you really are interested in game development, it’s totally worth it. Bonus: if you come back from year to year, you can get an Alumni discount if you order early. Double bonus: it comes with a GDC Vault subscription.

gdc2013chevys.png

Don’t Commit to Every Talk. There are too many talks to go to. Really. You’ll have to make sacrifices. Make sure you hit the Expo floor. Make sure you meet with friends. Make sure you make an effort to find some friends. Make time to see some of San Francisco. Don’t wear yourself out: go to as much as you can, then soak the rest of it in. Give yourself a breather. Give yourself an extra ten minutes between talks. Heck, leave a talk if you have to if it isn’t panning out, and find a more interesting one.

Get out of your comfort zone. If you’re a programmer, go to a design talk. If you’re a designer, go to a programming talk. Both of you could probably benefit from sitting in on an audio or animation talk, or to get more details about production. What did I say about learn, learn, learn?

Most importantly, have fun. Games are about fun. Producing them can be hard work, but GDC should not feel like work. It should feel like a grand adventure, where you explore parts of the game development experience you haven’t before, an experience of discovery where you recharge your batteries, reconnect with your field, and return home eager to start coding games once again.

-the Centaur

Pictured: The GDC North Hall staircase, with the mammoth holographic projected GDC logo hovering over it. Note: there is no mammoth holographic projected logo. After that, breakfast at Mo'z, the Expo floor, the Roundtables, and lunch at Chevy's.

Approaching 33, Seen from 44


33-to-44.png

I operate with a long range planning horizon – I have lists of what I want to do in a day, a week, a month, a year, five years, and even my life. Not all my goals are fulfilled, of course, but I believe in the philosophy “People overestimate what they can do in a year, but underestimate what they can do in a decade.”

Recently, I’ve had that proven to me.

I’m an enormous packrat, and keep a huge variety of old papers and materials. Some people deal with clutter by adopting the philosophy “if you haven’t touched it in six months, throw it away.” Clearly, these people don’t write for a living.

So, in an old notebook, uncovered on one of my periodic archaeological expeditions in my library, I found an essay – a diary entry, really – written just before my 33rd birthday, entitled “Approaching 33” – and I find its perspective fascinating, especially when you compare what I was worried about then with where I am now.

“Approaching 33” was written on the fifth of November, 2001. That's about five years after I split with my ex-fiancee, but a year before I met my future wife. It's about a year after I finished my nearly decade-long slog to get my PhD, but ten years before I got a job that truly used my degree. It's about seven months after I reluctantly quit the dot-com I helped found to care for my dying father, but only about six months after my Dad actually died. And it's about 2 months after 9/11, and about a month after disagreements over 9/11 caused huge rifts among my friends.

In that context, this is what I wrote on the fifth of November, 2001:

Approaching 33, your life seems seriously off-track. Your chances of following up on the PhD program are minimal – you will not get a good faculty job. And you are starting too late to tackle software development; you are behind the curve. Nor are you on track for being a writer.

The PhD program was a complete mistake. You wasted ten years of your life on a PhD and on your ex-fiancee. What a loser.

Now you approach middle fucking age – 38 – and are not on the career track, are not on the runway. You are stalled, lacking the crucial management, leadership and discipline skills you need to truly succeed.

Waste not time with useful affirmations – first understand the problem, set goals, fix things and move on. It is possible, only if you face clearly the challenges which are ahead of you.

You need to pick and embrace a career and a secondary vocation – your main path and your entertainment – in order to advance at either.

Without focus, you will not achieve. Or perhaps you are FULL OF SHIT.

Think Nixon. He had major successes before 33, but major defeats and did not run for office until your age. You can take the positive elements of his example – learn how to manage now, learn discipline now, learn leadership now, by whatever means are morally acceptable.

Then get a move on your career – it is possible. Do what you gotta do and move on with your life!

It appears I was bitter.

Apparently I couldn’t emotionally imagine I could succeed, but recognized, intellectually, that if I focused on what was wrong, and worked at it, then maybe, just maybe, I could fix it. And in the eleven years that have passed … I mostly have.

Eleven years ago, I was enormously bitter, and regretted getting my PhD. It took five years, but that PhD and my work at my search-engine dot-com helped land me a great job, and after five more years of work I ended up at a job within that job that used every facet of my degree, from artificial intelligence to information retrieval to robotics to even computer graphics. My career took a serious left turn, but I never gave up trying, and eventually, I succeeded as a direct result of trying.

Eleven years ago, I felt enormously alone, having wasted a lot of time on a one-sided relationship that should have ended naturally after its first year, and having wasted many years after that either alone or hanging on to other relationships that were doomed not to work. But I never stopped looking, and hoping, and it took another couple of years before I found my best friend, and later married her.

Eleven years ago, I felt enormously unsure of my abilities as a software developer. At the dot-com I willingly stepped back from a software lead role when I was asked to deliver on an impossible schedule, a decision that was proved right almost immediately, and later took a quarter’s leave to finish my PhD, a decision that took ten years to prove itself. But even though both of those decisions were right, they started a downward spiral of self-confidence, as we sought out and brought in faster, more experienced developers to take over when I stepped back. While my predictions about the schedule were right, my colleagues nevertheless got more done, more quickly, ultimately culling out almost all of the code I wrote for the company. After a while, I felt I was contributing no more and, at the same time, needed to care for my dying father, so I left. But my father died shortly thereafter, six months before we expected. I found myself unable not to work, since not working felt irresponsible even though I had savings, so I found a job at a software company whose technical lead was an old friend who had been the fastest programmer I’d ever worked with in college, and who now had a decade of experience programming in industry – which is far more rigorous than programming in academia. On top of that, I was still recuperating from an RSI scare I’d had four years earlier, when I’d barely been able to write for six months, much less type. So I wrote those bitter words above when I was quite uncertain about whether I’d be able to cut it as a software developer.

Eleven years later — well, I still wish I could code faster. I’m surrounded by both younger and older programmers who are faster and snappier than I am, and I frequently feel like the dumbest person in the room. But I’ve worked hard to improve, and on top of that, slowly, I’ve come to recognize that I have indeed learned a few things – usually, the hard way, when I let someone talk me out of what I’m sure I know, and am later proved right – and have indeed picked up a few skills – synthetic and organizational skills, subtle and hard to measure, which aren’t needed for a small chunk of code but which are vital as projects grow larger in size and design docs and Gantt charts are needed to keep everything on track. I’d still love to code faster, to get up to speed faster, to be able to juggle more projects at once. But I’m learning, and I’ve launched things as a result of what I’ve learned.

But the most important thing is that I’ve been writing. A year after I wrote that note, I gave National Novel Writing Month a try for the first time. I spent years trying to perfect my craft after that, ultimately finding a writing group focused just on writing and not on critique. Five years later, I gave National Novel Writing Month another try, and wrote FROST MOON, which went on to both win some minor awards and to peak high on a few minor bestseller lists. Five years after that, I’ve finished four novels, have starts to four more, and am still writing.

I have picked my vocation and avocation – I’m a computer programmer, and a writer. I actually think of it as having two jobs, a day job and a night job. At one point I thought I was going to transition to writing full time, and I still plan to, but then my job at work became tremendously exciting. Ten years from now, I hope to be a full time writer (and I already have my next “second job” picked out) but I’m in no rush to leave my current position; I’m going to see where it takes me. I learned that long ago when I had a chance to knuckle down and finish my PhD, or join an unrelated but exciting side project to build a robot pet. The choice to work on the emotion model for that pet indirectly landed me a job at two different search engines, even though it was the skills I learned in my PhD that I was ultimately hired for. The choice to keep working on that emotion model directly led to my current dream job, which is one of the few jobs in the world that required the combined skills of my PhD and side project. Now I’m going to do the same thing: follow the excitement.

Who knows where it will lead? Maybe it will help me develop the leadership skills that I complained about in “Approaching 33.” Maybe it will help me re-awaken my research interests and lead to that faculty job I wanted in “Approaching 33.” Maybe it will just help me build a nest egg so when I finally switch to writing full time, I can pursue it with gusto. Or maybe, just maybe, it’s helping me learn things I can’t even yet imagine how I’ll be using … when I turn 55.

After I sign off this blogpost, I’m going to write “Passing 44.” Most of that’s going to be private, but I can anticipate it. I’ll complain about problems I want to fix with my writing – I want it to be more clear, more compelling, more accessible. I’ll complain about problems I want to fix at work – I want to work faster, to ramp up more quickly, and to juggle more projects well while learning when to say no. And I’ll complain about martial arts and athletics – I want to ramp up working out, to return to running, and to resume my quest for a black belt. And there are more things I want to achieve – to be a better husband, friend, pet owner, person – a lot of which I’m going to keep private until I write “Passing 44, seen from 55.”

I’m going to set bigger goals for the next ten years. Some of them might not come to pass, of course. I bet a year from now, I’ll have only seen the barest movement along some of those axes. But ten years from now … the sky’s the limit.

-the Centaur

Pictured: Me at 33 on the left, me at 44 on the right, over a backdrop shot at my home at 44, including a piece of art by my wife entitled "Petrified Coral".

Me and my dumb mouth

centaur 0

Screen shot 2012-11-23 at 11.33.42 PM.png

Axually, it's Dakota's dumb mouth at issue here, and while I'd love to include an extract ... ssh, SPOILERS! But the point being, the day after Thanksgiving, I'm back on track for National Novel Writing Month. And this includes an evening hanging out with my friends at the wonderful Nola restaurant I'm so fond of. No pictures of that (phone battery gave out) but I do have a followup picture from my solo excursion to Cocola Cafe in Santana Row, where I finished out today's Nano:

IMG_20121123_230047.jpg

I've done Nano enough times that I probably could have skipped today and even tomorrow if I wanted, just to hang out with my friends who are in town (staying at another friend's house). But this "vacation" isn't really a vacation for me: it's a writecation. Writing really is like a second job now: if I want to be a writer, certain things have to get done. In this case, it's Nano, and sending off acceptances and rejections for DOORWAYS TO EXTRA TIME:

Screen shot 2012-11-23 at 11.48.52 PM.png

You'll note a little asymmetry there: my coeditor, who's done this before, is way ahead of me contacting people about their stories. And those are just the acceptances. Argh. And then I've got to respond to Trish's comments on my own story, which, while I was proud of it before, now looks like it will need a lot of work. Sigh. This is why I like working with editors, I tell myself, they make my stories better. Sob. At least Nano is on track:

spectral-iron-day23-progress-2.png

Of course, the second half of the story is a complete salsa, and I don't know where it's going, but there's a building, and it's on fire, and it's a spectral fire that only starts once a year, and there's William Blake's spirit guide riding a tiger, and oh yeah Cinnamon wears a Santa hat, then threatens to punch him in the gut if she meets him in a dark alley. So yeah, I'm having fun, even if I briefly hit a little plateau there while recuperating from all that turkey.

spectral-iron-day23-progress-1.png

Now, more mountain to climb! Onward!

-the Centaur

Blitzing 24 Hour Comics Day 2012

centaur 0


stranded24hcd2012.png

24 Hour Comics Day is a challenge to create 24 pages of a new comic in 24 hours. The challenge was originally conceived by comics whiz Scott McCloud in 1990 and organized into a formal day by Nat Gertler in 2004. Now, eight years later, 24HCD is a global event in which thousands of people participate.

My first two tries at 24 Hour Comics Day were miserable failures in 2009 and 2010. My good friend Nathan Vargas also failed, and we started putting our heads together about how to succeed. For me, pulling a Jim Lee and taking a year off to massively cram at being a great artist might merit an angry note from my mortgage service provider, so we needed other options.

We analyzed how we failed, developed strategies and tutorial materials, and ultimately produced the Blitz Comics Survival Kit --- not called 24 Hour Comics Survival Kit because we didn't want to look like we were providing "official" materials; the Survival Kit was just our take on how to succeed, and we didn't even know whether it would work, because we hadn't done it yet.

As it turns out, the techniques in the Kit did work in 2011, not just for Nathan and me but for a wide variety of other people as well. Nathan has worked hard to promote the ideas and concepts in the Kit while I've been a slack-ass lazy bum writing novels, and thanks to his hard work Comics PRO now distributes our materials as Participant Resources. But was our success a fluke?

Well, to test the theory, we tried it again. A few months beforehand we reviewed our exercises and updated the Survival Kit, though website problems prevented us from updating the materials everywhere in time for the 24HCD event. We re-ran the tutorial we'd done before, and practiced a month or so in advance, cracking our knuckles so to speak, to get ready ...

Because yesterday was 24 Hour Comics Day.


missioncomics24hcd2012.jpg

We both succeeded, of course; me around 8:20am and Nathan an hour and a half or so later. It was great to participate at the always wonderful Mission Comics, but unlike previous years where we were too zonked to think at all, this year we had an interesting and lively conversation about what we did, why we did it, why we're doing this, and how to make it better in the future.

And unlike last year, we're planning to meet next week, rather than a few months from now. Hopefully there will be some great stuff to show you - such as our comics, which we finally may have a strategy to get online without fixing the server error that's been such a pain in the patootie. Next up: a 24 Hour Comics Day Timeline, like last year's. Stay tuned.

Now, home to bed, because at this point I've been up 32 and a half hours straight!

-the Centaur

Pictured: the last page of my 2012 24 Hour Comic, "Stranded Part 2", my adaptation of my own story "Stranded," published in the book STRANDED. Got that? Also pictured is a bunch of writers at Mission Comics and Art. Thanks Leef!

Prometheus is the movie you show your kids to teach them how not to do science

centaur 0

promvsthingalt.png  

Too Diplomatic for My Own Good

I recently watched Ridley Scott's Prometheus. I wanted to love it, and ultimately didn't, but this isn't a post about how smart characters doing dumb things to advance a plot can destroy my appreciation of a movie. Prometheus is a spiritual prequel to Alien, my second favorite movie of all time, and Alien's characters often had similar afflictions, including numerous violations of the First Rule of Horror Movies: "Don't Go Down a Dark Passageway Where No One Can Hear You if You Call For Help". Prometheus is a big, smart movie filled with grand ideas, beautiful imagery, grotesque monsters and terrifying scares. If I'd seen it before seeing a sequence of movies like Alien maybe I would have cut it more slack.

I could also critique its scientific accuracy, but I'm not going to do that. Prometheus is a space opera: very early on in the movie we see a starship boldly plying its way through the deeps, rockets blazing as it shoots towards its distant destination. If you know a lot of science, that's a big waving flag that says "don't take the science in this movie too seriously." If you want hard science, go see Avatar. Yes, I know it's a mystical tale featuring giant blue people, but the furniture of the movie --- the spaceship, the base, the equipment they use --- is so well thought out it could have been taken from Hal Clement. Even concepts like the rock-lifting "flux tube," while highly exaggerated, are based on real scientific ideas. Prometheus is not Avatar. Prometheus is like a darker cousin to Star Trek: you know, the scary cousin from the other branch you only see at the family Halloween party, the one that occasionally forgets to take his medication. He may have flunked college physics, but he can sure spin a hell of a ghost story.

What I want to do is hold up Prometheus as a bad example of how to do science. I'm not saying Ridley Scott or the screenwriters don't know science, or even that they didn't think of or even film sequences which showed more science, sequences that unfortunately ended up on the cutting room floor --- and with that I'm going to shelve my caveats. What I'm saying is that the released version of Prometheus presents a set of characters who are really poor scientists, and to show just how bad they are I'd like to compare them with the scientists in the 2011 version of The Thing, who, in contrast, do everything just about right.

But Wait ... What's a "Scientist"?

Good question. You can define them by what they do, which I'm going to try to do with this article.

But one thing scientists do is share their preliminary results with their colleagues to smoke out errors before they submit work for publication. While I make a living twiddling bits and juggling words, I was trained as (and still fancy myself) a scientist, so I shared an early version of this essay with colleagues also trained as scientists --- and one of them, a good friend, pointed out that there's a whole spectrum of real-life scientists, from the careful to the irresponsible to the insane.

He noted "there's the platonic ideal of the Scientist, there's real-life science with its dirty little secrets, and then there's Hollywood science which is often and regrettably neither one of the previous two." So, to be clear, what I'm talking about when I say scientist is the ideal scientist, Scientist-with-a-Capital-S, who does science the right way.

But to understand how the two groups of scientists in the two movies operate ... I'm going to have to spoil their plots.

Shh ... Spoilers

SPOILERS follow. If you don't want to know the plots of Prometheus and The Thing, stop reading as there are SPOILERS.

Both Prometheus and The Thing are "prequels" to classic horror movies, but the similarities don't stop there: both are stories about scientific expeditions to a remote place to study alien artifacts that prove unexpectedly dangerous when virulent, mutagenic alien life is found among the ruins. The Thing even begins with a tractor plowing through snow towards a mysterious, haunting signal, a shot which makes the tractor and its track look like a space probe rocketing towards its target --- a shot directly paralleling the early scenes of Prometheus that I mentioned earlier.

Both expeditions launch in secrecy, understandably concerned someone might "scoop" the discovery, and so both feature scientists "thrown in at the deep end" with a problem. Because they're both horror movies challenging humans with existential threats, and not quasi-documentaries about how science might really work, both groups of scientists must first practice science in a "normal" mode, dealing with the expectedly unexpected, and then must shift to "abnormal" mode, dealing with unknown unknowns. "Normal" and "abnormal" science are my own definitions for the purpose of this article, to denote the two different modes in which science seems to get done in oh so many science fiction and horror movies --- science in the lab, and science when running screaming from the monster. However, as I'll explain later, even though abnormal science seems like a feature of horror movies, it's actually something real scientists have a lot of experience with in the real world.

But even before the scientists in Prometheus shift to "abnormal" mode --- heck, even before they get to "normal" mode --- they go off the rails: first in how they picked the project in the first place, and second, in how they picked their team.

Why Scientists Pick Projects

You may believe Earth's Moon is made of cheese, but you're unlikely to convince NASA to dump millions into an expedition to verify your claims. Pictures of a swiss cheese wheel compared with the Moon's pockmarked surface won't get you there. Detailed mathematical models showing the correlations between the distribution of craters and cheese holes are still not likely to get you a probe atop a rocket; at best you'll get some polite smiles, because that hypothesis contradicts what we already know about the lunar surface. If, on the other hand, you cough up a spectrograph reading showing fragments of casein protein spread across the lunar surface, side by side with replication by an independent lab --- well, get packing, you're going to the Moon. What I'm getting at is that scientists are selective in picking projects --- and the more expensive the project, the more selective they get.

In one sense, science is the search for the truth, but if we look at the history of science, it isn't about proving the correctness of just any old idea: ideas are a dime a dozen. Science isn't about validating random speculations sparked by noticing that different things look similar - for every alignment between the shoreline of Africa and South America that leads to a discovery like plate tectonics, there's a spurious match between the shape of the Pacific and the shape of the Moon that leads nowhere. (Believe it or not, this theory, which sounds ridiculous to us now, was a serious contender for the origin of the Moon for many years, first proposed in 1881 by Osmond Fisher). Science is about following leads --- real evidence that leads to testable predictions, like not just a shape match between continents, but actual rock formations which are mirrored, down to their layering and fossils.

There's some subtlety to this. Nearly everybody who's not a scientist thinks that science is about finding evidence that confirms our ideas. Unfortunately, that's wrong: humans are spectacularly good at latching on to evidence that confirms our ideas and spectacularly bad at picking up on evidence that disconfirms them. So we teach budding scientists in school that the scientific method depends on finding disconfirming evidence that proves bad ideas wrong. But experienced scientists funding expeditions follow precisely the opposite principle, at least at first: we need to find initial evidence that supports a speculation before we follow it up by looking for disconfirming evidence.

That's not to say an individual scientist can't test out even a wild and crazy idea, but even an individual scientist only has one life. In practice, we want to spend our limited resources on likely bets. For example, Einstein spent the entire latter half of his life trying to unify gravitation and quantum mechanics, but he'd probably have been better off spending a decade each on three problems rather than spending thirty years in complete failure. When it gets to a scientific expedition with millions invested and lives on the line, the effect is more pronounced. We can't simply follow every idea: we need good leads.

Prometheus fails this test, at least in part. The scientists begin with a good lead: in a series of ancient human cultures, none of whom have had prior contact, they find almost identical pictures, all of which depict an odd tall creature pointing to a specific constellation in the sky not visible without a telescope, a constellation with a star harboring an Earthlike planet. As leads go, that's pretty good: better than mathematical mappings between Swiss cheese holes and lunar crater sizes, but not quite as good as a spectrograph reading. It's clearly worth conducting astronomical studies or sending a probe to learn more.

But where the scientists fail is they launch a trillion dollar expedition to investigate this distant planet, an expedition which, we learn later, was actually bankrolled not because of the good lead but because of a speculation by Elizabeth, one of the paleontologists, that the tall figure in the ancient illustration is an "Engineer" who is responsible for engineering humanity, thousands of years ago. This speculation is firmly back in the lunar cheese realm because, as one character points out, it contradicts an enormous amount of biological evidence. What makes it worse is that Elizabeth has no mathematical model or analogy or even myth to point to on why she believes it: she says she simply chooses to believe it.

If I were funding the Prometheus expedition, I'd have to ask: why? Simply saying she later proves to be right is no answer: right answers reached the wrong way still aren't good science. Simply saying she has faith is not an answer; that explains why she continues to hold the belief, but not how she formed it in the first place. Or, more accurately, how she justified her belief: as one of my colleagues reading this article pointed out, it really doesn't matter why she came to believe it, only how she came to support it. After all, the chemist Kekulé supposedly figured out benzene's ring shape after dreaming about a snake biting its tail --- but he had a lot of accumulated evidence to support that idea once he had it. So, what evidence led Elizabeth to believe that her intuition was correct?

Was there some feature of the target planet that makes it look like it is the origin of life on Earth? No, from the descriptions, it doesn't seem Earthlike enough. Was there some feature of the rock painting that makes the tall figures seem like they created humans? No, the figure looks more like a herald. So what sparked this idea in her? We just don't know. If there was some myth or inscription or pictogram or message or signal or sign or spectrogram or artifact that hinted in that direction, we could understand the genesis of her big idea, but she doesn't tell us, even though she's directly asked, and has more than enough time to say why using at least one of those words. Instead, because the filmmakers are playing with big questions without really understanding how those kinds of questions are asked or answered, she just says it's what she chooses to believe.

But that's not a good reason to fund a trillion dollar scientific expedition. Too many people choose to believe too many things for us to send spacecraft to every distant star that someone happens to wish upon --- we simply don't have enough scientists, much less trillions. If you want to spend a trillion dollars on your own idea, of course, please knock yourself out.

Now, if we didn't know the whole story of the movie, we could cut them slack based on their other scientific lead, and I'll do so because I'm not trying to bash the movie, but to bash the scientists that it depicts. And while for the rest of this article I'm going to be comparing Prometheus with The Thing, that isn't fair in this case. The team from Prometheus follows up a scientific lead for a combination of reasons, one pretty good, one pretty bad. The team from The Thing finds a fricking alien spacecraft, or, if you want to roll it back further, they find an unexplained radio signal in the middle of a desert which has been dead for millions of years and virtually uninhabited by humans in its whole history. This is one major non-parallel between the two movies: unlike the scientists of Prometheus, who had to work hard for their meager scraps of leads, the scientists in The Thing had their discovery handed to them on a silver platter.

How Scientists Pick Teams

Science is an organized body of knowledge based on the collection and analysis of data, but it isn't just the product of any old data collection and analysis: it's based on a method, one which analyzes empirical data objectively in a way which can be readily duplicated by others. Science is subtle and hard to get right. Even smart, educated, well-meaning people can fool themselves, so it's important for the people doing it to be well trained so that common mistakes in evidence collection and reasoning can be avoided.

Both movies begin with real research to establish the scientific credibility of the investigators. Early in Prometheus, the scientists Elizabeth and Charlie are shown at an archaeological dig, and later the android David practices some very real linguistics --- studying Schleicher's Fable, a highly speculative but non-fictional attempt to reconstruct early human languages --- to prepare for a possible meeting with the Engineers that Elizabeth and Charlie believe they've found. Early in The Thing, Edvard's team is shown carefully following up on a spurious radio signal found near their site, and the paleontologist Kate uses an endoscope to inspect the interior of a specimen extracted from pack ice (just to be clear, one not related to Edvard's discovery).

But in Prometheus, things almost immediately begin to go wrong. The team which made the initial discovery is marginalized, and the expedition to study their results is run by a corporate executive, Meredith, who selects a crew based on personal loyalty or willingness to accept hazard pay. Later, we find there are good reasons why Meredith picked who she did --- within the movie's logic, well worth the trillion dollars her company spent bankrolling the expedition --- but those criteria aren't scientific, and they produce an uninformed, disorganized crew whose expedition certainly explores a new world, but doesn't really do science.

The lead scientist of The Thing, Edvard, in contrast, is a scientist in charge of a substantial team on a mission of its own when he makes the discovery that starts the movie. He studies it carefully before calling in help, and when he does call in help, he calls in a close friend --- Sander, a dedicated scientist in his own right, so world-renowned that Kate recognizes him on sight. Sander in turn selects Kate based on another personal recommendation, because he's trying to select a team of high caliber. Sander clashes with Kate when she questions his judgment, but these are just disagreements and don't lead to foul consequences.

In short, The Thing picks scientists to do science, and this difference from Prometheus shows up almost immediately in how they choose to attack their problems.

Why Scientists Don't Bungee Jump Into Random Volcanoes

Normal science is the study of things that aren't unexpectedly trying to kill you. There may be a hazardous environment, like radiation or vacuum or political unrest, and your subject itself might be able to kill you, like a virus or a bear or a volcano, but in normal science, you know all this going in, and can take adequate precaution. Scaredycats who aren't willing to study radioactive bears on the surface of Mount Explodo while dodging the rebel soldiers of Remotistan should just stay home and do something safe, like simulate bear populations on their laptops using Mathematica. The rest of us know the risks.

Because risk is known, it's important to do science the right way. To collect data not just for the purposes of collecting it, but to do so in context. If I've seen a dozen bees today, what conclusions can you draw? None. You don't know if I'm in a jungle or a desert or even if I'm a beekeeper. Even if I told you I was a beekeeper and I'd just visited a hive, you don't even know if a dozen bees is a low number, a high number, or totally unexpected. Is it a new hive just getting started, or an old hive dying out? Is it summer or winter? Did I record at noon or midnight? Was I counting inside or outside the hive? Even if you knew all that, you can interpret the number better if you know the recent and typical statistics for beehives in that region, plus maybe the weather, plus ...

What I'm getting at is that it does you no good as a scientist to bungee jump into random volcanoes to snap pictures of bubbling lava, no matter how photogenic that looks on the cover of National Geographic or Scientific American. Science works when we record observations in context, so we can organize the data appropriately and develop models of its patterns, explanations of its origins and theories about its meaning. Once again, there's a big difference in the kind of normal-science data collection depicted in Prometheus and The Thing. With one or two notable exceptions, the explorers in Prometheus don't do organized data collection at all - they blunder around almost completely without context.

How (Not) to Do Normal Science

In Prometheus, after spending two whole years approaching the alien world LV-223, the crew lands and begins exploring without more than a cursory survey. We know this because the ship arrives on Christmas, breaks orbit, flies around seemingly at random until one of our heroes leaps from his chair because he's sighted a straight-line formation, and then the ship lands, disgorging a crew of explorers eager to open their Christmas presents. We can deduce from this that less than a day has passed from arrival to landing, which is not enough time to do enough orbits to complete a full planetary survey. We can furthermore deduce that the ship had no preplanned route because then the destination would not have been enough of a surprise for our hero to leap out of his chair (despite the seat-belt sign) and redirect the landing. Once the Prometheus lands, the crew performs only a modest atmospheric survey before striking out for the nearest ruin. In true heroic space opera style this ruin just happens to have a full stock of all the interesting things that they might want to encounter, and as a moviegoer, I wasn't bothered by that. But it's not science.

Planets are big. Really big. The surface area of the Earth is half a billion square kilometers. The surface area of a smaller world, one possibly more like LV-223, is just under a hundred fifty million square kilometers. You're not likely to find anything interesting just by wandering around for a few hours at roughly the speed of sound. The crew is shown to encounter a nasty storm because they don't plan ahead, but even an archaeological site is too big to stumble about hoping to find something, much less the mammoth Valley of the Kings style complex the Prometheus lands in. Here the movie both fails and succeeds at showing the protagonists doing science: they blunder out on the surface despite having perfectly good mapping technology (well, since this is one of my actual areas of expertise, I'll say really awesome mapping technology), which they later use to map the inside of a structure, enabling one of the movie's key discoveries. (The other key discovery is made as a result of David spending two years studying ancient languages so he can decipher and act on alien hieroglyphs, and he has his own motives for deliberately keeping the other characters in the dark, so props to the filmmakers there: he's doing bad science for his team, but shown to be doing good science on his own, for clearly explained motives).

SO ANYWAY, a scientific expedition would have been mapping from the beginning to provide context for observations and to direct explorations. A scientific expedition would have released an army of small satellites to map the surface; left them up to predict weather; launched a probe to assess ground conditions; and, once they landed, launched that awesome flock of mapping drones to guide them to the target. The structure of the movie could have remained the same - and still shown science.

The Thing provides an example of precisely this behavior. The explorers in The Thing don't stumble across it. They're in Antarctica on a long geological survey expedition to extract ice cores. They've mapped the region so thoroughly that spurious radio transmissions spark their curiosity. Once the ship and alien are found, they survey the area carefully, both horizontally and vertically, build maps, assess the structure of the ice, and set up a careful archaeological dig. When the paleontologist Kate arrives, they can tell her where the spacecraft and alien are, roughly how long the spacecraft has been there, and even what the fracturability of the ice around the specimen is like based on geological surveys --- and they have already collected all the necessary equipment. Kate is so impressed she exclaims that the crew of the base doesn't really need her. And maybe they don't. But they're careful scientists on the verge of a momentous discovery, and they don't want to screw it up.

Real Scientists Don't Take off Their Helmets

Speaking of screwing up momentous discoveries, here's a pro tip: don't take off your helmet on an alien world, even if you think the atmosphere is safe, if you later plan to collect biological samples and compare them with human DNA, as the crew does in Prometheus. Humans are constantly flaking off bits of skin and breathing out droplets of moisture filled with cells and fragments of cells, and taking off a helmet could irrevocably contaminate the environment. The filmmakers can't even point to the idea that you could tell human from alien DNA because ultimately chemicals are chemicals: the way you tell human from alien DNA is to collect and sequence it, and in an alien environment filled with unknown chemicals, human-deposited samples could quickly break down into something that looked alien. You might get lucky ... but you probably won't. Upon reading this article, one of my colleagues complained to me that this was an unfair criticism because it's simply a filmmaker's convention to let the audience see the faces of the actors, but regardless of whether you buy that for the purpose of making an engaging space opera with great performances by fine actors, it nevertheless portrays these scientists in a very bad light. No crew of careful scientists is going to take off their helmets, even if they think they've mysteriously found a breathable atmosphere.

The movie Avatar gets this right when, even in a dense jungle, one character notices another open a sample container with his mouth (to keep his hands free) and points out that he's contaminated the sample. The Thing also addresses the same issue: one key point of contention between paleontologist Kate and her superior Sander is that Sander wants to take a sample to confirm that their find is alien, while Kate does not because she doesn't want the sample to be contaminated. Both are right: Kate's more cautious approach preserves the sample, while Sander's more experienced approach would have protected the priority of his discovery from other labs if it really was alien, or let them all down early if the sample was just some oddly frozen Earth animal. My sympathy is with Kate, but my money is actually on Sander here: with a discovery as important as finding alien life on Earth, it's critically important to exclude as soon as possible the chance that what we've found is actually a contorted yak. More than enough of the sample remained undisturbed, and likely uncontaminated, to guard against Kate's fears.

Unfortunately, neither the crew of Prometheus nor the crew of The Thing gets the chance to be proved lucky or right.

How (Not) to Do Abnormal Science

Abnormal science is my term for what scientists do when "everything's gone to pot" and lives are on the line. This happens more often than you might think: the Fukushima Daiichi nuclear disaster and the Deepwater Horizon oil spill are two recent examples. Strictly speaking, what happens in abnormal science isn't science, that is, the controlled collection of data designed to enhance the state of human knowledge. Instead, it's crisis mitigation, a mixture of first responses, disaster management and improvisational engineering designed to blunt the unfolding harm. Even engineering isn't science; it's a procedure for tackling a problem by methodically collecting what's known to set constraints on a library of best practices that are used to develop solutions. The tools of science may get used in the improvisational engineering that happens after a disaster, but it's rarely a controlled study: instead, what gets used are the collected data, the models, the experimental methods and more importantly the precautions that scientists use to keep themselves from getting hurt.

One scientific precaution often applied in abnormal science which Prometheus and The Thing both get right is quarantine. When dealing with a destructive transmissible condition, like an infectious organism or a poisonous material, the first thing to do is to quarantine it: isolate the destructive force until it's neutralized, until the vector of spread is stopped, or until the potential targets are hardened or inoculated. After understandable moments of incredulity, both the crew of the Prometheus and The Thing implement quarantines to stop the spread of the biological agent and then decisively up the ante once its full spread is known.

The next scientific precaution applied in abnormal science is putting the health of team members first. So, for goodness' sake, if you've opened your helmet on an alien world, start feeling under the weather, and then see a tentacle poke out of your eye, don't shrug it off, put your helmet back on and venture out onto a hostile alien world as part of a rescue mission! On scientific expeditions, ill crewmembers do not go on data collection missions, nor do they go on rescue missions. That's just putting yourself and everyone else around you in danger - and the character in question in Prometheus pays with his life for it. In The Thing, in contrast, when a character gets mildly sick after an initial altercation, the team immediately prepares to medevac him to safety (this is before the need for a quarantine is known).

Another precaution observed in abnormal science is full information sharing. In both the Fukushima Daiichi and Deepwater Horizon disasters, lack of information sharing slowed down the potential response to the disaster - though in the Fukushima case it was a result of the general chaos of a country-rocking earthquake, while in the Deepwater Horizon case it was a deliberate and in some cases criminal effort at information hiding in an attempt to create positive spin. The Prometheus crew has even the Deepwater Horizon event beat. On a relatively small ship, there are no fewer than seven distinct groups, all of whom hide critical information from each other - sometimes when there's not even a good motivation to. (For the record, these groups are (1) the mission sponsor Weyland, who hides himself and the real mission from the crew, (2) the mission leader Meredith, who's working for and against Weyland, (3) the android David, who's both working with and hiding information from Weyland, Meredith, the crew and everyone else, (4) the regular scientific crew trying to do their jobs, (5) the Captain, who directs the crew via a comlink and then hides information for no clear reason, (6) the scientist Charlie, who hides information about his illness from the crew and his colleague and lover Elizabeth, and finally (7) Elizabeth, who like the crew is just trying to do her job, but ends up having to hide information about her alien "pregnancy" from them to retain freedom of action). There are good story reasons why everyone ends up being so opposed, but as an example of how to do science or manage a disaster ... well, let's say predictable shenanigans ensue.

In The Thing, in contrast, there are three groups: Kate, who has a conservative approach; Sander, who has a studious approach; and everyone else. Once the shit hits the fan, both Kate and Sander share their views with everyone in multiple all-hands meetings (though Sander does at one point try to have a closed-door meeting with Kate to sort things out). Sander pushes for a calm, methodical approach, which Kate initially resists but then participates in, helping her make key discoveries which end up detecting the alien presence relatively early. Then Kate pushes for a quarantine approach, which Sander resists but then participates in, volunteering key ideas which the alien force thinks are good enough to try to sabotage. Only at the end, when Kate suggests a test that the uninfected Sander knows full well will result in a false positive for him, do they really end up at serious loggerheads - but they're not given a chance to resolve this, as the science ends and the action movie starts at that point.

The Importance of Peer Review

I enjoyed Prometheus. I saw it twice. I'll buy it on DVD or Blu-Ray or something. I loved its focus on big questions, which it raised and explored and didn't always answer. It was pretty and gory and pretty gory. It pulled off the fair trick of adding absolute classic scenes to the horror genre, like Elizabeth's self-administered Caesarean section, and absolute classic scenes to the scifi genre, like David in the star map sequence - and perhaps even the crashing alien spacecraft inexorably rolling towards our heroes counts as both classic horror and classic science fiction at the same time.

But as Ridley Scott was quoted as saying, Prometheus was a movie, not a science lesson. The Thing is too. Like Prometheus, The Thing has a scientific backdrop whose accuracy runs the full spectrum from dead-on correct (the vastness of space) to questionable (where do the biological constructs created by the black goo in Prometheus get their added mass? how can the Thing possibly be so smart that it can simulate whole humans so well that no one can tell them apart?) to genre tropes (faster-than-light travel, alien life being compatible with human life) to downright absurd (humanoid aliens creating human life on Earth, hyperintelligent alien monsters expert at imitation screaming and physically assaulting people rather than simply making them coffee laced with Thing cells).

I'm not going to pretend either movie got it right. Neither Prometheus nor The Thing is a good source of scientific facts --- both include a great deal of cinematic fantasy.

But one of them can teach you how to do science.

-the Centaur

Pictured: a mashup of The Thing and Prometheus's movie posters, salsa'd under fair use guidelines.

Thanks to: Jim Davies, Keiko O'Leary, and Gordon Shippey for commenting on early drafts of this article. Many of the good ideas are theirs, but the remaining errors are my own.

The Future of Warfare

centaur 0

ogre-4.jpg

Every day, a new viral share sparks through the Internet, showing robots and drones and flying robot drones playing tennis while singing the theme to James Bond. At the same time we've seen shares of area-denying heat rays and anti-speech guns that disrupt talking ... and it all starts to sound a little scary. Vijay Kumar's TED talk on swarms of flying robots reminded me that I've been saying privately to friends for years that the military applications of flying robots are coming ... for the first time, we'll have a technology that can replace infantry at taking and holding ground.
The four elements of military power are infantry, who take and hold ground; cavalry, which breaks up infantry; artillery, which softens up positions from a distance; and supply, which moves the first three elements into position. In our current world those are still human infantry, human-piloted tanks, human-piloted bombers, and human-piloted aircraft carriers.
We already have automated drones for human-free (though human-controlled) artillery strikes. Soon we will have the capacity to have webs of armed flying robots acting as human-free infantry holding ground. Autonomous armored vehicles acting as human-free cavalry are farther out, because the ground is a harder problem than the air, but they can't be too far in the future. Aircraft carriers and home bases we can assume can be manned for a while.

Soon, then, into cities that have been softened up by drone strikes, we'll have large tanks like OGREs trundling in to serve as refueling stations for armies of armored flying helicopters, which will spread out to control the ground. No longer will we need to throw lives away to hold a city ... we'll be able to do it from a distance with robots. One of the reasons I love The Phantom Menace is that it shows this kind of military force in action.
Once a city is taken, drones can be used for more than surveillance ... a drone with the ability to track a person can become a flying assassin, or at least force someone to ditch any networked technology. Perhaps they'll even be able to loot items or, if they're large and able enough, even kidnap people.
It would be enormously difficult to fight such a robotic force. A robotic enemy can use a heat ray to deny people access to an area or a noise gun to flush them out. Camera detection technology can be used to spot anyone trying to deploy countermeasures. Radar flashlights can be used to find hiding humans by their heartbeats, speech jammers can be used to prevent them from coordinating, and the face detection you probably have on your phone will work against anyone venturing out in the open. I've seen a face detector in the lab combined with a targeting system and a nerf gun almost nail someone ... and now a similar system is in the wild. The system could destroy anyone who had a face.
And don't get me started on terminators and powered armor.
Now, I am a futurist, transhumanist, Ph.D. in artificial intelligence, very interested in promoting a better future ... but all too familiar with the false prophecies of the field. Critics of futurism are fond of pointing out that many glistening promises of the future have never come to pass. But we don't need a full success for these technologies to be deployed. Many of the pieces already exist, and even if they're partially deployed, partially effective, and mostly controlled by humans ... they could be awesome weapons of warfare ... or repression.
The future of warfare is coming. And it's scary. I'd say I don't think we can stop it, and on one level I don't ... but we've had some success in turning back from poison gas, are making progress on land mines, and maybe even nuclear weapons. So it is possible to step back from the brink ... but I don't want to throw the baby out with the bathwater the way we seem to have done with nuclear power (to the climate's great detriment). As my friend Jim Davies said to me, 99% of the technologies we'd need to build killbots have nothing to do with killbots, and could do great good.
In the Future of Warfare series on this blog, I'm going to monitor developing weapons trends, both military systems and civilian technologies, realistic and unrealistic, in production and under speculation. I'm going to try to apply my science fiction writer's hat to imagine possible weapons systems, my scientist's hat to explore the technologies to build them, and my skeptic's hat to help discard the ones that don't hold water. Hint: it's highly likely people will invent new ways to hurt each other ... but highly unlikely that Skynet will decide our fate in a millisecond.
A bright future awaits us in the offworld colonies ... but if we want to get there, we need to be careful about the building blocks we use.
-the Centaur
Pictured: an OGRE miniature. This blogpost is an expansion of an earlier Google+ post.

efface[john-mccarthy;universe]

centaur 0
John McCarthy, creator of Lisp and one of the founders of the field of artificial intelligence, has died. He changed the world more than Steve Jobs ... but in a far subtler way, by laying the foundation for programs like Apple's Siri through his artificial intelligence work, or more broadly by laying the foundation for much of modern computing through innovations like the IF-THEN-ELSE formalism. It's important not to overstate the impact of great men like John and Steve; artificial intelligence pioneers like Marvin Minsky would have pushed us forward without John, and companies like Xerox and Microsoft would have pushed us forward without Steve. But we're certainly better off, and farther along, with their contributions.

I have only three stories to tell about John McCarthy. The third story is that I last saw him at a conference at IBM, in a mobile scooter and not looking very well. Traveling backwards in time, the second story is that I spoke with one of his former graduate students, who saw a John McCarthy poster in my office, and told me John's illness had progressed to the point where he basically couldn't program any more and that he was feeling very sad about it.

But what I want to remember is my first encounter with John ... it's been a decade and a half, so my memory's fuzzy, but I recall it was at AAAI-97 in Providence, Rhode Island. I'd arrived at the conference in a terrible snafu and had woken up a friend at 4 in the morning because I had no place to stay. I wandered the city looking for H.P. Lovecraft landmarks and had trouble finding them, though I did see a house some think inspired Dreams in the Witch House. But near the end, at a dinner for AI folks, I want to say at Waterplace Park but I could be misremembering, I bumped into John McCarthy. He was holding court at the end of the table, and as the evening progressed I ended up following him and a few friends to a bar, where we hung out for an evening.

And there, the grand old man of artificial intelligence, still at the height of his powers, regaled the wet-behind-the-ears graduate student from Atlanta with tales of his grand speculative ideas, beyond those of any science fiction writer, to accelerate galaxies to the speed of light to save shining stars from the heat death of the universe.

We'll miss you, John.

-Anthony

Image stolen shamelessly from Zach Beane's blog. The title of this post is taken from the Lisp 1.5 Programmer's Manual, and is the original, pre-implementation Lisp M-expression notation for code to remove an item from a list.
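
(A small aside for the programmers: efface removes the first occurrence of an item from a list. Here's a rough Python sketch of that behavior, mirroring the recursive car/cdr shape of a Lisp definition; it's only an illustration, not the Lisp 1.5 manual's actual code.)

def efface(x, l):
    # Return a copy of list l with the first occurrence of x removed.
    # (A Python analogue of a recursive Lisp-style definition, not the
    # Lisp 1.5 manual's exact code.)
    if not l:                          # null[l]: nothing left to remove
        return []
    if l[0] == x:                      # equal[x; car[l]]: drop the head
        return l[1:]
    return [l[0]] + efface(x, l[1:])   # keep the head, recurse on the tail

# For example, efface("john-mccarthy", ["universe", "john-mccarthy", "lisp"])
# returns ["universe", "lisp"].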

Taking Criticism

centaur 0
At Comic-Con I catch up with a lot of old buddies, particularly one of the Edge who's soldiered through many drafts of my early stories. He's got a script he's working on, and is making a lot of progress. In contrast, we know a friend who's written a dozen scripts and is making no progress at all. Why? One of the conclusions we came to is that it's important to accept criticism of your work. Timely feedback is critical to improved performance - but you must respond to it.

I think writers should put down all their dumb ideas and then convince everyone that they're brilliant. Your quirky ideas are your contribution - I mean, who'd think a story about a naked blue guy and a homeless vigilante investigating a murder would make one of the greatest comics of all time, but hey, that's Watchmen. But you've got to sell those ideas. "Ideas are a dime a dozen, but a great implementation is priceless." So if you show someone your story with a naked blue superhero and they don't buy it - you have to fix your story.

That doesn't mean you take out the naked blue guy, even if your critics want you to. It's your story, and just because it doesn't work for someone doesn't mean they know the right way to fix it. It's up to you, the author, to figure out how to solve the problem. Readers give bad advice about how to fix stories because people are notoriously bad at introspection. If someone gets a funny bad feeling about the manuscript, they may latch on to the most salient unusual feature - not realizing it's the bad dialogue or structure which gives them indigestion.

But authors are also notoriously bad at accepting criticism because they take the criticism as a personal attack. If you get criticism on your story, you've done a great thing: you've produced a story that can be evaluated. Authors are also bad at accepting criticism because they have fragile little egos. But you can't afford to explain everything away. If people are complaining about your story, they did so for a reason. You need to figure out what that is - and it's your problem, not theirs.

So, if you get criticism on your story you don't think is fair, you get one --- ONE --- chance to explain yourself. If your critic doesn't immediately get it, then --- even if you don't agree --- say, "Yes, thank you, I'll take it under advisement." Then put it in your trip computer and remember it for later. If others see the same thing, you have a problem. If you personally start to feel even slightly the same way, you have a BIG problem.

But your biggest problem is not taking criticism at all. My friend and I have encountered a fair number of leaders whose egos are so fragile they've insulated themselves from all criticism. You can still achieve some degree of success in an echo chamber if you're willing to critique yourself and you have high artistic standards. But usually it just makes for unnecessarily flawed stories, movies and products - and an unnecessary slide towards the dustbin when your ideas stop working.

So if you're lucky enough to have someone who reads your pre-baked work and gives you feedback, listen carefully, explain at most once, and take the criticism gracefully. Your art will be the better for it in the long run.

taking criticism graciously

-the Centaur

Oh, the point … what Warren Ellis uses.

centaur 0
books, montalbano, reflected books, and gabby

Oh, there was a reason I got on the Warren Ellis kick. He posted a note on what he uses to write. Maybe I'll me-too sometime and post a note on the tools I use, already having done the why and the how, but for now I wanted to focus on the following piece of wisdom from Warren Ellis which should be familiar to anyone who's ever worked on a Ph.D. thesis:
Back-ups. Oh, my god. Burning your stuff to CD or DVD is not good enough. Trust me on that. Things go wrong. Understand that Storage Will Always Fail. Always. I have a ruggedised, manly and capacious 32GB USB memory stick that can withstand fire, water, gunshots and the hairy arseteeth of Cthulhu itself — but my daughter decided she wanted to liberate one of my bags for her use, took the stick out of it and put it ’somewhere safe.’ It has never been seen again. Storage Will Always Fail. Dropbox is your friend. 2GB of storage for free, a frankly superb little piece of software that syncs your stuff off into the cloud as easily and simply and clearly as possible. I know writers, artists and tv producers who swear by Dropbox, and so do I. I have Dropbox on both computers. If you have a smart phone of the iOS or Android type, you can also have an Dropbox instance on your phone, a fact that’s saved my arse more than once. I also auto-sync Computer 1 hourly to Jungle Disk. Very cheap, very good. My media library lives on another storage service, Zumodrive, that lives both in the cloud and on my machine as a z:/ drive. (The Zumodrive application also lives on Computer 2.) Also, I do all mail through Gmail. Which means that a copy of every document I send off lives in the Gmail cloud. And every five minutes or so, a Western Digital 1TB MyBook copies everything on Computer 1’s desktop. Paranoid? Yes. Covered? Yes.
Got that, everyone? If you write, especially if you want to do it for a living, go do something like this. And for God's sake, please, keep a copy offsite. I know too many people who have lost their homes and their art or writing to fire. -the Centaur

Pictured: Books, Montalbano, reflected books, and Gabby - a reminder to me that my library is a potential firetrap (God forbid!) and that I should be better at storing stuff offsite.

Guest Posting for Blogathon at A Novel Friend

centaur 0
My friend from the DragonWriters, Trisha Wooldridge, is participating in the Blogathon - sort of the 24 Hour Comic Day for bloggers - and I'm sponsoring one slot with a donation to Bay State Equine Rescue and a guest post on "Greed and Charity". A teaser:
At the beginnings of their careers, a lot of authors and other creative types are obsessed with making money off what they produce and are deathly afraid of people stealing it. I've seen people charging their friends for copies of short stories printed in magazines, putting their artwork on the web behind passwords or with huge watermarks, or pricing their software out of reach of the people who want to buy it. But this doesn't help them - in fact, it hurts. And I'm here to tell you to give stuff away for free.
If you want to read the whole post, please check it out at her blog, A Novel Friend - it should go up sometime this weekend. -the Centaur

15 Books

centaur 0
shoulder cat sees farther

Recently I got nailed with the following note on Facebook or Myspace or some other damn thing:
"Don't take too long to think about it. Fifteen books you've read that will always stick with you. First fifteen you can recall in no more than 15 minutes. Copy the instructions into your own note, and be sure to tag the person who tagged you."
Well, neo-Luddite that I am, I don't want to encourage this whole walled-garden social networking thing, so I'm not going to post a note there until I can effortlessly crosspost with my blog and everywhere else. But I can come up with 15 books:
  • Gödel, Escher, Bach: An Eternal Golden Braid by Douglas Hofstadter
    Convinced me to get into Artificial Intelligence. I've probably read it half a dozen times. Has a fantastic layered structure that Hofstadter uses to great effect.
  • The Society of Mind by Marvin Minsky
    Opened my mind to new ways of thinking about thinking and AI. Also read it several times. Has a fantastic one-chapter-per-page format that really works well to communicate complicated ideas very simply.
  • The Feynman Lectures on Physics by Feynman, Leighton and Sands
    Taught me more about physics than the half-dozen classes I took at Georgia Tech. I've read it now about four times, once on paper (trying to work out as many derivations as I could as I went) and three times on audiobook.
  • Programming Pearls by Jon Bentley
    Opened my mind to new ways about both thinking and programming. The chapter on estimation blew my mind.
  • Atlas Shrugged by Ayn Rand
    A true epic, though it's probably better to start with the Virtue of Selfishness if you want to understand her philosophy. Every time I think some of Atlas Shrugged's characters are ridiculous parodies, I meet someone like them in real life.
  • Decision at Doona by Anne McCaffrey
    I must have read this a dozen times as a child. I still remember two characters: a child who was so enamored of the catlike aliens he started wearing a tail, and a hard-nosed military type who refused to eat local food so he could not develop cravings for the foods of (or attachments to the cultures of) the worlds he visited.
  • The Belgariad by David Eddings
    A great fantasy epic, with all of the scale but none of the bad writing and pointless digressions of The Lord of the Rings. I've heard someone dismiss Eddings as "third carbon Tolkien" but, you know what? Get over yourselves. Tolkien wasn't the first person to write in the genre, and he won't be the last.
  • The Hobbit by J.R.R. Tolkien
    All of the adventure of the Lord of the Rings, but none of its flaws. The long journey through the great dark forest and the Battle of Five Armies still stick in my mind. I like this the best out of what Tolkien I've read (which includes The Hobbit, the Lord of the Rings, and the Silmarillion, and some other darn thing I can't remember).
  • The Dragon Circle by Stephen Krensky
    Loved it as a child. Still have a stuffed dragon named "Shortflight" after this book.
  • Elfquest by Wendy and Richard Pini
    Another true epic, this time a graphic novel. Resonates with me in a way that few other fantasy epics do. I have the first 20-issue series in a massive hardbound volume which is now apparently worth a shitload of money. Out of my cold dead fingers, pry it will you.
  • Paradigms of Artificial Intelligence Programming: Case Studies in Common Lisp by Peter Norvig
    Yes, your programming can kick ass. Let Peter show you how.
  • Reason in Human Affairs by Herbert Simon
    Helped me understand the powers and the limits of human reason, and why we need emotion to survive in this complicated world.
  • The Seven Habits of Highly Effective People by Stephen Covey
    More than anything, I appreciate this book for a few key vignettes that made me realize how important it was to understand other people and where they are coming from, and not to impose my own preconceptions upon them.
  • The Art of Fiction: A Guide for Writers and Readers by Ayn Rand
    Straight talk about fiction from one of its most effective writers. You don't have to agree with Ayn Rand's personal philosophy or even like her fiction books to learn from this book; half her examples are drawn from authors she personally doesn't agree with.
  • In the Arena by Richard Nixon
    Straight talk about surviving in politics from one of its most flawed yet effective masters. A glimpse into the workings of a brilliant mind, broken down into different sections on different aspects of life. Don't bother reading this if you feel you owe it to your political leanings to say something nasty about Richard Nixon in every sentence that mentions him, simply because Nixon did some bad things. (Note: I think Nixon's alleged crimes are the worst of any President, because they attacked his political opponents and undermined our democracy. However, his political philosophy, once divorced from his personal paranoia, is something important that people need to understand.)
What did I forget? The Bible, The Chronicles of Narnia by C.S. Lewis, Das Energi by Paul Williams, The Celestine Prophecy by James Redfield, Jonathan Livingston Seagull by Richard Bach, One Two Three... Infinity by George Gamow, The Screwtape Letters by C.S. Lewis, Unfinished Synthesis by Niles Eldredge, Neutron Star by Larry Niven, The Gods Themselves by Isaac Asimov, the collected works of Martin Gardner, Usagi Yojimbo by Stan Sakai, Albedo Anthropomorphics by Steve Gallacci, and of course, Van Nostrand's Scientific Encyclopedia, the Volume Library, and before that, back in the dawn of time, the World Book Encyclopedia. Read into that list what you will.

Blogosphere, consider yourselves tagged - your turn.

-the Centaur

Renewing the Library

centaur 1

Recently I started to notice that the design of the Library is getting long in the tooth.  One friend who was a web designer commented that it looked very "old Internet".  I've watched another friend innovate on his blog design while mine was staying still.  Work on my wife's web site made me revisit some of my choices, adding a description and picture but making few other changes.  I know the site needs a redesign because I have a lot more material coming out soon, but the final trigger was when I couldn't attend a talk and looked up one of the authors to learn more about their work - I think it was Oren Etzioni - and I was struck by his straightforward site design which enabled me to quickly find out what he was working on.

SO, I'm redesigning the Library.

I'm an artist in addition to an author and researcher, so simply gutting the site and making it simpler wasn't my goal: I have specific ideas about what I want the site to look like, and I started designing a new one.  Partway through that redesign, I noticed that I was doing a fair amount of research work - examining other blogs that I admired, investigating blog widgets, investigating CSS and HTML advances, researching color theory and design principles - but not blogging any of it.  In fact, come to think of it, typically when people redesign their sites they put all their work under a bushel, trying to hide their planned change until the last possible moment, possibly exposing it to a few trusted users in beta or with an alternate link prior to springing it on the world as if freshly formed and fully new.

Well, phooey on that.  The thought process that a web designer goes through producing a web site is interesting (well, to other web site designers, anyway) and provides a valuable resource to other designers doing their work.  I wished that other people had blogged the process that they went through and the alternatives they explored, as it would help me make my own choices - but you know what?  I don't control other people.  I only control me.  And if someone else hasn't filled the gap, then it's my own responsibility to come up with something to meet my needs. 

SO, I'm going to blog the redesign of my blog.  How "meta".

There's far too much to put into a single blog entry, so I'll start off going over the thought process that led to the design in more detail, then explain my strategy.  The first thing that I did was look at other web sites that I admire.  Earlier when working on my wife's web site I found a number of beautiful looking blogs, but when I started the redesign, I started my search over, focusing on sites of artificial intelligence researchers, bloggers, writers, and artists, trying to find ones I instinctively admired with interesting ideas, features or appearances that I could steal.  Some of these included:

  • Oren Etzioni's Home Page: Quickly Present What You Are Doing
    An "old school" (not that there's anything wrong with that) web site from an academic researcher, it has an "old style nav bar" up top that quickly tells you how to find his publications.  Below that is text which points you to his research projects and most cited publications.   From this I gleaned:
    • Organize your work into logical areas
    • Make navigation between areas easy
    • Put things people want up front
  • Rough Type by Nicholas Carr: Put Your Content Front and Center
    Featuring a straightforward design that gets you straight to his content, Rough Type also has an author blurb and a pointer to his most famous article, "Is Google Making Us Stupid?", and his book "The Big Switch". The key points I gleaned from the site:
    • Get your content out front and center
    • Tell people who you are
    • Point them to your best work
  • Vast and Infinite by Gordon Shippey: Show the Author, Try Fun Features
    Written by an old buddy from Georgia Tech, Vast and Infinite isn't that different from Rough Type.  However, he's constantly innovating, adding a site bio and author picture, tweaking his banner, adding shared items and flickr gadgets and more, whereas my blog tends to stand still.  The lessons from this:
    • Show people your picture
    • Keep your content front and center (sound familiar?)
    • Trying out new technologies generates interest in the site
  • Home Page of Jim Davies: Show the Author, Organize Your Site Logically
    Jim Davies is another academic researcher, with a much more modern site.  Like Oren Etzioni, he has a navbar, but also a large picture, a more detailed description, and links to his art, store and blog.  Unlike Oren's site, each area of Jim's seems a little more organized, without the duplicated links to publications or the odd inclusion of news articles on his personal page.  Jim takes this further by having extra blogs just for rants and links.  My takehomes were:
    • An academic site can have a modern design
    • Showing people your picture creates interest
    • Don't be afraid to segregate content into areas
  • Marvin Minsky and John McCarthy: Tell People About Your Work, and Share It
    Two of the greats in artificial intelligence have interesting sites filled with lots of content.  Both start with a description of them and their work and then continue with many, many links to their most prominent work.  Minsky puts up chapters of his most recent book; McCarthy includes a lot of narrative that gives context.  What I like:
    • Tell people what your site is about using narrative
    • Put work you are interested in front and center
    • Fill your site with lots of content
  • Greg Egan's Home Page: Fill Your Site With Lots of Content, and Share Your Research
    Greg Egan is an author I admire primarily for his novel Permutation City and his short story Dark Integers, though I have more of his books in the queue.  His site's layout is a little harder to read than some of the others, but it is filled with pointers to all of his work, to the research that he did to create the work, and applets and essays related to his work.  The takehome from this firehose is:
    • Fill your site with lots of content
    • Share the research you did on how you produced your work
    • Don't be afraid to promote your work by showing it to people

There was one more site that kicked this all off, which I will hold in my pocket for a minute while I talk about opinions.

Unlike Jakob Nielsen, I don't have research backing up these conclusions: they're really just guesses about what makes these sites work, or, worse, just my opinions about what it is that I like about these sites.  What's dangerous about opinions is that recent scientific work seems to indicate that they're often post-hoc explanations of our instinctive reactions, and they're often wrong.  So, to combat this tendency, I looked at other resources that specialize in information about good design of web sites to try to get information about what I "should" do.  I don't pretend I've absorbed all the information in these sites, but am simply including them to show you the kinds of things that I looked at:

  • Jakob Nielsen's UseIt.com: Make your site fast, simple and standards-based
    Jakob Nielsen's site on web site usability is so simple it hurts my eyes.  I don't like to actually look at it, but I do like the ideas.  He's got a breakdown of recent news on the right and fixed web site content on the left; the idea of the breakdown is good but seems opposed to my goal to work with Western left-to-right reading.  Jakob points out that he uses no graphics because he's not a graphic designer, and that's fair; but since his site is unpleasant for me to read I only loosely follow his recommendations.  But one cool thing about his site: if I resize the browser his content stays divided more or less the way he's put it because the structure is so simple and well designed.
  • But What Are Standards?  W3C and Webmonkey
    The W3C is the official source of standards for the web like HTML and CSS, but I've always found their standards hard to read (and I've read many, many of them over the years).  The new site redesign they're testing seems to make it easier to navigate to find things like the CSS Standard, but it is still hard to read and lacking the practical, let's get started advice that I want.  Back in the early days of the web, I used Webmonkey as a source of good tutorials, but the site seems crufty and broken - trying to narrow in on the CSS tutorials got me nothing.  I have a number of offline books, however, and am a whiz at reverse-engineering web pages, so when I get to the CSS articles I will detail what I learn and what sources I use.
  • CSS in Practice: FaceFirst.us and CSS Zen Garden
    I know the designer of FaceFirst.us, a social networking site, and in exchange for me beta testing his site he turned around and gave me a tutorial on how he uses CSS in his process to ease his site design.  In short, like Nielsen, he recommends separating the "bones" of the site from the content using CSS id's and classes (I've sketched that idea in code just after this list).  One example he showed me was the CSS Zen Garden, which has fixed content that is modified radically just by stylesheets.
  • But What Did Your Thesis Advisors Do? Ashwin Ram and Janet Kolodner
    I also dug into what Ashwin Ram, my thesis advisor, and Janet Kolodner, a member of my thesis committee and my original advisor, did with their web pages.  Both Ashwin and Janet have profile pages back at the College of Computing, but they also have richer pages elsewhere with more detailed content.  I have no intention of slavishly copying what my thesis advisors are doing, but as far as the research part of my web site is concerned they're similar people solving similar problems whose solutions are worth looking at and adapting for my own use - why, yes, my Ph.D. was in the case-based reasoning tradition, why do you ask?  On that note, it occurs to me to look at other colleagues' web sites, like Michael Cox's site.
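
As promised above, here's a rough sketch of that "bones versus skin" idea.  The markup and the id and class names below are mine, invented purely for illustration - they're not taken from FaceFirst.us or the Zen Garden - but they show the principle: the HTML carries only structure and styling hooks, and every visual decision lives in a stylesheet you can swap out without touching the content.

<!-- the "bones": structural markup only, with ids and classes as styling hooks -->
<div id="banner"><h1>The Library of Dresan</h1></div>
<div id="content">
  <div class="post">
    <h2 class="post-title">Renewing the Library</h2>
    <p>Recently I started to notice that the design of the Library...</p>
  </div>
</div>
<div id="sidebar"> ... </div>

/* the "skin": a separate stylesheet carries all the presentation, so a radical
   redesign means editing this file, not the content */
#banner     { height: 200px; background: url(banner.png) no-repeat; }
#content    { width: 600px; float: left; }
#sidebar    { width: 300px; float: right; }
.post-title { font-family: Georgia, serif; color: #663322; }

Swap in a different stylesheet and the identical markup renders as a completely different site - which is exactly the trick the Zen Garden uses to show off hundreds of designs on one fixed page of HTML.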

Standards, shmandards, cool sites and web lights - all well and good.  My brain exploded, however, when I saw Warren Ellis's web site (billed as a blog for mature adults, so it's occasionally NSFW - be warned).  In my mind, Warren's site had a number of great features:

  • Show the Author's Name:
    The author's name is hugely printed across the top - so you immediately know who this is, as opposed to, say, my dumb blog where my name is printed in 2 point type.  And Warren's domain name is also his own name plus dot com, so that he can actually show his name and site name in the same logo.
  • Keep the Text to the Left:
    The text of all the articles is corralled to the left margin so they can be PRINTED, aligned to the top of the page so it dives into the header and is immediately visible.  Almost as if Warren's site was designed knowing that the majority of the people who read the English language read it from left to right, therefore the text should appear where their eyes go.  This pattern, plus the pattern of the rest of his design, is consistent with putting the good stuff in the F-shaped heat map that typical users' eyes trace when scanning your page.
  • Use the Middle of the Page:
    There is a bar of links in the MIDDLE of his page, immediately to the right of the articles, which puts it close to the golden ratio of the horizontal space of his site design (as viewed on my monitor).  This "linkbar", held in place by CSS wizardry and a black magic compact with the Old Ones, contains permanent site features that most need to be linked - message board, mailing list, comics, his novel, his agents, and his bio inline.  Think of it as a sexier version of Jakob Nielsen's "Permanent Content" box.
  • Put Sparkly Things to the Far Right:
    Beyond the linkbar are all the cool fun site features like a search bar, podcasts, images and other nonsense, which are fun to look at but less important.  On my site, some of these are on the right, or even at the very bottom of the page; on other people's sites they appear on the left, distracting Western readers from the article and possibly shoving the right ends of the articles over the printable width of the page.  Ellis' contract with Cthulhu and the hellish powers of the W3C enable him to safely corral these fun elements to the right where they belong.

The linkbar was the most mindblowing thing.  It eats into the banner.  It's readily visible.  It leaves the text on the left, but it's close enough to be visible on most monitors.  The whole site is 997 pixels wide, so it will fit on a typical 2009 web screen, but if your screen is smaller, first you lose the fun sidebar, then the important linkbar, and only then do you lose the text.  Even better, since the linkbar CSSes its way into the banner, the size of the site is controlled by the header image so it won't get wider.  So your Nielsen-style variable content is always visible on the left, and your important fixed content is always on the right, and God willing it will never get hosed by someone resizing their window.  Once I saw that, I decided I'd done enough work researching, and it was time to start redesigning.
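
Here's my guess at how a linkbar like that could be wired up.  This is a minimal sketch under my own assumptions - the selector names and the pixel values are invented, not lifted from Warren Ellis's actual stylesheet - but it captures the behavior described above: a page whose width is fixed by the banner, article text on the left, a linkbar that rides up into the banner immediately to the right of the text, and the optional sparkly stuff pushed furthest right.

/* the banner fixes the overall width at 997px, so the page never grows wider */
#page     { position: relative; width: 997px; }
#banner   { height: 180px; background: url(banner.jpg) no-repeat; }

/* article text stays on the left, where Western left-to-right readers look first */
#articles { float: left; width: 500px; }

/* the linkbar is absolutely positioned against #page, so it overlaps the bottom
   of the banner and sits immediately to the right of the articles */
#linkbar  { position: absolute; top: 60px; left: 520px; width: 220px; }

/* search, podcasts, images and other fun-but-optional widgets go furthest right */
#extras   { float: right; width: 240px; }

Because the width is set by the banner rather than by the window, a too-narrow browser clips from the right: first the extras disappear, then the linkbar, and only then the article text - the same graceful degradation I saw on Ellis's site.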

SO my first step is to unashamedly steal Warren Ellis's linkbar.

Immediately I sent my secret agents out to download his HTML and CSS and transport it to my secret lab so I can take it apart piece by piece until it has no secrets left.  Of course, some of Warren Ellis' choices won't work for me, so I will have to do a lot to adapt the ideas he and his team used in his site design.  And simply imitating the form of Warren's site won't be successful, any more than making a movie just like Star Wars called Sky Battles would be immediately successful.  (Battlestar Galactica fans, take note: while I loved the show, I think it's fair to say that it took the reinvention of the show to really produce a success, which was based on making the show interesting in its own right and not copying Star Wars.)

The outer form of his site is the product of his inner success - he is a popular, prolific author with a message board, mailing list and weekly online comic he uses to promote his other writing and books, which makes the prominent placement of the message board, agents and books in the linkbar highly important.  Starting a message board and getting an agent won't help me.  I, in contrast, am a jack of all trades - developer, researcher, writer, artist - using this blog as a tool to force me to stop being a perfectionist, complete my work, and put it out in front of people.  So my goal is to make sure this website displays my content, prominently surfaces the areas of interest I work in, and has a few flashy features to attract attention to individual items of more permanent interest.

In upcoming articles I will detail my original constraints for the blog version of the Library of Dresan and why those constraints failed as the site evolved over time; my goals for the new site design; what I think I understand about how wide to make your web pages and where to put your content (and where I got those crazy ideas); my move to the use of CSS and my attempts to make the site work well on screens, printers and phones; my attempts to better exploit Blogger, Flickr and other web gadgets; and the work that I'm doing investigating color theory and generating the new art assets that will make up the site.

Hopefully you'll enjoy the process, and, when it's done, enjoy the site even more.

-the Centaur

People who can think

centaur 2
I was going to start this article by tossing up a shout out to taidoblog, andy fossett's in-depth analysis of taido, but it then occurred to me that taidoblog is only the most recent of a whole category of blogs and articles that I've only recently started to notice, and even more recently started to truly admire: people who can actually think.

The object of inquiry of andy fossett's taidoblog is taido, his (and my) chosen martial art. This alone would capture my interest, but what's always struck me is not just andy's subject, but his method. He puts deep thought into his chosen interest: he maps out the landscape of practice, critically evaluates existing opinions, formulates radical new ideas, and puts them all to the test. He's not afraid to boldly throw out bad traditions OR to slavishly follow traditions that work, at least until he has learned all he can and/or developed something better.

Big Jimmy Style is the platform of Jim Davies, a similar investigator whose chosen interest is research and science. He and I don't see eye to eye in areas like healthy eating, environmentalism and voting, but I don't personally know anyone who puts deeper thought into artificial intelligence and cognitive science research - what it is, why it's important, how it should be done, and what its goals are. Jim regularly holds my feet to the fire in our private correspondence, and in his blog he continues the tradition of calling bullshit when he sees it and constructing frameworks that help him tackle hard problems.

The strength of Gordon Shippey's Vast and Infinite comes from his clear personal philosophy, strong scientific training and strength of character. While at this instant his blog is suffering from Movable Type's "I'm busy this month" whitescreen, Vast and Infinite is the sounding board for G'hrdun's ongoing exploration of what works in the work place, a topic of deep personal interest that he explores from a clear objectivist ethical perspective informed by his psychological knowledge, scientific training and personal experience. If you watch long enough you'll also see scientific/libertarian analysis of modern political and scientific developments.

Scott Cole's The Visual Writer has always been overwhelming to me: there are more ideas bouncing around on his site than I've ever been able to mine. For a long time I read his articles on the theory of writing stories, but his philosophical articles are just as interesting. While there are some areas where he and I might disagree on particular points, on the majority of writing topics he's explored more issues than I was even aware existed.

And then of course, there's Richard Feynman's blog, The Smartest Man In the World. Actually, it's not, and he disliked that title, but we can only wish Feynman hadn't died before blogs came into being. In lieu of that, I can recommend The Pleasure of Finding Things Out, which, despite some people's complaints that it rehashes his other books, does a good job of putting in one place Feynman's essential thoughts about the scientific method, the importance of integrity, and the difficulty of not fooling yourself.

The point of me mentioning all these people is that they're good examples of people who are thinking. They aren't just interested in things; they're actually cataloguing what they see, organizing it, judging it, evaluating it, deciding what they want to do with it and formulating opinions on it. In andy's writings in particular he goes further: he's not willing to settle just for opinions, but must go test them out to find out whether he's full of shit or not. And at the highest level, Feynman integrates challenging his own ideas and reporting the results of his challenges into the very core of his being - because he who sees the deepest is the man who stops to clean his lens.

That's what I want to be when I grow up.

So go check 'em out.
Because everything is interesting if you dig deeply enough.
-the Centaur
