... I researched the Blogger issue. Three, maybe four of the blogs I manage will work well with the new system.
This one, the one I post on the most, won't. And there's no good workaround yet, though I am looking into it.
*sigh*.
I'm pretty sure I *can* do this - keep the Library of Dresan site as completely static HTML pages so that there's no software on it to hack - but the existing FTP blogging clients seem pretty niche. And using WordPress or MovableType in this mode will, as I understand it, require that I set up WordPress on my laptop or desktop and write some software to rewrite the files and FTP them up to the site. You know, the feature Blogger handled automatically for me.
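For the curious, the glue Blogger used to provide isn't much code; here's a rough sketch of the sort of upload script I'd have to write myself, using, say, Python's stock ftplib. The folder, host and credentials are placeholders, not my actual setup:

```python
# Sketch only: walk a folder of locally rendered pages and push them to a
# static host over FTP. Folder, host and credentials are placeholders.
import ftplib
import os

LOCAL_ROOT = "rendered"      # where the desktop blogging tool writes HTML
REMOTE_ROOT = "/htdocs"      # document root at the hosting provider

def upload_site(host, user, password):
    with ftplib.FTP(host, user, password) as ftp:
        for dirpath, _dirnames, filenames in os.walk(LOCAL_ROOT):
            rel = os.path.relpath(dirpath, LOCAL_ROOT)
            remote_dir = REMOTE_ROOT if rel == "." else f"{REMOTE_ROOT}/{rel}"
            try:
                ftp.mkd(remote_dir)
            except ftplib.error_perm:
                pass             # directory probably exists already
            for name in filenames:
                with open(os.path.join(dirpath, name), "rb") as f:
                    ftp.storbinary(f"STOR {remote_dir}/{name}", f)

if __name__ == "__main__":
    upload_site("ftp.example.com", "username", "password")
```

Not hard, but it's one more thing to write and babysit every time I post.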
*sigh*
-the Centaur
Posts tagged as “Development”
Blogger is discontinuing FTP:
In evaluating the investment needed to continue supporting FTP, we have decided that we could not justify diverting further engineering resources away from building new features for all users. For that reason, we are announcing today that we will no longer support FTP publishing in Blogger after March 26, 2010. We realize that this will not necessarily be welcome news for some users, and we are committed to making the transition as seamless as possible.

Looks like it's time to find a new blogging provider.
-the Centaur
If the world didn't have enough evil already, dedicated computer engineers have figured out how to put it on tap. Behold the terror that is the Lovecraftian Name Generator! Go on, click on it, see what I'm talking about.
Back? Ok, I admit, "Lolho" and "Ual'ke" aren't the scariest Lovecraftian names. But it's programmatic. You can create more than one. The current limit is 25, but by the unholy names of Anai, Bbhaaat, Bosaush, Cazagorarl, Ch-yos, H'eligthorteg, Han-dha, Ibhagugu, K'zaru, Kephoital, Mazazho, Mephangos, Mmililog, Nacharsar, Nali-yatl, Naquggo, Niquggolo, Phomasothugn, Ralellosaq, Rhub-harny, Rlakibha, Uga-urshu, Uggugakithu, Ygg-cyo and Yishotha, not even in Lovecraft's coldest visions of an indifferent universe could he have imagined you'd be able to create an entire pantheon with the click of a button!
Even worse, that limit is no doubt arbitrary, designed to protect their computing infrastructure if not the fabric of space-time. A truly evil black-hatter could use a sequence of queries to generate matched sets of Cthulukin at the upper limit of the QPS (queries-per-second) their servers could handle! Hopefully they have some kind of DoS (Denial of Shoggoths) throttling on their servers to protect humanity. If not-
the mind reels.
-the Centaur
... otherwise known as an ID10t error in the operator component.
I was trying to install a new device, and it wasn't working; the reason was that I had never applied the firmware upgrade the instructions clearly said was required. Anthony's nth (7th?) law:

If you don't follow all of the instructions, you won't finish in the goal state. (*)

(*) Except through dumb luck, or just possibly deep knowledge. Did I have deep knowledge in this case? No. So if you're doing voodoo, try, perhaps, following the complete recipe before you complain your zombie isn't coming back to life as advertised.
-the Centaur
PS: For those not in the know, EOM stands for "End of Message" - generally used for a subject-only email without a body, like sending someone a URL:

Here's the presentation: http://short/url EOM

Technically I guess that means the EOM in the header is not an EOM, and also by corollary the PS is not a PostScript since it introduces the body of the message.
Well, then, in that case: Hello, Internets!
Some time back I received a spam email that was blank. This is understandable, actually; probably just someone trying out a list of email addresses. I also got one containing the cryptic text "podmena traffica test"; this turned out also to be a "spoofing traffic test". Now I've got a bit of comment spam, which also seemed mysterious, until I dug into it a bit. From my email:
Anonymous has left a new comment on your post "Why I Write":
I can not participate now in discussion - it is very occupied. I will be released - I will necessarily express the opinion. [url=DELETED]acheter levitra[/url] This rather good idea is necessary just by the way
Publish this comment.
Reject this comment.
Moderate comments for this blog.
The deleted URL is to a French eBay site, "acheter levitra" is French for "buy Levitra," which is a brand name of Vardenafil, which is, of course, a Viagra clone. So this is essentially random pseudo-English text with a "buy Viagra" link, depending on the 1% of people who click on such links and the 1% of people who buy to pay for the cost of putting this spam on my blog. Charming.
Comment reeejected.
-the Centaur
UPDATE: I got a similar post with a less obvious spam form, targeting one of the more popular pages on my blog (can you say pooound cake?):
"I found this site using [url=http://google.com]google.com[/url] And i want to thank you for your work. You have done really very good site. Great work, great site! Thank you! Sorry for offtopic"
But the [url=XXX]TEXT[/url] pattern was a dead giveaway. A search on Google for ["[url=http://google.com]google.com[/url]"] - note that's the '[url.../url]' thing in double quotes; the outermost brackets are the syntax you use to indicate a chunk of text is a query, like [centaur] - so anyway, a search on Google for that nonsense revealed that the exact text of that comment has appeared elsewhere. So this is just more comment spam, trying to see if comments are unmoderated here.
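If I ever get tired of eyeballing these, the pattern is trivial to match automatically; here's a minimal sketch of my own heuristic (this is nothing Blogger offers, just a regular expression):

```python
# Sketch: flag comments containing BBCode-style [url=...]...[/url] links,
# the pattern that gave away both spam comments above. My heuristic only.
import re

BBCODE_URL = re.compile(r"\[url=[^\]]+\].*?\[/url\]", re.IGNORECASE | re.DOTALL)

def looks_like_link_spam(comment):
    return bool(BBCODE_URL.search(comment))

print(looks_like_link_spam(
    "I found this site using [url=http://google.com]google.com[/url]"))  # True
print(looks_like_link_spam("Great post, great site! Thank you!"))        # False
```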
Comment flattering! But reeejected.
So we have cats. Three, currently - Caesar, a rescue cat; Lenora, a shelter cat; and Gabby, a stray cat - out of a lifetime population of five, including Nero, the brother of the rescue cat, who disappeared (probably eaten by coyotes), and Graycat, another stray cat, pictured, who we unfortunately had to have executed by the state (because only I could handle him, using gloves, and we were afraid he was going to come knife us in our sleep).
So the three remaining cats are somewhat finicky. There are foods they will love, foods they will grudgingly eat, food they will eat but puke up, and food they will (quite literally) try to bury as if it is crap. So I've been meaning for a long time to keep up a diary of the food choices and their reactions to find out what we can feed them.
Data mining researchers claim that getting high-quality input data is the hardest part of a machine learning problem, so I started off with some exploratory data collection in Excel. After letting (thoroughly washed!) cans pile up for a week in two bins, I entered these into a spreadsheet and started to figure out how the data should be represented. I ended up with these columns:
- Brand: Fancy Feast, Nutro, etc.
- Type: Regular, Max Cat Gourmet Classics, etc.
- Flavor: Savory Salmon Feast, White Meat Chicken Florentine with Garden Greens, etc.
- Consistency: Flaked, Pate, Grilled, etc.
- Target: Adult or Kitten
- Package: Can, Tray or Packet
- Ratings: +1 or -1
After collecting this data, I started to analyze it. First I sorted the data. Then I eliminated duplicates and added Servings, AggregateRating and Average columns, summing the Ratings into the Aggregate so that if something got two +1 ratings and one -1 rating it would get 3 Servings and an AggregateRating of 2. This I used to compute an Average, which I used to re-sort the table to see which brands worked best.
The problem is, this Average wasn't that meaningful. One vote for a flavor isn't as meaningful as three, because the cats aren't consistent. This is the flip side of the Law of Large Numbers: you need many ratings to generate a meaningful result in the presence of noise.
I decided to set the number of ratings I cared about at 3, based on anecdotal comments by Roger Schank, my thesis advisor's thesis advisor - who reportedly said you need to visit a restaurant three times to give it a fair rating, because a restaurant could have one off day or great day and you needed at least 3 ratings to get an idea of their consistency.
At first I decided to track this using a smoothed average, AggregateRating / (Servings + 3), but this depressed the all-positive and all-negative scores more than I liked - that kind of smoothing function only works well if you have very large ranges of values. So I chose a simpler max-based approach, AggregateRating / max(Servings, 3), so that one serving would get a 33% positive or negative rating but three or more consistent servings could max it out to 100%.
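For the record, here are the two scoring rules side by side as a quick Python sketch, applied to made-up serving counts; the spreadsheet does exactly the same arithmetic:

```python
# Sketch of the two scoring rules, applied to made-up serving counts.
def smoothed_average(aggregate_rating, servings, k=3):
    # Additive smoothing: damps small samples, but also drags down
    # flavors that were consistently all-positive or all-negative.
    return aggregate_rating / (servings + k)

def max_based_average(aggregate_rating, servings, floor=3):
    # One +1 serving scores 1/3; three consistent +1s score 3/3 = 100%.
    return aggregate_rating / max(servings, floor)

for aggregate, servings in [(1, 1), (-1, 1), (3, 3), (5, 6)]:
    print(f"{servings} servings, aggregate {aggregate:+d}: "
          f"smoothed {smoothed_average(aggregate, servings):+.2f}, "
          f"max-based {max_based_average(aggregate, servings):+.2f}")
```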
That enabled me to make some findings, but then I realized I'm an idiot. I'd picked up the smoothed average idea from Empirical Methods for Artificial Intelligence, a book any serious computer scientist should read. And I'd edited my data in the spreadsheet so I could compute that average. But what I should have been thinking about was The Pragmatic Programmer, specifically the tips Keep Knowledge In Plain Text and Use Source Control.
Why Keep Knowledge In Plain Text? The cats aren't just finicky; their tastes change, especially if you overfeed them one thing. So the date at which a cat turns on a food is important. By entering the data into Excel, I first had to have a computer on hand, which encouraged me to let the cans pile up; so I lost both the date information and some of the rating information - a coarse-grained +1/-1 rather than "Ate Instantly"/"Ate Completely"/"Left Unfinished"/"Refused or Puked Up"/"Tried to Bury". A superior strategy would have been a pen-and-paper notebook where I recorded the cans a few hours after they were eaten. This could be entered into a text file a few days later, and if it were tab or comma separated Excel could easily import it. Then, with that data, I could even have applied other techniques from Empirical Methods for Artificial Intelligence, like using a sliding time-series window to ensure I'm analyzing the cats' current tastes.
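Something like this, say - a sketch of the tab-separated diary I should have kept and the sliding-window average I could have run over it. The file name, dates and numeric scores here are invented for illustration:

```python
# Sketch: the tab-separated diary I should have kept, plus a sliding
# time-series window over it. The file name and numeric scores are made up.
# catfood.tsv columns: date <tab> brand <tab> flavor <tab> rating
import csv
from datetime import date, timedelta

RATING_SCORE = {"Ate Instantly": 2, "Ate Completely": 1, "Left Unfinished": 0,
                "Refused or Puked Up": -1, "Tried to Bury": -2}

def recent_flavor_scores(path, days=30):
    cutoff = date.today() - timedelta(days=days)
    scores = {}
    with open(path, newline="") as f:
        reader = csv.DictReader(f, delimiter="\t",
                                fieldnames=["date", "brand", "flavor", "rating"])
        for row in reader:
            if date.fromisoformat(row["date"]) >= cutoff:
                scores.setdefault(row["flavor"], []).append(
                    RATING_SCORE[row["rating"]])
    return {flavor: sum(s) / len(s) for flavor, s in scores.items()}

print(recent_flavor_scores("catfood.tsv"))
```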
And why Use Source Control? Because I edited my Excel file, dummy, not even versioned with v1 v2 v3 like I do with documents. So I actually entered this data in two phases and some of the temporal information I could have recovered has been lost.
So I'm going to improve my procedures going forward. Nevertheless, I did get some nice preliminary data, which jibes well with the observations Sandi and I had made informally. I'm going to hold judgment until I have more data, but so far Fancy Feast is the best brand, and Cod, Sole and Shrimp Feast and Savory Salmon Feast are the winningest flavors. Newman's Own Organics and Halo Spot's Stew were the worst brands - the cats refused to even touch them - which is odd, because Newman's Own makes great human food (try Newman O's) and Halo makes great dry food the cats love.
More results as the votes continue to trickle in...
-the Centaur
Recently, work was getting more hectic and family matters required more travel, and I was getting frustrated going out to coffeehouses and delis just to get wireless Internet. So I signed up for an AT&T wireless data plan, complete with a little dongle by Sierra Wireless that actually does the job of connecting to the Internet.
I refused to get the two year contract just so I could get the dongle for free, because I was burned this time last year getting a two year contract just so I could get a cheap smartphone ... right before the Search Engine That Starts With A G bought all of its employees Android smartphones.
So I opted instead to pay month to month, and as a consequence I had to pony up two hundred dollars for that little dongle. Because of the monthly fee, and how the math worked out, I stupidly did not spend the extra $5 a month insuring the damn thing.
I say stupidly, because I left it in my pocket and put my pants in the wash.
My heart fell when I saw the cap of the dongle tumble out as I was emptying the clotheswasher. Sure enough, I found the dongle in the pocket of a pair of pants. Sadly, I took it to my Mac and plugged it in. The lights flickered for a moment, but did not come on. Just to be sure, since the Mac's two USB ports are not equivalent, I switched it to the other side.
The lights flickered ... and then the power light turned blue, while the connection light turned red. Hoping against hope, I hit "Connect" on the Sierra Wireless Watcher control panel. The connection light began flashing ... and a minute later, it connected.
Since I knew that liquid in electronic devices can sometimes cause problems down the road, I disconnected it, unplugged it, and put it in front of a space heater to dry out more thoroughly, then a fan to cool it off. One day later, I'm writing this blog entry using this same dongle, and it's doing fine.
Go Sierra, and go AT&T for picking a quality parts supplier.
-the Centaur
...it happens.
Fortunately, the problem had a quick fix.
The problem, for those of you who browse the site in standards-compliant browsers, was that the last column of the Library's three-column layout was not showing up in Internet Explorer - and only Internet Explorer, one of the world's most popular and, unfortunately, least standards-compliant browsers.
The solution: make the layout wider, so the max image width used in the blog does not cause the first column to widen.
In the top half of the picture, you see Firefox 3.0.14 on the Macintosh running the (corrected) version of the Library of Dresan home page. In the bottom half, you see Internet Explorer 7 on Windows Vista, running in a VMWare partition on my Mac, from approximately the same position on the same page. From a graphical and typographical perspective they're both doing a fairly creditable job of rendering the layout. Everything looks roughly OK.
However, they're not doing the same job interpreting the width of the layout. I haven't debugged the precise problem in detail - this is a voodoo quick fix - but essentially Internet Explorer interprets the widths of the columns and their spacing and padding differently than Firefox does. The result: images, which on Library of Dresan blog posts are always a maximum of 600 pixels wide, roll over the end of the column, also 600 pixels wide, making it jog out. You can see that in the stairstep on the second half of the image.
Part of this is my error; prior to my quick fix all browsers were showing at least slight stairstepping. But all browsers I tried - Firefox on the Mac, on Windows, and on Linux; Chrome on the Mac, on Windows, and on Linux; and even Safari on Mac and Windows - handled this correctly except IE, which widened the whole column. This made the three columns wider than the whole width of the container, and the third column had to jog halfway down the page so that it could fit, effectively becoming invisible to people just entering the site, unless they were willing to scroll a lot in the hope hidden features would leap up at them.
Now, I could have dug into CSS manuals and tried to fix this the "right" way, and indeed I plan to. However, there was a quicker, better way: experimentation. Before I even knew for sure what the problem was, I browsed to the front page of the Library on a Windows machine, downloaded the page to a local HTML file, and started hacking out parts of the file until something changed. I was very quickly able to show that there was nothing wrong with the right column itself; even reduced to a few lines and an image, it still wasn't showing up.
So I then went to my test file, the research page, which I had gotten to work in IE before I launched the style change to the whole Library. One difference I immediately noted between that page and the broken page was that its images were smaller; that's when I started to suspect the stairstepping phenomenon. That needed a fix in all browsers, so I simply made the content column slightly wider - from 600 to 610 pixels, fixing a gaffe I shouldn't have made in the first place - and widened the overall page from 1000 to 1024 pixels.
The result: it worked, in all browsers I have available to me right now. And, because my buddy Nathan had impressed upon me the importance of using CSS stylesheets, I was able to push the fix by simply uploading the revised stylesheet to the Library and reloading the page.
Shouldn't have happened - I shouldn't have made the column too narrow, Internet Explorer shouldn't be misinterpreting the white space, I shouldn't have pushed the template without testing it on Internet Explorer, and Blogger should have a better preview function so I could have tested it successfully offline without pushing it to the entire blog. But a quick fix was possible, because I used reasonably good site design practices, the scientific method, and a healthy supply of beans and vinegar.
-the Centaur
...the template looks fine in Firefox!
Grrr...
UPDATE: The template looks as intended in Firefox AND Chrome for Mac AND for Windows AND for Linux, and for Safari for Windows and Mac as well. Grrr...
Please pardon the dust, but I am doing that long-threatened major overhaul of the Library's templates.
I realized that I was waiting until the overhaul was "perfect" and that was putting the overhaul on hold. I've read too many things recently - about the telegraph, the transcontinental railroad, even about the creation of Google - in which immense success came from plucky people who didn't wait until things were perfect, or even necessarily known to be possible, before they threw their ideas up on the wall to see if they stuck.
So, I know my new template is not done, but it looks better than what I had before, and more importantly is more navigable. More work to do ... but for now, complain, and I'll fix it.
-the Centaur
I can be such an idiot sometimes ... or, put in other words, the right way to solve a problem is often much, much easier than the wrong way.
For example, if you're doing woodworking, you may use a modern steel clamp to hold a part tight to work on it. That sounds good and does the job. Of course, when you need to change the position of the part you must unscrew it, reposition the part and rescrew the clamp.
So far, so good ... but, according to David Petersen, the author of Mouse Guard, there is a better way. Petersen researched medieval woodworking equipment for his Eisner Award-winning comic and found there was a simpler scheme involving a foot pedal and a lever, which had equal gripping power but could release and reapply pressure in seconds just by lifting your foot.
Moral: newer and more complex is not always better.
Fast forward eight hundred and fifty years. Robert Kroese, a colleague at the Search Engine That Starts With A G, has his own book that he's working on, and an associated web site, Mercury Falls. On that site he has a form where readers can join an email list, and I thought, what a great idea! I should have a form where someone can join an email list on the Dakota Frost site.
So I started looking into it. To make the form work, you need not only a web form, which is easy to set up, but also some kind of server program on the back end which can accept the results of the form and a database to store it.
Historically, I've had bad luck with scripts and databases on my web sites: Earthlink / Mindspring basically welched on the scripting features of their web hosting that I was paying for, and my next provider, Tophosting, screwed up one of my databases.
So I was hesitant, and I started thinking. Then it hit me...
... there was a simpler way.
Instead of creating a form and the backend plumbing that goes with it, I should use the existing plumbing I had to achieve the same effect. What plumbing was already in place? A web site, a hosting provider, an ability to forward emails to a given address ... and a mail client with filters.
To make this work, I went to the GoDaddy control panel for Dakota Frost and set up a forwarding email: contact at dakota frost dot com. I had that sent to one of my catchall email accounts, and in Gmail I then set up a filter which collected all those email addresses into a single folder. Bam: problem solved.
Even if I want to do something more complex, this solution still works, as long as I keep looking at simple tools that are already available. For example, if I want an official email address list as a separate file, I could always download those email messages to the mail client of my choice, filter the messages to a folder, and grep over the email addresses in the file. For the scale at which I need to do it right now, the problem is still solved.
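For instance, assuming the filtered messages get exported to a local mbox file (the file name here is hypothetical), a few lines of Python would pull out the unique addresses:

```python
# Sketch: pull unique sender addresses out of an exported mbox file.
# "contact-list.mbox" is a hypothetical export from the mail client.
import mailbox
from email.utils import parseaddr

def collect_addresses(path):
    addresses = set()
    for message in mailbox.mbox(path):
        _name, addr = parseaddr(message.get("From", ""))
        if addr:
            addresses.add(addr.lower())
    return sorted(addresses)

for addr in collect_addresses("contact-list.mbox"):
    print(addr)
```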
Moral of the story: the more you overthink the plumbing, the easier it is to stop up the drain. Keep it simple, and things should just keep flowing without effort.
Or, to translate this back into development speak: there are two kinds of solutions: solutions which are easy to think up, but take a lot of coding effort to make work, and solutions which require thought, but which can be implemented in staggeringly small amounts of code.
In this one, we have an extreme example: to solve this problem the "no thinking" way would require an HTML form, a CGI script, a database, and considerable configuration on the server side of my hosting provider. To solve it the "no effort" way required some thought, but in the end less configuration of my hosting provider and a few minutes setting up some email filters.
You see the same thing in software libraries: really good libraries don't take a lot of code, but that doesn't mean they didn't take a lot of work. What happened behind the scenes was a lot of thought, wherein the library author searched the space of possible designs until he found a great one before ever publishing the library. You as the consumer don't see that effort, no matter how short or long it took: you only see the pure, parsimonious, elegant, efficient piece of code that remains.
If you don't put thought into what you're doing, you might try it sometime. You'd be surprised how little thought can get you substantially improved results.
-the Centaur
I have fond memories of many old Norton products ... the Norton Utilities, Norton Desktop, even Norton Antivirus in its early Symantec incarnations. But somewhere around the time that Norton/Symantec introduced Product Activation, things turned sour.
Not that that's the problem - even though I once had to pay for Symantec's Antivirus suite twice on my wife's computer because of an unresolvable error with the antivirus subscription. The problem wasn't so much the Product Activation per se, but that the software got into an unstable state which prevented it from accepting the subscription - which I could prove that I paid for - and eventually the only way to fix it was to nuke the site from orbit, reinstall everything, and pay again.
And therein lies the kernel of the problem: it's so easy for Symantec (née Norton) products to get into an unstable state, activation or no. There's the antivirus issue I mentioned. I once installed Norton Antivirus on a PC with Zone Alarm installed, and the two products got into a death match over which one was the "real" firewall even though I was not trying to install Norton's firewall features. There have been several other instances, most with Norton 360 Premier Edition, and now this:
You can't see it in the picture, but Norton 360 is frozen like a Canadian lake in winter. Recently our main backup drive for our Windows workstation died, and I replaced the old Maxtor with a larger Iomega drive. However, when I went to change the backup to point to the new drive, Norton locked up trying to determine ... what, I don't know. Files to back up? Looking for backup locations? It isn't clear. On the first try of this, it appeared to be frozen checking backup schedules:
It stayed there the whole time I was working on this article (up to this point). Right around the time I wrote that sentence, I finally killed Norton and restarted ... no dice. Now it can't even find the backup locations:
There is no excuse for software to be written this way by a professional company with collectively over 30 years of experience. This is the kind of crap I write the very first time I whip together a utility for a new operating system, before I learn where the blocking calls are. A program should never block on a dialog finding something as simple as a list of backup schedules, much less files or anything else. Modern computers have millions of cycles a second available to realize a call is taking a long time, present the list of items found so far, and give the user the opportunity to do something - which, in this case, would be me telling it to forget the old backup location and to try the new one. Instead, I get this, still frozen trying to find a list that could be easily cached, interpolated, discarded, supplanted, SOMETHING:
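The "present the list of items found so far" pattern isn't exotic, either. Here's a toy sketch of the idea - a deliberately slow, fake scan on a worker thread feeding partial results to a loop that never freezes. This is obviously not Norton's code, just the shape of the fix:

```python
# Toy sketch of "show what you've found so far": a deliberately slow, fake
# scan runs on a worker thread and streams partial results, so the front
# end (here, just a print loop) never freezes waiting for the full list.
import queue
import threading
import time

def slow_scan(results):
    for item in ["Backup Set A", "Backup Set B", "Backup Set C"]:
        time.sleep(2)           # stand-in for a slow disk or network call
        results.put(item)
    results.put(None)           # sentinel: the scan is finished

results = queue.Queue()
threading.Thread(target=slow_scan, args=(results,), daemon=True).start()

found = []
while True:
    try:
        item = results.get(timeout=0.5)
    except queue.Empty:
        print("still scanning...", found)   # stays responsive, shows progress
        continue
    if item is None:
        break
    found.append(item)
    print("found so far:", found)
```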
This goes to my overall rant on what's wrong with disk and networking software. Modern web applications like GMail have vast abilities to cope when servers are offline. Networking and disk operations, in contrast, are either blindingly fast, or pause for minutes or even hours, obviously befuddled but never bothering to pass that information on to the user. Someone, I can't remember who, wrote an article about this a few years back, pointing out that it was all related to design decisions we'd made early on in computing that are wrong. He sketched out how you could design a computer to never effectively lose data, even if you powercycled in the middle of writing an essay, by changing how we think about saving data. I'll dig up the essay, but for right now, we STILL have THIS, frozen in the same place:
At the time of this writing I've spent almost THIRTY MINUTES waiting on Norton to perform what should have been a two minute operation: changing a backup disk and starting the new backup. This makes my problems with Apple's Time Capsule look trivial. By the way ... Time Capsule is working perfectly now. Time to switch my wife to the Mac?
-the Centaur
Postscript: we went and worked out, and this window was still up, an hour and a half later. I took the pictures I needed for this article; then, I did what I hate to do: asked my wife to log out (in her session, she was working on proposals and had half a dozen windows open) and rebooted the machine. When we returned, Norton worked just fine. I started this article feeling nostalgic for Norton/Symantec's older products; well, Norton gave me what I wanted, and took me all the way back to 1995, when you had to reboot to do anything.
It's all right, Peter. We still love you. This isn't your fault, nor is it necessarily something that the hardworking people at Symantec could have fixed in this instance. But if I don't complain, you'll never know anything was wrong.
"Now that's what we call a computer crash..."
[youtube=http://www.youtube.com/watch?v=ZXUIlULqUGQ]
More seriously, this is why when you really want to film something you need two or three different cameras. This really cried out for three: one closeup on the computer, one long shot on the shooting range to see it fly in the air, and one on the shooters.
-the Centaur
UPDATE: I had a discussion with friends, and there are at least two things the people in this video are doing that make them a hazard to themselves and others:
- They're TRAP shooting with RIFLES!
From one friend: "This will probably surprise everyone, but in my opinion these guys are complete morons because they are endangering others. They are "trap shooting" with rifles! I think I saw one shotgun in the whole video. I'm sure my gun enthusiast friends will agree with me that unless these guys are at least 3 miles from any other people (and even in the deep woods of Tennessee, you can't possibly be sure of that) they are endangering others by firing high-powered rifles into the air. As an example, a 30-06 rifle aimed at a high elevation can fire a round about 2.5 miles. Interestingly, the maximum range occurs at about 35 degrees elevation, not 45 degrees as one might think. When the round returns to earth, it's still moving at around 500 fps, which is fast enough to kill someone."
- They have NO IDEA of EXPLOSIVE SAFETY:
After reading that, I remembered something else bugging me and I went back and found it. Watch the video again closely for the following gem around 1 minute in: the guy fills the test chamber with explosive and a fuse, he tamps it in with a stick and wooden hammer, then he puts his body over the chamber when putting the books on it. Now, the first time that I watched this, I thought he tapped the whole wooden shaft into the hole, but you can see it lying on the ground later. Regardless, he's putting himself in the line of fire with no thought of what might go wrong. From the other poster: "Yeah, I noticed that one too. I bet if it blew and tossed him into the air, his buddies would instinctively start firing until the smoke cleared and they realized it was him!"
I certainly would agree about the trap shooting and with the care needed with black powder and fuses. There is no way to know, of course, but the woods in the background look pretty dense. If it's all private property it could go for miles. Still, I wouldn't do that stuff with my rifles.

Ok, it's all fun until someone loses a loved one. Be safe, all.
Earlier I blogged about how, to succeed at work or in life, you need to work just a little bit more than you want to. I mean that 'little bit' literally: not working yourself to death, not a whole lot more, just that little bit more that can turn your day from one of frustration and failure into one with a concrete achievement.
Your mileage may vary, of course, but for me the point when I really want to give up is frequently just before I am about to reach one of my goals. All I need to do is hang on just a little bit longer, keep working just a little bit harder, and very frequently I'm rewarded by more than I could have expected.
Today this was once again confirmed. I got in late and decided to work until 7, which was coincidentally both what I felt made for a good solid workday and about the time I would need to leave to make sure I could get some dinner and writing done.
But work was slow going: I'd recently switched to a new project but was stuck with some old tasks, and the mental gear switching, combined with some syrupy new software on my workstation, kept dragging me down. On top of that, one of my collaborators dropped in with a request for assistance putting together an evaluation, and since I owe him a few I worked on a scripting job for him while I was between compiles of the unit tests of my main task for the day.
7 rolls around, and I'm just about spent. I decide to call it a day, start to pack things up, and begin thinking of where I can go for dinner and what I need to be working on: my new novel, an illustration for my last novel, my web site.
And then I remember that blog post, and decide to push just a little bit harder.
In just 23 minutes, I got both the unit tests to pass on my main task AND finished a first trial run of the scripting job, complete with an automatically generated HTML page. With that, I was able to find a 'problem' with my script, spent about 20 more minutes debugging it, verified it wasn't really my script's problem, and fired off an email to my colleague telling him where to find the HTML for his evaluation, and asking him had he ever seen an error like that and did he happen to know how to fix it?
By 7:45, I'd closed up, walked out, and headed for Panera Bread. By the time I was done with my sandwich, I'd gotten an email back from my collaborator suggesting an easy workaround for the problem that I can implement with a one line change. I might even be able to start it up tonight to run overnight - meaning that, God willing, I will have completed by Tuesday morning a task I told my collaborator I couldn't even start until maybe Wednesday.
YES! By working just a little bit harder, I turned a frustrating day into a complete success - and freed my mind this evening to work on more creative tasks. I recommend it to all of you.
-the Centaur
I easily could spend 8 hours a day blogging. There's just too much to write about; I don't know how people like Andrew Sullivan and Warren Ellis write so much. No, wait, I do: they're paid to write, dang it, whereas I am paid to make search engines smell better and must squeeze my writing in around the margins.
Recently I started work on redesigning the templates for the Library, and in my giant Mongo death Todo list I have an entry "blog updates to library". But I never got around to writing the article, because I kept on getting confused about what to write first.
Then I realized that's part of my problem. The point of blogging the redesign of the Library was to expose the thought process that normally goes into the redesign of any web site, rather than hiding all of the hard work behind the covers, springing it fully formed onto the world, and proclaiming: "See! Doesn't it smell better?"
So here's the thought process that was blocking me from writing articles on the Library:
- Anthony looks at Todo list, sees entry "Blog Update" and tries to figure out what to do with this horribly underspecified action item with no clear next action. Somewhere out in cyberspace, David Allen kills himself, then spins in his grave.
- Anthony decides "I've got a prototype for the new design of Library now! I just need to post the darn thing and get on with it!"
- Anthony starts work on cleaning up his Blogger template. During this process he finds he needs to figure out precisely what his Blogger template is doing, as he no longer remembers and the code is poorly documented.
- Anthony comes up with a clever way of visualizing how his Blogger template works which itself is probably worth blogging about.
- Then Anthony realizes that he doesn't know whether the design works well with Internet Explorer on Windows, or Chrome, or on small screens (notwithstanding my desire to support only large screens), or on super large desktop screens with different sized fonts.
- This leads to more questions: What browsers should this work well on? How should I test this? What if there are fundamental incompatibilities between IE and Firefox?
- Well, shazbot. I decide, screw it, let's just fix a small page somewhere and update that. So I update the Research page, which already needed an overhaul of its research statement.
- Anthony finds a system to help him test and prototype his content which is worthy of blogging about in its own right.
- The textual update goes swimmingly, but updating the CSS and HTML proves more of a bear, especially comparing Internet Explorer and Firefox.
- Anthony's system for updating the content starts to show failures which are worthy of blogging about in their own right.
- Well, shoot, now what do I do?
But the point of this blogging exercise is NOT to go off and hide and try to figure these things out, then come back smiling with a solution. Instead, when I get stumped, that is a serious decision point in the development process and I'm SUPPOSED to write an article which says, here's what's on my plate, and boy did I get stumped.
So this is that article. And just articulating the things going through my mind gave me a sequence of things to do: now I can blog each of the elements on that list and show how I encountered the problem, how I tackled it, and how I got to a solution.
-Anthony
Recently I've been getting a lot of pointless "spam" with a reasonable-sounding subject line but a body that says only "podmena traffica test". Mysterious, and pointless from a spam perspective; so I assumed it was some automatic program testing a variety of addresses to see which ones bounced.
Finally I decided to track it down, and while I don't know for sure I've now heard a good hypothesis:
There seem to be some strange spam emails doing the rounds, with a body text of "podmena traffica test".. what gives? It makes a bit more sense if you transliterate it into Cyrillic, which leaves you with a Russlish phrase "подмена трафика тест" and that simply translates as "spoofing traffic test".
Trying to verify his logic: transliterating "podmena traffica test" into Cyrillic gets me "подмена траффица тест", as predicted, and translating that back to English gets "substitution traffitsa test", which is close enough.
The specifics of the message I'm seeing don't match the description in that blog post, but it's enough to make me think that the author has nailed it: it's a Russian spammer testing out addresses and more importantly web servers.
Mystery solved! Now quit it, spammer guys.
-the Centaur
Update: I keep getting this spam. I have now received this spam almost 60 times in the last month, according to Gmail.
I use transparent windows for my terminals, so you can see through the text to the other windows or desktop background beneath. I've done it for a long time on Linux, just as a neat trick to show the desktop beneath whatever I'm typing, but it works better if you can see the actual window beneath the terminal. On Windows you can do this with various add-ons (I don't remember which one worked best, so sorry, no link), but it works well natively on the Mac, and recently with Compiz I've gotten it to work the way I want on Linux.
Combined with the microscopic fonts I like, this makes my screen hard to read for others; one of my collaborators used to insist I make the windows opaque and increase the font size so he could see them. So why do I do this? Even the Mac OS X tips page that tells you how says it "has no serious purpose" except to make your windows look pretty.
Well, I beg to differ. This screenshot shows why:
Here, I'm working on some Python code to automatically generate a list of labels for my web site. I've never used the Python FTP library before ... so I just Googled it, found the Python doc page, and began prototyping my code straight at the Python prompt, looking through the terminal window to see the sample code beneath it.
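The prototype really was just a few lines at the interpreter prompt - roughly this kind of thing, where the host, login and path are placeholders rather than my real configuration:

```python
# Roughly what I was typing at the prompt: list the label directories on
# the site over FTP and emit a simple HTML list of links. The host, login
# and path are placeholders, not my real configuration.
import ftplib

with ftplib.FTP("ftp.example.com", "username", "password") as ftp:
    for label in sorted(ftp.nlst("/labels")):
        name = label.rsplit("/", 1)[-1]
        print(f'<li><a href="/labels/{name}">{name}</a></li>')
```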
Mmmm. Composity goodness, captured via Mac's Command-Shift-3 screenshot keystroke and edited with Preview. If you program at the command line you should try it - your eyes train up pretty quickly to ignore whatever's behind the terminal unless you need it.
-the Centaur
For the longest time I've tried to design the websites I'm responsible for to be pretty narrow - the Library of Dresan is supposed to be just over 800 pixels, fanu fiku just over 700 pixels, and Studio Sandi just about 800 pixels. The rationale was that in the olden days of the Internet, users' screens were as small as 1024 by 768 or even 800 by 600, and even users who had wider monitors displayed their browser in a window that didn't take up the full screen.
I remember reading an article (I don't remember where) that pointed out with browser sidebars and chrome, the width of the page could be far less than monitor width. I measured it on my circa-2000 screen and I found that I had about 800 pixels of width for the web page. So that led to the design of the Library: 800 pixels of width, 600 for the main content and 200 for the sidebar. The banner itself was a little over 1000 pixels so that it didn't end abruptly if the user made their screen wider.
But that was almost ten years ago. Does that logic still hold?
Many people view the web on laptops and phones. Dealing with phone resolution will require more than just dealing with screen widths, so I'll return to it in a later article when I tackle the CSSification of the Library. But a quick search suggests that typical laptop screen widths range from the 1024x768 XGA standard to the 1440x900 WXGA+ widescreen standard. There are some people who have smaller laptop screens, of course, but they are in the minority. Conversely, screens do get larger: for example, for many years I owned a glorious Toshiba Satellite laptop with a 1600x1200 screen. But on those larger screens users often use smaller windows for their browsers: for example, on this MacBook Pro, with a 1440x900 screen, I'm only using a little more than 1200 pixels for the browser window - and typically I use narrower windows.
So something more than 800 and less than 1400 appears to be a good guess. Discussion on the web seems to indicate people are starting to give up on the 800 width and moving to 900 or more, but rarely more than 1024.
Digging around, I found more articles with the same idea - Mario Sanchez argues that the goal of web site width is to avoid horizontal scrolling, and recommends you design your web site for 800 pixels, with a layout that works well at 1024. Jakob Nielsen recommends outright that you optimize your site for 1024, but not design for a specific size: instead, let your layout be "liquid", changing width to fit your users' monitor sizes. Personally I think this breaks down if you have images to display, though I reserve the right to be convinced otherwise by CSS wizardry at a later time.
All of the above are opinions, of course; what about the evidence they're based on? The Steam Hardware Survey put out by Valve Corporation suggests that 95% of users use screens 1024 pixels or wider, with fully 50% at 1024x768, 1280x1024, or 1440x900. Similarly, the Browser Display Statistics analysis by W3 Schools indicates 36% of users have a display resolution of 1024x768 ... and 57% have higher. Update: I checked the Library's own stats, and found that Google Analytics does indeed track screen resolutions. Less than 5% of all users had a resolution below 1024x768, and only 1.5% had a resolution below 800x600. Of that, 0.5% were listed as no resolution, leaving 1% at 640x480. Those numbers will come back later...
Take all that with a grain of salt, given that some significant percentage of users run browser windows smaller than their full monitor resolution - Nielsen points out in the same article I mentioned above that as resolutions get staggeringly large (he predicts 5000x3000 in the future) users begin to display multiple side-by-side windows. True enough, at the Search Engine That Starts With a G, all of my officemates have dual monitors with an aggregate resolution of 2400x1920, but none of us typically displays a browser window larger than half the screen - 1200 pixels, minus chrome, or less if we've shrunk the window to see other windows underneath.
So that leaves me with the feeling that Nielsen and Sanchez are essentially right. My personal take on it for the Library is:
- Your website should display well in no more than 1024 pixels of width. You may use a "liquid" layout that can expand to use more space, but it should not require more than 1024 pixels to display.
- The essential content of your web site should fit into the leftmost 800 pixels of width. If you are displaying graphics or images or have a lot of site widgets, some of these features may scroll off to the right on an 800x600 screen. Don't put anything essential on the right. Your mileage may vary if you are creating a web site for right-to-left languages, of course.
- Make sure your "liquid" layouts don't break down on very wide or narrow screens. A user who displays a very wide window on a 2400 pixel wide screen should not see all your paragraphs turn into long marching lines of text - these can become hard to read. Similar problems can happen when a screen is squeezed very small - for example, Wikipedia used to display terribly on certain mobile phones, creating vast blank spaces for the user to navigate through.
The new design for the Library uses around 1000 pixels, with the leftmost 600 for text (to satisfy the 1% of people who are still stuck at 640x480), the next 200 for site navigation (for the less than 5% stuck at 800x600), and the remaining 200 for everything (and everyone) else: search boxes, author pictures, and Flickr badges; in short, anything less important than the articles and navigation features. Technically this is not a "liquid" layout, but hopefully it will be something the vast majority of users can enjoy with little scrolling, and something that other users can appreciate without feeling left out.
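To make that concrete, here's a rough sketch of those column widths in CSS - the id names are made up for illustration and aren't the Library's actual stylesheet:

```css
/* Illustrative only: a fixed ~1000px layout with three floated columns. */
#page    { width: 1000px; margin: 0 auto; }
#content { float: left; width: 600px; }  /* articles - fits even a 640x480 screen */
#nav     { float: left; width: 200px; }  /* site navigation - fits 800x600 */
#extras  { float: left; width: 200px; }  /* search boxes, badges, everything else */
```

Swapping those fixed pixel widths for percentages (capped with a max-width) is what would make the layout "liquid"; for now, the fixed columns keep the banner and images predictable.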
-the Centaur