Friday, March 25, 2005
Da Stand
Ya know, as one of the head-freezing set I'm normally not one to critique bleeding-edge science. For example, I'm all in favor of the atom smashing going on at the Brookhaven Relativistic Heavy Ion Collider (RHIC), where they're smashing gold nuclei together at roughly 99.995% of the speed of light in an attempt to create "quark-gluon plasmas" ... that *just* *might* have less than a 0.00001% chance of creating a new big bang and destroying the universe.
Why am I not afraid? Because of ultra-high-energy cosmic rays, which regularly bombard the Earth's atmosphere with far, far more energy than we can make in our measly human particle accelerators.
So if the universe could end over something like this, it'd be ending all the time.
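To put rough numbers on that argument, here's a back-of-envelope sketch in Python, using textbook figures rather than anything official from Brookhaven. RHIC's gold beams carry about 100 GeV per nucleon each, so a head-on collision makes about 200 GeV per nucleon pair available. The most energetic cosmic ray ever recorded, the 1991 "Oh-My-God" particle, hit the atmosphere with about 3 x 10^20 eV. Even granting that a cosmic ray strikes a nucleon essentially at rest, so only part of its energy is available in the center of mass, nature still beats us by thousands of times:

```python
# Back-of-envelope comparison: RHIC collisions vs. ultra-high-energy cosmic rays.
# Values are textbook figures, not from any official Brookhaven safety review.
import math

PROTON_MASS_GEV = 0.938           # proton rest energy, in GeV
RHIC_BEAM_GEV_PER_NUCLEON = 100   # each RHIC gold beam, GeV per nucleon

# Head-on collider: the center-of-mass energy per nucleon pair is simply
# the sum of the two beam energies.
rhic_sqrt_s = 2 * RHIC_BEAM_GEV_PER_NUCLEON   # 200 GeV

# Cosmic ray: the "Oh-My-God" particle carried ~3e20 eV = 3e11 GeV, but it
# struck a nucleon essentially at rest, so only the center-of-mass energy
# sqrt(2 * E_lab * m) is available for making new particles.
cosmic_ray_lab_gev = 3e11
cosmic_sqrt_s = math.sqrt(2 * cosmic_ray_lab_gev * PROTON_MASS_GEV)

print(f"RHIC:       sqrt(s) ~ {rhic_sqrt_s:.0f} GeV per nucleon pair")
print(f"Cosmic ray: sqrt(s) ~ {cosmic_sqrt_s:.0f} GeV per nucleon pair")
print(f"Nature beats us by a factor of ~{cosmic_sqrt_s / rhic_sqrt_s:.0f}")
```

And cosmic rays at accelerator-beating energies have been raining down on the Earth, the Moon, and every other rock in the solar system for billions of years, with no new big bang to show for it.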
I don't have such a cavalier attitude towards the attempt to mix bird flu and human flu, in which "what health officials fear most about bird flu ... acquiring genes from a human flu virus ... is getting under way in a high level biosecurity laboratory."
Why does this worry me, when news that the Brookhaven collider might have been spawning black holes does not? It's simple. The mixing of genes between pathogens, while it does happen in nature, is rare, and it hasn't yet happened between avian and human influenza. So while I don't worry about something dangerous that MIGHT happen as a side effect of a phenomenon that DOES happen all the time, as at Brookhaven, I do get worried about deliberately creating something that COULD happen but DOESN'T, precisely because you're worried that it MIGHT. If you're really that worried that avian flu and human flu might combine into a world-ravaging pandemic ... why are you trying to make it happen?!?
Now, in reality, I'm not saying stop work. By all means: if they can figure out what makes avian and human influenza tick and develop a proactive vaccine, more power to them. But ... haven't these guys seen or read The Stand?
I'm not joking around here. Tom Clancy's Debt of Honor included (spoiler alert) terrorists crashing an airliner into the Capitol building in Washington, long before 9/11. Ignore the crazy conspiracy theories; the point is that what happened on 9/11 should not have come as a great surprise to anyone. We should have been prepared for it from the get-go, because a known failure mode had already been examined and exposed. Or maybe the larger point is that speculative fiction enables us to wargame possible scenarios and prepare ourselves to deal with them ... or avoid them.
True, you can't live your life based on what you read in novels. For example, David Brin's Earth presages the kind of work going on at Brookhaven ... illustrating graphically the disasters that might happen if a man-made black hole fell into the Earth's core. I'm not worried about this, because I know that at the size range of the black holes we might create, quantum effects dominate over gravitational effects, and any such black hole would evaporate before it had a chance to eat anything. This mental model of physics helps me understand what's going on at Brookhaven and gives me the confidence to give them a pass on what they're trying to do. So while fiction can spark our imaginations, it can't replace thought: we need to develop mental models of the situations at hand and apply them rationally.
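For the curious, here's the arithmetic behind that confidence, as a rough sketch in Python. The standard semiclassical result gives a Hawking evaporation time of t = 5120 pi G^2 M^3 / (hbar c^4); feeding it a black hole whose mass-energy is the entire energy of one RHIC collision (call it roughly 40 TeV, my ballpark rather than an official figure) gives a lifespan around 10^-83 seconds. Strictly speaking, the formula breaks down below the Planck mass, so treat this as illustrating the scale rather than computing a real number:

```python
# Rough sketch: semiclassical Hawking evaporation time for a black hole
# with mass-energy ~40 TeV (roughly the total energy of one RHIC gold-gold
# collision; my ballpark assumption, not an official figure).
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
HBAR = 1.055e-34     # reduced Planck constant, J s
C = 2.998e8          # speed of light, m/s
EV_TO_JOULES = 1.602e-19

def hawking_lifetime(mass_kg: float) -> float:
    """Semiclassical evaporation time, t = 5120 * pi * G^2 * M^3 / (hbar * c^4).

    Note: not really valid below the Planck mass (~2e-8 kg); used here
    only to illustrate the scale.
    """
    return 5120 * math.pi * G**2 * mass_kg**3 / (HBAR * C**4)

collision_energy_joules = 40e12 * EV_TO_JOULES   # ~40 TeV
mass_kg = collision_energy_joules / C**2         # E = mc^2

print(f"Black hole mass:  {mass_kg:.1e} kg")
print(f"Evaporation time: {hawking_lifetime(mass_kg):.1e} s")
# Prints something like 3e-83 seconds: gone long before it could cross
# even a single proton radius, let alone fall to the Earth's core.
```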
This is precisely the same kind of mental model I use in my work in artificial intelligence: knowing what artificial intelligence is and how it works enables me to realistically gauge the risks (creating a superintelligent machine, for example, is a very low-probability event!) and to take steps to mitigate them appropriately. Namely: don't put any machinery that can take human life in the hands of your intelligent machines (much less Mankind's entire collection of nuclear weapons, as people seem to want to do in the movies)!
This is not because artificial intelligence researchers fear a superintelligent machine taking over a la Colossus: The Forbin Project or The Terminator. Quite frankly, we'd be tickled pink if a superintelligent machine showed up, but we're not holding our breath waiting. No, the reason you don't put dangerous machinery in the hands of intelligent machines is the far more realistic (and already realized) fear that a so-called "intelligent" machine will make a stupid mistake because of bad programming and hurt somebody. In reality, the machines we can make are just not smart enough to take over the world; they're not even smart enough to realize what they're doing.
So. Perhaps the workers at the CDC combining the avian and human influenza strains have a similar mental model of epidemiology, one which enables them to assess the risks realistically and structure their work appropriately. Let's hope so.
Otherwise...
We're very concerned.
-the Centaur