Tuesday, December 9, 2008

Uh Oh Nano, Part 2.

(The following essay was written two years ago. It is being posted now in connection with research released this week in Nature Nanotechnology showing that people's cultural self-identification also contributes to their perceptions of risk, in this case the risks that may arise from nanotechnology. For a summary of that research and a citation to the original journal article, see http://www.sciencedaily.com/releases/2008/12/081207133749.htm. For an earlier post on nanotechnology, see the second post at http://onrisk.blogspot.com/search?updated-min=2007-01-01T00%3A00%3A00-05%3A00&updated-max=2008-01-01T00%3A00%3A00-05%3A00&max-results= )

Nanotechnology, the capacity to manipulate materials at atomic and molecular scales, holds promise bounded only by the human imagination. But if this promise is to be fully realized in ways that respect potential harms to human and environmental health and safety, serious and ongoing consideration needs to be given to the way the public will react to nanotechnology and all its specific applications. For despite its benefits, if public apprehension builds, the potential of nanotechnology could be severely limited. The findings of the field of research known as risk perception offer valuable insights into what that public reaction might be. Understanding those potential reactions will allow proponents of nanotechnology to respect public concerns, address them through more effective risk communication, and advance the prospects of the entire field.
The Greek Stoic philosopher Epictetus observed, “People are disturbed, not by things, but by the view they take of them.” Indeed, researchers including Paul Slovic, Baruch Fischhoff, Sarah Lichtenstein, and others have found that risks seem to have shared characteristics which, quite apart from the scientific facts and statistical probabilities, play a key role in making us more or less afraid. These affective/emotional characteristics are a fundamental part of how we frame our worries. They essentially form the subconscious backdrop by which we “decide” what to be afraid of and how afraid to be.
Perhaps the most important of these characteristics is the matter of trust. As leaders in the field of risk perception and risk communication have found, the more we trust – the less we fear. And the less we trust – the more afraid we are likely to be. Trust will almost certainly play a key role in the acceptance of, or resistance to, nanotechnologies.
Trust is determined by many factors. It is determined in part by who does the communicating. The facts being presented could be the same, but the trustworthiness of the communicator will help determine how worried the audience will feel. For example, people who learn about nanotechnology from the chemical industry, which is less trusted according to many polls, are more likely to worry than people who learn about it from health care professionals like doctors or nurses, more trusted professions.
Trust is also established through honesty. BSE affords a good example. In Japan, after the first indigenous infected cow was found, the government promised there would be no more. A second cow was found just days later. The government then said it had created a ban on feeding ruminant protein back to healthy cows – which is how the disease spreads – only to have the press learn and report that the “ban” was only voluntary. The press also reported that the government had kept secret an EU report rating Japan at high risk for BSE. These less-than-honest statements by the government badly damaged trust and fueled much greater fear in Japan than in Germany, where, within about a month of the discovery of the first indigenous infected cow, two cabinet ministers were sacked and changes were proposed to make agricultural practices more natural and less mechanical. Beef sales rebounded quickly in Germany, unlike in Japan, partly because of the different degrees of honesty on the part of the two governments.
Trust also grows from an organization’s actions. Again BSE affords an example. In Canada and the U.S., after the first infected cows were found, the governments were able to point out that they had long ago instituted a feed ban and other restrictions to keep the risk low. As much as citizens might not have trusted the government in general, in this matter the responsible agencies had demonstrated their competence in keeping the risk low. This demonstration of competence probably played a role in the relatively minimal impact on consumer beef sales experienced within each country.
Trust is also established when an organization respects the reality of the public’s fears, even though there may be no scientific basis for those fears. In the U.S., authorities withdrew from the market muscle meat that had come from the slaughterhouse that processed the BSE-infected cow, despite the scientific consensus that muscle meat is not a vector for BSE. The U.S. Department of Agriculture said it was acting “out of an abundance of caution…”. In other words, it was acknowledging and responding to the reality of the public’s fear, doing something that did not reduce the physical risk but did reduce public apprehension. The implicit message in such actions is that the agency was not being defensive, but was being responsive to public concern. That kind of action encourages people to trust such an agency, more than when the message is “There are no scientific reasons for your fears. The facts as we see them say there is no risk. So we are not going to act.” This potentially damaging message has already been heard from a small number of scientists in the area of nanotechnology.
Trust is also established by sharing control. In the case of nanotechnologies, this could include shared control over the writing of public health and environmental risk regulations, or shared control over the development of societal and ethical guidelines. The more that people feel they have some control over their own health and future, the less afraid they will be. The same risk will evoke more worry if people feel they have less control.
Trust is also built by openness. In the development of nanotechnologies, this should include dialogue with various stakeholders, a fully open exchange of scientific data, open government regulatory development, and open discussion of societal and ethical issues, among other areas. The more that people feel they are being deceived, lied to, or manipulated, the more afraid of a risk they are likely to be. Openness reassures them that they can know what they need to know to keep themselves safe. An open process is inherently trust-building.
Trust will be difficult to establish as nanotechnologies develop, because the driving forces behind such development will be principally commercial, industrial, corporate, and government, and the politically and profit-driven sectors of society are, de facto, perceived to be out for their own good more than they are out to serve the common good. So special attention and effort must be paid to establishing trust in everything that a government, a business, or a scientist does while working on commercial nanotechnological research, development or application.
But there are other risk perception characteristics that could bear on public acceptance of, or resistance to, nanotechnologies.
People tend to be more afraid of risks that are human-made than of risks that are natural. Nano is almost certainly going to be perceived as a human-made technology.
We tend to worry more about risks like nanotechnology that are hard to comprehend because they are scientifically complex, invisible, and not yet completely studied and understood. This could well prompt calls for a stringent application of the Precautionary Principle, and proponents of nanotechnology would be well advised to give serious consideration to such calls until a reasonable amount of safety data has been developed.
We tend to worry more about risks that are imposed on us than about risks we knowingly choose to take. Nanotechnologies will provide finished materials in some cases, which could appear on the label of a product to alert consumers and give them a choice. But in many cases, nano substances will serve as intermediates, raw materials, or catalysts, substances that cannot be labeled, and that therefore could evoke concern because people will be exposed to them without any choice.
We worry more about risks that are new than the same risk after we’ve lived with it for a while. While carbon black and some nano materials have been around for a while, many nano materials and products are new, with different behavioral characteristics than anything we’ve ever known. And of course the precise ability to manipulate things on a nano scale is new. This too could feed greater public apprehension about this technology.
And we tend to worry more about risks from which we personally get less benefit, and vice versa. For some nanotechnologies, for some people, the personal benefits may well outweigh the risks. But when they don’t – or when the benefits principally accrue to someone else - fear and resistance could rise.
It is important to respect the reality and the fundamental roots of these perception factors. They cannot be manipulated away or circumvented with a clever press release, a website, or a few open public meetings and dialogues. Research on the brain has found that it is constructed in such a way that external information is sent to the subcortical structures that generate a fear response before that information gets to the part of the brain that reasons and thinks “rationally”. In short, we fear first and think second. No press release can undo that biology. Because of some of the characteristics of nanotechnology listed above (human-made, hard to understand, imposed, new, lack of trust in industry), it is quite likely that if the first way people hear about it involves some hint of threat or negativity, the initial reaction many people have will be worry and concern.
Moreover, the brain is constructed such that circuits stimulating a ‘fear’ response are more numerous than those bringing rationality and reason into the cognitive process of risk perception. In short, not only do we fear first and think second, we fear more, and think less.
Again, this suggests that the first impressions many people form of nanotechnology are likely to be dominated by caution and concern.
Fortunately, both the biology and psychology of risk perception have been fairly well characterized. Insights from those fields can guide the design of research to find out how people are likely to react to nanotechnology as it becomes more common, is introduced into their lives, and as it gets more and more attention in the press, a trend already beginning in many places. Research that understands how people are likely to react is the first step toward designing risk management strategies, including risk communication, to address public concerns.
It is imperative that such research be done soon, so it can be used to develop risk management and risk communication strategies that maximize public understanding of nanotechnology and public participation in the process of its development and implementation. With these steps, the potential of this remarkable field can be more fully realized while respecting public concerns and ensuring public and environmental health and safety.

Tuesday, April 22, 2008

WORLD DESTROYED!!!

Scientists from around the world are eagerly awaiting the first experiments this summer at the Large Hadron Collider in Switzerland, which will smash subatomic particles together to try to replicate conditions in the universe a trillionth of a second after the Big Bang. Some say the experiments could create a black hole and destroy the earth. Scientists dismiss those fears as irrational. It's a classic collision of the two ways we humans try to sort out the risks we face.



This just in, from Geneva, Switzerland.
The world has been destroyed, consumed in the infinite gravity of a black hole triggered by an experiment at the Large Hadron Collider. All life on earth was extinguished and the earth itself was crushed down to the size of an atom.
However, scientists running the experiment say that, as predicted by the laws of quantum physics, our former world and all life as it existed were instantly replaced with identical copies. “What's the big deal? Nobody even noticed,” said Dr. I. M. Smart, a leader of the science team conducting the experiment. He added "It's just as we predicted. People have to start trusting scientists and stop worrying when we tell them they're safe."
Scientists say the experiment identified a new condition of matter, which they have named the Significant Major Unknown Gyration, or SMUG. “We’re very excited,” Dr. Smart said. “Discovery of Smugness has taught us important new things about the creation of the universe, even if it did require the fleeting destruction of the world. That’s just how science progresses.”
The experiment survived several lawsuits seeking to avoid the destruction that occurred this morning. The plaintiff in those suits, Walter Whiner, could not be reached for comment on the outcome of the experiment. His wife said he disappeared at 4:13 a.m., precisely the moment the experiment began.
Police report that a number of other people are missing. Officials in Cincinnati, Ohio say the entire staff of The Creationism Museum disappeared during a conference entitled “Darwin was a Communist”. Police in Washington, D.C. are searching for Bette B. Scared, founder of “Vaccines Cause Autism”. Australian police say they are searching for Bea Afraid, author of “The Only Safe Risk is ZERO Risk” and a well-known opponent of genetically modified food.
Dr. Smart denies any connection between the disappearances and the momentary destruction of the Earth caused by his experiment. “Under the laws of super strong theory we predict with a 99.99% confidence interval that they should look for these people in Dimension X,” he said. Smart added "We call it The Irrational Dimension. Which isn't so different from where we think they've been living all along.”

Following the momentary destruction of the world, attorneys rushed to file several class action lawsuits. The first was filed just 45 seconds after the destruction event by attorney Sue Everyone of the law firm of Screwem, Ligh, and Profit, who said, “This is the most egregious case of arrogant scientists run amok in the history of mankind. It doesn't matter that we may have unlocked the mystery of how the universe was created. My clients, who include anyone on the planet who was alive at 4:13 this morning, were harmed when a nanosecond of their lives was taken away from them.” Everyone is claiming infinite punitive damages.

Officials at the Large Hadron Collider say their work will continue. Critics have already filed legal action to stop an upcoming experiment which they say could set the Earth on fire. Scientists say their work is safe. They call the critics irrational. The critics say the scientists are arrogant and aren't taking the risk seriously.
The court hearing on the upcoming experiment will be held next month on Friday the 13th.

Friday, April 11, 2008

The Conflict Between Nuclear Fears and Nuclear Facts

Mark Twain said "I am an old man and have known a great many troubles, but most of them never happened."

We worry about a lot of things in this threatening world, and as Twain suggested, sometimes our worries are based more on our perception of the facts than on the facts themselves. So what's a government to do when people think they have been harmed by something and want the government to compensate them, but the evidence says they're wrong?

Well, the Japanese have just answered that question with a resounding "Pay them anyway."

This is about the survivors of the atomic bombs dropped on Hiroshima and Nagasaki. They are known in Japan as the HIBAKUSHA, people who lived within about two and a quarter miles of ground zero or visited those areas in the days after the blasts. There are roughly 250,000 of them still alive. Their average age is 64.

HIBAKUSHA rightly get all sorts of special government benefits. Those who suffer from five diseases connected with radiation are also eligible for special medical benefits. In 2001 the government established science-based standards to determine who qualifies, since many HIBAKUSHA who lived farther from the center of the explosions received practically no radiation dose at all. That science-based approach basically said that the farther away from ground zero you lived, the less radiation you were exposed to, and the less likely it is that radiation caused your illness. (Remember, lots of elderly people get cancer, like skin cancer or prostate cancer, and radiation has nothing to do with it.) Under that standard, 99 out of every 100 atomic bomb survivors who applied for special medical benefits were turned down.

Three hundred of them sued the Japanese government, and those lawsuits got a lot of attention in the press. Political pressure built on the government. Then-Prime Minister Shinzo Abe was already in political trouble, so he promised a new system.

That new system was just announced and, basically, it says the facts don't matter as much as people's concerns. Now any HIBAKUSHA who is sick gets paid, no matter how far they lived from ground zero and no matter whether they received any radiation at all. As one member of the government panel that wrote the new rules said, "Fear is particularly high about radiation. It's more important to support the HIBAKUSHA regardless of the lack of scientific evidence."

That FEELS good, FEELS fair. But think about it. By that standard governments should be paying people who are worried about electric power lines, or artificial sweeteners, or silicone in their breast implants, or autism from vaccines, or brain tumors from cell phones…or any number of other risks which many of us fear, fears not supported by the facts.

Worrying too much can cause stress, and stress hurts our health. So unlike many of the things we fear, the risk from fear itself is real, and needs to be respected in government policy. But the Japanese policy goes so much further than that. They've basically said that emotions trump science, and in a world in which so many of us are worried about so many things, a policy like that could turn out to be a truly troubling thing.

Tuesday, April 8, 2008

This is Your Brain on Fear

So I was walking my dogs in the woods the other day and it happened again. There was this long skinny curving line on the ground, and my rational brain said “That’s a root” and my animal brain screamed “SNAKE! SNAKE!”, and the animal brain won. I froze.

This happens to me all the time, which is dumb for three reasons. First, I KNOW it’s not a snake. Second, this is what I study and teach…the way we human animals perceive risk and how to communicate about risk better, so I should be able to overcome this apparent irrationality. And third, every time it happens I tell myself not to let it happen again…but it does. By the way, this DOESN’T happen to my dogs.

The good news is we understand pretty well how this works…how whenever we encounter something that could be hazardous, that information goes first to the part of the brain that sets off a fight or flight response just in case, and THEN it goes to the parts of the brain that can give it a little thought, and send back the message “You did it again you idiot.” By which time, I’ve already frozen.

It doesn’t matter what the potential threat is. It doesn’t matter whether we see it, smell it, hear it, touch it, taste it, or even if it’s just information assembled in our brain…a thought…or a memory. The same thing happens. The information goes to where we fear first, and to where we think, second.

And then, in the ensuing battle between rationality and gut instinct…guess what? Instinct usually wins, or at least it has the upper hand, again because of wiring in the brain.

So there I am on the trail. My dogs look at me to try to figure out why we’re stopping. I ignore this innocent interrogation and move on. And I am reminded once again that talking to people about risk is a messy affair. Most risk communication just tries to find really clear ways to explain the facts, to educate. But if just learning the facts were enough, I wouldn’t be freezing when I saw a root on the ground in the woods. If just the facts were enough, we probably wouldn’t be as afraid as we all are about a lot of things…like terrorism (the fear of which helped launch a war) or nuclear power, or industrial chemicals…and we’d probably be more afraid of the things that are much more likely to kill us, like heart disease (it kills 2200 Americans every day), and stroke, and motor vehicle crashes and other accidents.

I find that if I want to help my friends make informed, healthy choices about the risks they face…well, yes, it might help to offer what few facts I can…but I also have to respect their fears, and not just say “Here are the Facts. Calm Down.” I have to respectfully help them know that risk perception is a combination of facts AND feelings, and though both are valid, sometimes the feelings can lead to behaviors that actually increase the risk. Just knowing that challenges me to think about risks more thoroughly.

Unless, of course, it’s another root in the shadows at my feet. Maybe I should just let my dogs walk ahead of me.

Thursday, March 13, 2008

It won't happen to ME!

My wife and I were watching the news the other day about the former governor of New York and she asked me "Why do apparently SMART people take such obviously STUPID risks?"

Well, former Governor Spitzer took his chances for many personal reasons, but he was at least partly a victim of something called Optimism Bias. That's the fancy name for "Yeah I know it's a risk but IT WON'T HAPPEN TO ME." You've probably used that one yourself from time to time. It's what people say to themselves when they smoke. Or they don't wear their seat belts, or they send text messages while they're driving, or they have that rich fattening dessert when they're already overweight. "I won't get caught. I won't get lung cancer. I won't have an accident. I won't get heart disease." IT WON'T HAPPEN TO ME!

Optimism Bias is not just the tool of potential victims, though. The flip side of "IT WON'T HAPPEN TO ME" is "IT WILL HAPPEN TO ME." That's the kind of thinking subconsciously applied by lottery players, or hedge fund managers, or bank or mortgage executives, or any of us, when we gamble with money hoping to make more. IT WILL HAPPEN TO ME, we tell ourselves, as we fork over five bucks on some scratch tickets…or lend money to people who are not likely to be able to pay it back…or buy things on margin, putting up only 10% while the bank or broker covers the rest, so that if the investment goes bad and the loan is called, we have to come up with the other 90%. Sounds a lot like the credit crisis, doesn't it? There's a reason why the so-called credit bubble existed in the first place. The hot air of Optimism Bias helped inflate all that risky financial gambling to begin with.
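To make that margin arithmetic concrete, here is a minimal illustrative sketch in Python. The function and the dollar figures are hypothetical, not taken from any actual deal or from the original post; the point is simply how a 10% down payment magnifies both the gain we hope for and the loss we discount.

```python
# Illustrative only: hypothetical figures showing how buying on 10% margin
# magnifies both gains and losses (interest and fees ignored).

def margin_return(purchase_price: float, margin_pct: float, sale_price: float) -> float:
    """Gain or loss as a percentage of the investor's own cash."""
    own_cash = purchase_price * margin_pct      # e.g. 10% down
    borrowed = purchase_price - own_cash        # the bank or broker lends the rest
    proceeds = sale_price - borrowed            # the loan gets repaid first
    return (proceeds - own_cash) / own_cash * 100

# Buy $10,000 of stock with $1,000 of our own money:
print(margin_return(10_000, 0.10, 11_000))  # stock up 10%  -> +100% on our cash
print(margin_return(10_000, 0.10,  9_000))  # stock down 10% -> -100%: the stake is wiped out
```

A 10% price move, barely noticed by an unleveraged investor, either doubles the leveraged stake or erases it entirely, which is exactly the kind of bet Optimism Bias tells us will break our way.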

We all use Optimism Bias, all the time. Optimism Bias is how we tell ourselves it's okay to have that extra beer before driving home from the bar, or to gamble on that chancy mortgage to buy our first home… or to cheat on relationships.

Is this rational? No. But then, perfect rationality is only a myth. We think that because we CAN think, that's how we OUGHT to decide. But our feelings and instincts and ancient needs are also big players in the choices we make and the chances we take. We all have hopes and desires, and Optimism Bias is one of the tools that helps us fulfill them. We ALL use it, not just the apparently smart people we hear about in the news when things don't turn out as optimistically as they had hoped.

Friday, February 8, 2008

CLIMATE CHANGE - What, ME worry?

Anyone interested in climate change paid close attention to the December meetings in Bali, where the world’s leaders worked on how to deal with this unprecedented global threat. The meetings took place under the challenge of the head of the Intergovernmental Panel on Climate Change, Dr. Rajendra Pachauri, who said “What we do in the next two to three years will determine our future. This is the defining moment.”

But as policy makers focused their attention on the Bali meetings, the rest of us were paying attention to the same things we always do: our health, our jobs, our personal budgets, our spouses or love lives, the daily commute, and so on. The policy makers in Bali considered climate change from their usual perspective, as if looking down on the earth from high above. But we don’t live up there. We live down here. We don’t live on a planet. We live in our homes and our neighborhoods. We don’t live in the climate of the earth. We live in the weather of our daily lives.

That chasm in perspectives, between the global view and the local, could be the biggest obstacle to meeting Dr. Pachauri’s challenge. The things we need to do at the system level will have impacts at the personal level. But we may not be willing to accept those impacts, because most of us don’t see how climate change actually threatens us. The wisest policies agreed to in Bali and afterward will come to little without public support. The leaders dealing with climate change at this ‘defining moment’ must devise solutions that will work globally and appeal locally.

Ask yourself this: over the next 20 years, can you name one specific way that climate change will have a serious, negative, direct impact on you or your family? Most of us can’t answer that question. You probably know that climate change will have all sorts of serious negative impacts, but not how it’s going to affect you directly. A survey of public perceptions of climate change by Anthony Leiserowitz, “Climate Change Risk Perception and Policy Preferences: The Role of Affect, Imagery, and Values,” found that while an overwhelming majority of respondents believed that climate change is real and that we should do something about it, only 12% were most concerned about the effects of climate change on themselves. 50% were most concerned about effects on the U.S. as a whole, 18% were most worried about effects on nonhuman nature, and 10% weren’t worried about the effects of climate change at all. Small wonder, then, that the study found the following levels of support in the United States for various ways to deal with climate change.

US Reduce Emissions - 90%
Kyoto Protocol - 88%
Increase CAFE standards - 79%
Regulate CO2 - 77%
Business tax - 31%
Gas tax - 17%


People are ready to support ideas. Fewer are ready to support spending what it will take to make those ideas reality.

Or consider a GlobeScan survey of 22,000 people in 21 nations released by the BBC in November. 83% said personal changes in lifestyle are needed to help combat climate change. But when asked if they themselves would be willing to make such changes, the number goes down. It’s still large, 70%, but note that it goes down. Fewer still, 61%, agree with the idea of paying higher energy costs. Ask them if they’d be willing to pay higher taxes to combat the problem and it effectively becomes a toss-up, with 50% saying yes and 44% saying no.

The trend is similar in most surveys. An overwhelming majority of people in the U.S. and around the world believe that climate change is a real threat, but when you ask people what should be done about it, as the cost to them goes up, their readiness to act goes down. That bodes poorly for the prospect of public support for the changes we need to make to address the problem.

Research into the ways humans perceive risk has found that, not surprisingly, we worry more about things that could happen to us than about things that threaten others. The survey evidence makes it pretty clear that the “ME” factor is not much at work in most people’s perceptions of climate change. That makes it unduly hopeful to expect people to give up the benefits of maintaining their current lifestyles. Why would they agree to pay higher energy bills, or gasoline taxes, or more for goods and services whose prices rise because of CO2 trading? The benefits of the comfortable status quo outweigh the minimal risks that we think climate change poses to us personally.

The science and policy communities tend to see the issue through their own professional lenses of fact and science and reason. The science of human behavior, particularly the psychology of risk perception, robustly shows that we use two systems to make judgments about risk: reason and affect, facts and feelings. It is simply naïve to disregard this inescapable truth and presume that reason and intellect alone will carry the day. That's just not how the human animal behaves. As potentially catastrophic as climate change might be, if people don't sense it as a direct personal threat, reason alone won't convince them that the costs of action are worth it.

There are still too few scientists and policy leaders describing the potential impacts of climate change on a local level. This is an admittedly dicey business because it’s hard to know specifically what changing the climate of the planet is going to do to Denver or Delhi or Dusseldorf. But there is plenty of scientific evidence of the harm climate change might do at the local level. These potential local risks need to be emphasized, in the concrete terms that will give people more of an idea of what climate change might do to them.

The costs of policies to deal with this global challenge also have to be presented in local terms. What will carbon sequestration or CO2 trading do to the prices of the goods and services we buy? What will requiring renewable energy sources do to our electricity bills? How might energy efficiency requirements cost us money, or perhaps save us money?


Many scientists from a wide range of fields have built the evidentiary case for climate change, and identified a range of solutions. But far too little attention has been given to the science of risk perception, and the tools of risk communication, to build a base of support for those solutions. Achim Steiner, head of the United Nations Environment Programme, said that the ominous IPCC report released last fall sends a message: “What we need is a new ethic in which every person changes lifestyle, attitude and behavior.” A wonderful goal, but unlikely to happen unless individuals are more worried about how climate change might affect them directly. As the leaders of the world move on in the wake of Bali, they need to remember the real people in the local neighborhoods of our global village who will have a lot to say about whether the policies they choose will succeed.

Tuesday, January 15, 2008

Cloned Food - "Waiter, my burger tastes like test tube."

Choices, Choices.

Most risks involve tradeoffs. We like dessert and fried food, so we take the risk of being overweight. We enjoy that nice healthy tan, so we're willing to run the risk of skin cancer. We lead busy lives, so we use our cell phones when we drive and dismiss the danger of distracted driving.

But those are all matters of choice. Now comes a new risk tradeoff that could be in our stores soon, over which we might not have any choice at all. We may soon have milk or meat from cattle, pigs, or goats that are clones, copies of the original animal but conceived in the lab, not the uterus. The farmer just has to identify the animals with the traits that produce more meat or richer milk and copy them, from the DNA up. Those breeding-stock animals will be used to generate offspring with the superior traits. So what’ll it be, diner? Beef that started out the old-fashioned way, or beef from descendants of a cow that never knew its mother...never HAD a mother?

The FDA says this is safe. The agency studied cloned animals for six years before recently declaring that meat and milk from cow A is equivalent to meat and milk from cow B. But the question of safety isn't just one of lab analysis. Safety is a matter of how we feel. And a Pew Institute study found that 43% of Americans feel that cloned food is unsafe (64% aren't sure). Said one opponent of cloned animal products to the FDA, “I would rather pay more for natural processed food (than) have what was cooked up in some science lab.” So show me the company that wants to be the first to bring such products to market.

The problem is, the FDA has not required food produced this way to carry some sort of label. This is just what industry wanted. But it's a big mistake.

Remember, we're less afraid of risks when we have some choice. A risk we take on our own feels less frightening than one that is imposed. No label = no choice. The possibility that there may be something different about one gallon of milk compared to the next, and that we won’t know it because the difference won’t be on the label, makes many of us leery. The FDA assures us that cloned product is equivalent to what we eat now. But the Pew survey found that 64% of us aren’t sure, so we want to be able to make that choice ourselves. As the Consumer Federation of America complained, "The products will not be labeled as such and American consumers will have no way to avoid consuming them."

A label to give us choice makes sense, for the consumer and for the food industry. By giving people choice - good for consumers - labeling will reduce opposition to cloned food products, probably enough to allow them to come to market - good for the food industry. Want proof that this works? Concern in Europe about genetically modified foods went down (not away, but down) after labels appeared telling consumers which foods contained GM products. In the U.S., producers are loath to use irradiation to sanitize food - also safe and legal - for fear that consumers won’t buy something labeled as irradiated. Yet in test markets where such products have been sold, bearing a label that identifies the food as having been treated with radiation to kill germs, consumers buy these products, in part, they say, because the label lets them choose for themselves.

Will labeling give consumers all the facts they need to make a fully rational information-based choice? Of course not. The GM food label doesn’t. The irradiated food label doesn’t. A label that says something general like “Contains products from cloned animals” is not exactly full disclosure. But it’s enough. Enough to say to the consuming public that our government acknowledges our concerns and respects that we should have the final say about what we eat.

Notwithstanding the science saying it’s safe, this stuff is scary to some. The FDA needs to look beyond the science of animal biology to the psychological study of risk perception, which explains why our very real fears often don't match what scientists say are the facts. Requiring a label on food from cloned animals or their offspring could go a long way toward allaying consumer concerns, and allow a food technology that promises more, healthier, and cheaper food to move forward, without being forced down anyone’s throat.