vice Company, a personal-data seller, told the conference audience how he had been denied a mortgage because of a misleading credit report. Insurers, who subscribe to a centralized medical-information data base, have been accused of denying coverage to people who have had themselves tested for HIV, even if they test negative, on the theory that being worried enough to take the test implies risky behavior.

These kinds of potential abuses are becoming more important as lawmakers (and private companies) put personal data to uses for which it was never intended. Federal law, for example, now supports using motor vehicle information to track parents whose child-support payments are late; a single data-base entry can cause computers to issue a warrant for the alleged deadbeat's car to be seized. In a striking mismatch of crime and punishment, Massachusetts legislators recently proposed blocking license renewal for citizens with unpaid library fines. "We told them they were crazy," Lewis notes. If automotive files, containing only name, address, vehicle identification number and a few other bits of information, can spur such controversy, what of medical information?

Clinton administration policymakers regard automated medical records as a crucial ingredient in cutting health care costs: Rene C. Kozloff, a project officer at Kunitz and Associates, a health-management information firm, anticipates a "conception to death record" stored on either smart cards or a central data base. Yet there are minimal controls over the five or six dozen people who may handle those records as a result of a visit to a hospital or clinic. Given the problems that have been caused by disclosure of medical records kept on paper, opening such information to massive, uncontrolled computer searches seems unwise, says Janlori Goldman of the American Civil Liberties Union.
Privacy advocates have been working for nearly 20 years for a so-called Fair Information Practices Act that would give the subjects of public and private data bases power over how personal information on them is used. Although pro-privacy forces have thus far been unsuccessful in the U.S., they have had more luck in Europe. The British enacted "Data Protection" rules in 1984, and a privacy directive for the European Community is in draft form.

British law requires businesses that keep data bases to register them with the Data Protection Registrar, to ask for people's consent before gathering information about them and not to use those data for a purpose different from the one for which they were collected. "Information about others is held in trust" rather than being owned by data-base compilers, says Rosemary Jay, legal adviser for the registrar. Jay has brought court challenges against credit-reporting agencies; she has also had to deal with direct marketers seeking access to the registrar's list of data bases. "Cheeky," she comments. --Paul Wallich

32 SCIENTIFIC AMERICAN May 1993
Copyright 1993 Scientific American, Inc.

"Daisy, Daisy"
Do computers have near-death experiences?

What does a computer do when it starts to die? The HAL 9000 in the film 2001: A Space Odyssey burst into a rendition of "A Bicycle Built for Two," a song it had been taught early in life. The memorable scene may not be too far off the mark. That's what one researcher found out when he began to "kill" a type of computer program known as an artificial neural network. As the network approached death, it began to output not gibberish but information it had previously learned; its silicon life flashed before its eyes, so to speak.

The analogy to a so-called near-death experience is irresistible because the creators of artificial neural networks design them to mimic the structure and function of the biological brain. A neural network relies on "units" to serve as the cell body of a neuron and "links" between the units to act as the interconnecting dendrites and axons. The units are typically organized into several layers. A consequence of such an architecture is that the network, like the brain, can learn. In a real brain, learning is thought to occur because of changes in the strength of synaptic connections among neurons. Similarly, a neural network alters the strength of the links (specifically, the weighting between units) to produce the correct output. Typically a programmer teaches a network by repeatedly presenting training patterns to it [see "How Neural Networks Learn from Experience," by Geoffrey E. Hinton; SCIENTIFIC AMERICAN, September 1992].

Properly trained neural networks can handle diverse tasks, from compressing data to modeling dyslexia. Stephen L. Thaler, a physicist for McDonnell Douglas, began to explore neural networks a year ago as a way to optimize the process control of diamond crystal growth. But curiosity led him to start annihilating neural nets as an evening avocation.
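The learning scheme just described (units joined by weighted links, taught by repeatedly presenting training patterns and adjusting the weights) can be sketched in a few lines of Python. This is a minimal illustration, not Thaler's actual program; the network shape, learning schedule and iteration count are assumptions, chosen so the "exclusive or" function mentioned below is learnable:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Training patterns for the "exclusive or" function.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# "Links": weight matrices joining 2 input units, 4 hidden units
# and 1 output unit. (The hidden-layer size is an assumption.)
W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros(4)
W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros(1)

for _ in range(10000):
    # Forward pass: each unit sums its weighted inputs.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Back-propagation: push the output error back through the
    # links and nudge every weight to reduce it (logistic-loss
    # gradient at the output, so d_out is simply out - y).
    d_out = out - y
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= h.T @ d_out
    b2 -= d_out.sum(axis=0)
    W1 -= X.T @ d_h
    b1 -= d_h.sum(axis=0)

print(out.ravel())  # outputs should approach the targets 0, 1, 1, 0
```

After enough passes the trained weights, not any explicit rule, carry the learned behavior, which is why damaging them piecemeal (as Thaler did) degrades the output gradually rather than all at once.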
He devised a program that would gradually destroy the net by randomly severing the links between units. "The method was meant to emulate the depolarization of the synapses in biological systems," Thaler says. After each successive pass, he examined the output. When about 10 to 60 percent of the connections were destroyed, the net spat out nonsense. But when closer to 90 percent of the connections were destroyed, the output began to settle on distinct values. In the case of Thaler's eight-unit network, created to model the "exclusive or" logic function, much of what was produced was the trained output states 0 and 1. The net sometimes generated what Thaler terms "whimsical" states, that is, values that neither were trained into the net nor would appear in a healthy net. In contrast, untrained networks produced only random numbers as they died.

That an expiring net would produce meaningful gasps is not entirely farfetched. "It makes sense in terms of a network that has made some stable patterns," says David C. Plaut, a psychologist and computer scientist at Carnegie Mellon University who uses artificial neural nets to model brain damage. Indeed, Thaler has a detailed explanation. In a fully trained, functioning network, all the weighted inputs to a particular unit are about the same in magnitude and opposite in sign. (In mathspeak, the weights follow a Gaussian distribution, or bell-shaped curve.) The odds are, then, that the sum of several weighted inputs to a unit equals zero. Hence, when the links are broken, the unit might not "feel" the loss, because it may have been receiving a total zero signal from them anyway. The few surviving links will often be sufficient to generate reasonably coherent output.

But concluding that this artificial experience can be extrapolated to human brushes with death is a stretch. "Neural networks have got to be a rough approximation at best," Plaut notes.
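The article does not publish Thaler's severing procedure, but the idea can be imitated by zeroing a growing random fraction of a working net's weights and probing the output at each stage. In this rough sketch the XOR weights are hand-picked for illustration (one hidden unit acts as an OR detector, the other as a NAND detector), not taken from Thaler's eight-unit network:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, W1, b1, W2, b2):
    # Two-layer net: input -> hidden -> output, all sigmoid units.
    return sigmoid(sigmoid(x @ W1 + b1) @ W2 + b2)

# Hand-picked weights that compute "exclusive or":
# hidden unit 1 ~ OR(x1, x2), hidden unit 2 ~ NAND(x1, x2),
# output ~ AND of the two.
W1 = np.array([[6.0, -6.0],
               [6.0, -6.0]])
b1 = np.array([-3.0, 9.0])
W2 = np.array([[8.0], [8.0]])
b2 = np.array([-12.0])

x = np.array([0.0, 1.0])  # probe input; healthy output is near 1

for frac in (0.0, 0.3, 0.6, 0.9):
    # "Sever" a random fraction of the links by zeroing their weights.
    m1 = rng.random(W1.shape) >= frac
    m2 = rng.random(W2.shape) >= frac
    out = forward(x, W1 * m1, b1, W2 * m2, b2)
    print(f"{frac:.0%} of links severed -> output {out[0]:.2f}")
```

Because each sigmoid unit saturates near 0 or 1, a heavily damaged net still tends to emit values close to its trained output states, which is consistent with Thaler's observation that the dying net settled on 0s and 1s rather than noise.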
The brain is far more sophisticated than neural nets. Furthermore, it is not entirely clear how collections of real neurons die. The death of a few neurons, for instance, may kill off some nearby ones. And the method used to train the neural nets, an algorithm called back-propagation, is dissimilar to the way the brain learns.

Still, the observations suggest that some of the near-death experiences commonly reported might have a mathematical basis. "It may not just be fancy biochemistry," Thaler asserts. He is currently working on more complex networks, including one that will produce visual images. Any wagers for a light at the end of a long tunnel? --Philip Yam