vice Company, a personal-data seller, told the conference audience how he had been denied a mortgage because of a misleading credit report. Insurers, who subscribe to a centralized medical-information data base, have been accused of denying coverage to people who have had themselves tested for HIV, even if they test negative, on the theory that being worried enough to take the test implies risky behavior.

These kinds of potential abuses are becoming more important as lawmakers (and private companies) put personal data to uses for which it was never intended. Federal law, for example, now supports using motor vehicle information to track parents whose child-support payments are late; a single data-base entry can cause computers to issue a warrant for the alleged deadbeat's car to be seized. In a striking mismatch of crime and punishment, Massachusetts legislators recently proposed blocking license renewal for citizens with unpaid library fines. "We told them they were crazy," Lewis notes. If automotive files, containing only name, address, vehicle identification number and a few other bits of information, can spur such controversy, what of medical information? Clinton administration policymakers regard automated medical records as a crucial ingredient in cutting health care costs. Rene C. Kozloff, a project officer at Kunitz and Associates, a health-management information firm, anticipates a "conception to death record" stored on either smart cards or a central data base. Yet there are minimal controls over the five or six dozen people who may handle those records as a result of a visit to a hospital or clinic. Given the problems that have been caused by disclosure of medical records kept on paper, opening such information to massive, uncontrolled computer searches seems unwise, says Janlori Goldman of the American Civil Liberties Union.
Privacy advocates have been working for nearly 20 years for a so-called Fair Information Practices Act that would give the subjects of public and private data bases power over how personal information on them is used. Although pro-privacy forces have thus far been unsuccessful in the U.S., they have had more luck in Europe. The British enacted "Data Protection" rules in 1984, and a privacy directive for the European Community is in draft form.

British law requires businesses that keep data bases to register them with the Data Protection Registrar, to ask for people's consent before gathering information about them and not to use those data for a purpose different from the one for which they were collected. "Information about others is held in trust" rather than being owned by data-base compilers, says Rosemary Jay, legal adviser for the registrar. Jay has brought court challenges against credit-reporting agencies; she has also had to deal with direct marketers seeking access to the registrar's list of data bases. "Cheeky," she comments. –Paul Wallich

SCIENTIFIC AMERICAN, May 1993

"Daisy, Daisy"
Do computers have near-death experiences?

What does a computer do when it starts to die? The HAL 9000 in the film 2001: A Space Odyssey burst into a rendition of "A Bicycle Built for Two," a song it had been taught early in life. The memorable scene may not be too far off the mark. That's what one researcher found out when he began to "kill" a type of computer program known as an artificial neural network. As the network approached death, it began to output not gibberish but information it had previously learned; its silicon life flashed before its eyes, so to speak.

The analogy to a so-called near-death experience is irresistible because the creators of artificial neural networks design them to mimic the structure and function of the biological brain. A neural network relies on "units" to serve as the cell body of a neuron and "links" between the units to act as the interconnecting dendrites and axons.
The units are typically organized into several layers. A consequence of such an architecture is that the network, like the brain, can learn. In a real brain, learning is thought to occur because of changes in the strength of synaptic connections among neurons. Similarly, a neural network alters the strength of the links (specifically, the weighting between units) to produce the correct output. Typically a programmer teaches a network by repeatedly presenting training patterns to it [see "How Neural Networks Learn from Experience," by Geoffrey E. Hinton; SCIENTIFIC AMERICAN, September 1992].

Properly trained neural networks can handle diverse tasks, from compressing data to modeling dyslexia. Stephen L. Thaler, a physicist for McDonnell Douglas, began to explore neural networks a year ago as a way to optimize the process control of diamond crystal growth. But curiosity led him to start annihilating neural nets as an evening avocation. He devised a program that would gradually destroy the net by randomly severing the links between units. "The method was meant to emulate the depolarization of the synapses in biological systems," Thaler says. After each successive pass, he examined the output.

When about 10 to 60 percent of the connections were destroyed, the net spat out nonsense. But when closer to 90 percent of the connections were destroyed, the output began to settle on distinct values. In the case of Thaler's eight-unit network, created to model the "exclusive or" logic function, much of what was produced was the trained output states 0 and 1. The net sometimes generated what Thaler terms "whimsical" states, that is, values that neither were trained into the net nor would appear in a healthy net. In contrast, untrained networks produced only random numbers as they died.
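The severing experiment is easy to imitate in miniature. The sketch below is an illustration of the idea, not Thaler's actual program: a tiny network hand-wired to compute "exclusive or" (rather than trained by back-propagation, and far simpler than his eight-unit net), whose links are then randomly cut. All weights, layer sizes, and severing fractions here are the author's invented example values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hand-chosen weights implementing XOR with logistic units:
# hidden unit 1 acts like OR, hidden unit 2 like AND,
# and the output fires when OR is true but AND is not.
W1 = np.array([[20.0, 20.0], [20.0, 20.0]])  # input -> hidden links
b1 = np.array([-10.0, -30.0])                # OR threshold, AND threshold
W2 = np.array([20.0, -20.0])                 # hidden -> output links
b2 = -10.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, m1, m2):
    # m1, m2 are binary masks over the links; a 0 means that link
    # has been "severed" and carries no signal
    h = sigmoid(x @ (W1 * m1) + b1)
    return sigmoid(h @ (W2 * m2) + b2)

inputs = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)

# The intact network reproduces the XOR truth table
intact = forward(inputs, np.ones_like(W1), np.ones_like(W2))
print(np.round(intact).astype(int))  # [0 1 1 0]

# Now sever a growing fraction of links at random, many times over,
# and see how often the dying net still emits its trained states
for frac in (0.3, 0.9):
    outs = []
    for _ in range(200):
        m1 = (rng.random(W1.shape) > frac).astype(float)
        m2 = (rng.random(W2.shape) > frac).astype(float)
        outs.append(forward(inputs, m1, m2))
    outs = np.concatenate(outs)
    settled = np.mean((outs < 0.1) | (outs > 0.9))
    print(f"severed ~{frac:.0%}: {settled:.0%} of outputs near 0 or 1")
```

Because this toy keeps fixed hand-set weights, it will not reproduce Thaler's detailed progression from nonsense to settled values; it only shows the mechanics of masking links and watching what a damaged net emits.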
That an expiring net would produce meaningful gasps is not entirely farfetched. "It makes sense in terms of a network that has made some stable patterns," says David C. Plaut, a psychologist and computer scientist at Carnegie Mellon University who uses artificial neural nets to model brain damage. Indeed, Thaler has a detailed explanation. In a fully trained, functioning network, the weighted inputs to a particular unit tend to be about the same in magnitude and opposite in sign. (In mathspeak, the weights follow a Gaussian distribution, or bell-shaped curve.) The odds are, then, that the sum of several weighted inputs to a unit is close to zero. Hence, when the links are broken, the unit might not "feel" the loss, because it may have been receiving a total zero signal from them anyway. The few surviving links will often be sufficient to generate reasonably coherent output.

But concluding that this artificial experience can be extrapolated to human brushes with death is a stretch. "Neural networks have got to be a rough approximation at best," Plaut notes. The brain is far more sophisticated than neural nets. Furthermore, it is not entirely clear how collections of real neurons die. The death of a few neurons, for instance, may kill off some nearby ones. And the method used to train the neural nets, an algorithm called back-propagation, is dissimilar to the way the brain learns.

Still, the observations suggest that some of the near-death experiences commonly reported might have a mathematical basis. "It may not just be fancy biochemistry," Thaler asserts. He is currently working on more complex networks, including one that will produce visual images. Any wagers for a light at the end of a long tunnel? –Philip Yam
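The cancellation argument above can be checked with a few lines of arithmetic. This is the author's own numeric illustration (the link count and severing fraction are invented for the demo): when a unit's incoming weights are drawn from a zero-mean Gaussian, their sum hovers near zero, so cutting most of the links shifts the unit's total input far less than the number of links might suggest.

```python
import numpy as np

rng = np.random.default_rng(1)

n_links = 100
weights = rng.normal(0.0, 1.0, n_links)  # zero-mean Gaussian weights
activity = np.ones(n_links)              # every upstream unit firing

full_signal = weights @ activity         # total input with all links intact

mask = rng.random(n_links) > 0.9         # sever roughly 90 percent of links
cut_signal = (weights * mask) @ activity # total input for the dying unit

# Both sums scale like sqrt(n_links), small next to the 100 links feeding in,
# so the unit barely "feels" the loss of most of its connections
print(f"intact: {full_signal:+.2f}, after severing: {cut_signal:+.2f}")
```

The statistics make the point: the sum of n independent zero-mean weights has a typical size of about the square root of n, so a unit fed by 100 links sees a net signal of order 10 either way, severed or not.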