MTU Cork Library Catalogue

Cover image from Syndetics

Computer-related risks / Peter G. Neumann.

By: Neumann, Peter, 1932-.
Material type: Book
Publisher: New York : ACM Press ; Reading, Mass. : Addison-Wesley, 1995
Description: xv, 367 p. : ill. ; 24 cm.
ISBN: 020155805X
Subject(s): Electronic digital computers -- Reliability | Risk management
DDC classification: 005.8
Holdings
Item type: General Lending
Current library: MTU Bishopstown Library Store Item
Call number: 005.8
Copy number: 1
Status: Available
Barcode: 00014153
Total holds: 0

Enhanced descriptions from Syndetics:

"This sobering description of many computer-related failures throughout our world deflates the hype and hubris of the industry. Peter Neumann analyzes the failure modes, recommends sequences for prevention and ends his unique book with some broadening reflections on the future." --Ralph Nader, Consumer Advocate

This book is much more than a collection of computer mishaps; it is a serious, technically oriented book written by one of the world's leading experts on computer risks. The book summarizes many real events involving computer technologies and the people who depend on those technologies, with widely ranging causes and effects. It considers problems attributable to hardware, software, people, and natural causes. Examples include disasters (such as the Black Hawk helicopter and Iranian Airbus shootdowns, the Exxon Valdez, and various transportation accidents); malicious hacker attacks; outages of telephone systems and computer networks; financial losses; and many other strange happenstances (squirrels downing power grids, and April Fools' Day pranks). Computer-Related Risks addresses problems involving reliability, safety, security, privacy, and human well-being. It includes analyses ...

Includes bibliographical references (p. 333-344) and index.

Table of contents provided by Syndetics

  • 1 The Nature Of Risks
  • Background on Risks
  • Sources of Risks
  • Adverse Effects
  • Defensive Measures
  • Guide to Summary Tables
  • 2 Reliability And Safety Problems
  • Communication Systems
  • Problems in Space
  • Defense
  • Civil Aviation
  • Trains
  • Ships
  • Control-System Safety
  • Robotics and Safety
  • Medical Health and Safety
  • Electrical Power
  • Computer Calendar Clocks
  • Computing Errors
  • 3 Security Vulnerabilities
  • Security Vulnerabilities and Misuse Types
  • Pest Programs and Deferred Effects
  • Bypass of Intended Controls
  • Resource Misuse
  • Other Attack Methods
  • Comparison of the Attack Methods
  • Classical Security Vulnerabilities
  • Avoidance of Security Vulnerabilities
  • 4 Causes And Effects
  • Weak Links and Multiple Causes
  • Accidental versus Intentional Causes
  • 5 Security And Integrity Problems
  • Intentional Misuse
  • Security Accidents
  • Spoofs and Pranks
  • Intentional Denials of Service
  • Unintentional Denials of Service
  • Financial Fraud by Computer
  • Accidental Financial Losses
  • Risks in Computer-Based Elections
  • Jail Security
  • 6 Threats To Privacy And Well-Being
  • Needs for Privacy Protection
  • Privacy Violations
  • Prevention of Privacy Abuses
  • Annoyances in Life, Death, and Taxes
  • What's in a Name?
  • Use of Names as Identifiers
  • 7 A System-Oriented Perspective
  • The Not-So-Accidental Holist: A System View
  • Putting Your Best Interface Forward
  • Distributed Systems
  • Woes of System Development
  • Modeling and Simulation
  • Coping with Complexity
  • Techniques for Increasing Reliability
  • Techniques for Software Development
  • Techniques for Increasing Security
  • Risks in Risk Analysis
  • Risks Considered Global(ly)
  • 8 A Human-Oriented Perspective
  • The Human Element
  • Trust in Computer-Related Systems and in People
  • Computers, Ethics, and the Law
  • Mixed Signals on Social Responsibility
  • Group Dynamics
  • Certification of Computer Professionals
  • 9 Implications And Conclusions
  • Where to Place the Blame
  • Expect the Unexpected!
  • Avoidance of Weak Links
  • Assessment of the Risks
  • Assessment of the Feasibility of Avoiding Risks
  • Risks in the Information Infrastructure
  • Questions Concerning the NII
  • Avoidance of Risks
  • Assessment of the Future

Excerpt provided by Syndetics

Some books are to be tasted, others to be swallowed, and a few to be chewed and digested. --Francis Bacon

This book is based on a remarkable collection of mishaps and oddities relating to computer technology. It considers what has gone wrong in the past, what is likely to go wrong in the future, and what can be done to minimize the occurrence of further problems. It may provide meat and potatoes to some readers and tasty desserts to others --- and yet may seem almost indigestible to some would-be readers. However, it should be intellectually and technologically thought-provoking to all.

Many of the events described here have been discussed in the on-line computer newsgroup, the Risks Forum (Risks to the Public in the Use of Computers and Related Systems, referred to here simply as RISKS), which I have moderated since its inception in 1985, under the auspices of the Association for Computing Machinery (ACM). Most of these events have been summarized in the quarterly publication of the ACM Special Interest Group on Software Engineering (SIGSOFT), Software Engineering Notes (SEN), which I edited from its beginnings in 1976 through 1993 and to which I continue to contribute the "RISKS" section. Because those sources represent a fascinating archive that is not widely available, I have distilled the more important material and added further discussion and analysis.

Most of the events selected for inclusion relate to roles that computers and communication systems play in our lives. Some events exhibit problems with technology and its application; some events illustrate a wide range of human behavior, such as malice, inadvertent actions, incompetence, ignorance, carelessness, or lack of experience; some events are attributable to causes over which we have little control, such as natural disasters. Some of the events are old; others are recent, although some of the newer ones seem strangely reminiscent of earlier ones. Because such events continue to happen and because they affect us in so many different ways, it is essential that we draw realistic conclusions from this collection --- particularly if the book is to help us avoid future disasters. Indeed, the later chapters focus on the technology itself and discuss what can be done to overcome or control the risks.

I hope that the events described and the conclusions drawn are such that much of the material will be accessible to readers with widely differing backgrounds. I have attempted to find a middle ground for a diverse set of readers, so that the book can be interesting and informative for students and professionals in the computer field, practitioners and technologists in other fields, and people with only a general interest in technology. The book is particularly relevant to students of software engineering, system engineering, and computer science, for whom it could be used as a companion source. It is also valuable for anyone studying reliability, fault tolerance, safety, or security; some introductory material is included for people who have not been exposed to those topics. In addition, the book is appropriate for people who develop or use computer-based applications. Less technically oriented readers may skip some of the details and instead read the book primarily for its anecdotal material. Other readers may wish to pursue the technological aspects more thoroughly, chasing down relevant cited references --- for historical, academic, or professional reasons.
The book is relatively self-contained, but includes many references and notes for the reader who wishes to pursue the details further. Some readers may indeed wish to browse, whereas others may find the book to be the tip of an enormous iceberg that demands closer investigation. In my presentations of the cases, I have attempted to be specific about the causes and actual circumstances wherever specifics were both available and helpful. Inevitably, the exact causes of some of the cases still remain unknown to me. I have also opted to cite actual names, although I realize that certain organizations may be embarrassed by having some of their old dirty laundry hung out yet again. The alternative would have been to make those cases anonymous --- which would have defeated one of the main purposes of the book, namely, to increase reader awareness of the pervasiveness and real-life nature of the problems addressed here.

The Organization of the Book

Chapter 1 presents an introduction to the topic of computer-related risks. Chapters 2 through 6 consider examples from the wealth of cases, from several perspectives. Chapters 7 and 8 reflect on that experience and consider what must be done to avoid such risks in the future. Chapter 9 provides conclusions. The individual chapters are summarized as follows:

  • Chapter 1, "The Nature of Risks", characterizes the various sources of risks and the types of effects that those risks entail. It also anticipates what might be done to prevent those causes from having serious effects.
  • Chapter 2, "Reliability and Safety Problems", examines the causes and effects in various cases for which reliability problems have been experienced, over a wide range of application areas.
  • Chapter 3, "Security Vulnerabilities", considers what the most prevalent types of security vulnerabilities are, and how they arise.
  • Chapter 4, "Causes and Effects", makes a case for considering reliability problems and security problems within a common framework, by pointing out significant conceptual similarities, as well as by exploring the risks of not having such a common framework.
  • Chapter 5, "Security and Integrity Problems", reviews many cases in which different types of security violations have been experienced, over a variety of application areas.
  • Chapter 6, "Threats to Privacy and Well-Being", discusses threats to privacy, to individual rights, and to personal well-being.
  • Chapter 7, "A System-Oriented Perspective", considers the subject matter of the book from a global system perspective. It considers techniques for increasing reliability, fault tolerance, safety, and security, including the use of good software engineering practice.
  • Chapter 8, "A Human-Oriented Perspective", considers the pervasive roles of people in achieving low-risk systems.
  • Chapter 9, "Implications and Conclusions", considers the foregoing chapters in retrospect. It draws various conclusions, and addresses responsibility, predictability, weak links, validity of assumptions, risks in risk assessment, and risks inherent in the technology.

Challenges for the reader are suggested at the end of each chapter. They include both thought-provoking questions of general interest and exercises that may be of concern primarily to the more technically minded reader. They are intended to offer some opportunities to reflect on the issues raised in the book.
At the urging of my EditriX, some specific numbers are given (such as the number of cases you are asked to examine, or the number of examples you might generate); however, these numbers should be considered as parameters that can be altered to suit the occasion. Students and professors using this book for a course are invited to invent their own challenges.

Appendix A provides useful background material. Section A.1 gives a table relating Software Engineering Notes (SEN) volume and issue numbers to dates, which are omitted in the text for simplicity. Section A.2 gives information on how to access relevant on-line sources, including RISKS, PRIVACY, and VIRUS-L newsgroups. Section A.3 suggests some selected further readings. The back of the book includes a glossary of acronyms and terms, the notes referred to throughout the text, an extensive bibliography that is still only a beginning, and the index.

How to Read the Book

Many different organizations could have been used for the book. I chose to present the experiential material according to threats that relate to specific attributes (notably reliability, safety, security, privacy, and well-being in Chapters 2 through 6), and, within those attributes, by types of applications. Chapters 7 and 8 provide broader perspectives from a system viewpoint and from a human viewpoint, respectively. That order reinforces the principal conclusions of the book and exhibits the diversity, perversity, and universality of the problems encountered. Alternatively, the book could have been organized according to the causes of problems --- for example, the diverse sources of risks summarized in Section 1.2; it could have been organized according to the effects that have been experienced or that can be expected to occur in the future --- such as those summarized in Section 1.3; it could have been organized according to the types of defensive measures necessary to combat the problems inherent in those causes and effects --- such as the diverse types of defensive measures summarized in Section 1.4. Evidently, no one order is best suited to all readers. However, I have tried to help each reader to find his or her own path through the book, and have provided different viewpoints and cross-references.

The book may be read from cover to cover, which is intended to be a natural order of presentation. However, a linear order may not be suitable for everyone. A reader with selective interests may wish to read the introductory material of Chapter 1, to choose among those sections of greatest interest in Chapters 2 through 6, and then to read the final three chapters. A reader not particularly interested in the technological details of how the risks might be avoided or reduced can skip Chapter 7. Certain cases recur in different contexts, and are interesting precisely because they illustrate multiple concepts. For example, a particular case might appear in the context of its application (such as communications or space), its types of problems (distributed systems, human interfaces), its requirements (reliability, security), and its implications with respect to software engineering. Certain key details are repeated in a few essential cases so that the reader is not compelled to search for the original mention.

Acknowledgments

I am deeply indebted to the numerous people who contributed source material to the Risks Forum and helped to make this book possible. My interactions with them and with the newsgroup's countless readers have made the RISKS experience most enjoyable for me.
Many contributors are identified in the text. Others are noted in the referenced items from the ACM Software Engineering Notes. I thank Adele Goldberg, who in 1985 as ACM President named me to be the Chairman of the ACM Committee on Computers and Public Policy and gave me the charter to establish what became the Risks Forum [50]. Peter Denning, Jim Horning, Nancy Leveson, David Parnas, and Jerry Saltzer are the "old reliables" of the RISKS community; they contributed regularly from the very beginning. I am delighted to be able to include the "CACM Inside Risks" guest columns written by Bob Charette (Section 7.10), Robert Dorsett (Section 2.4.1), Don Norman (Section 6.6), Ronni Rosenberg (Section 8.4), Marc Rotenberg (Section 6.1), and Barbara Simons (Section 9.7). I thank Jack Garman and Eric Rosen for the incisive articles they contributed to Software Engineering Notes, discussing the first shuttle launch problem [47] and the 1980 ARPAnet collapse [139], respectively. I also thank Matt Jaffe for his extemporaneous discussion on the Aegis system in response to my lecture at the Fifth International Workshop on Software Specification and Design in 1989. (My summary of his talk appears in [58].)

I would like to express my appreciation to John Markoff of The New York Times. Our interactions began long before the Wily Hackers [162, 163] and the Internet Worm [35, 57, 138, 150, 159]. John has been a media leader in the effort to increase public awareness with respect to many of the concepts discussed in RISKS. I am grateful to many people for having helped me in the quest to explore the risks involved in the design and implementation of computer systems --- especially my 1960s colleagues from the Multics effort, F.J. Corbató, Bob Daley, Jerry Saltzer, and the late E.L. (Ted) Glaser at MIT; and Vic Vyssotsky, Doug McIlroy, Bob Morris, Ken Thompson, Ed David, and the late Joe Ossanna at Bell Laboratories. My interactions over the years with Tony Oettinger, Dave Huffman, and Edsger W. Dijkstra have provided great intellectual stimulation. Mae Churchill encouraged me to explore the issues in electronic voting. Henry Petroski enriched my perspective on the nature of the problems discussed here. Two of Jerry Mander's books were particularly reinforcing [87, 88].

Special thanks go to Don Nielson and Mark Moriconi for their support at SRI International (formerly Stanford Research Institute). The on-line Risks Forum has been primarily a pro bono effort on my part, but SRI has contributed valuable resources --- including the Internet archive facility. I also thank Jack Goldberg, who invited me to join SRI's Computer Science Laboratory (CSL) in 1971 and encouraged my pursuits of reliability and security issues in a socially conscious context. Among others in CSL, Teresa Lunt and John Rushby have been particularly thoughtful colleagues. Liz Luntzel provided cheerful assistance throughout. Donn Parker and Bruce Baker provided opportunities for inputs and outputs through their International Information Integrity Institute (I-4) and as part of SRI's Business and Policy Group. Maestro Herbert Blomstedt has greatly enriched my life through his music and teaching over the past 10 years. My Tai Chi teachers Martin and Emily Lee [77] contributed subliminally to the writing of this book, which in a Taoist way seems to have written itself.
I thank Lyn Dupré, my high-tech EditriX, for her X-acting X-pertise (despite her predilection for the "staffed space program" and "fisherpersons" --- which I carefully eschewed, in Sections 2.2.1 and 2.6, respectively); the high-TeX Marsha Finley (who claims she did only the dog work in burying the bones of my LaTeX, but whose bark and bite were both terrific); Paul Anagnostopoulos, who transmogrified the LaTeX into ZzTeX; Peter Gordon of Addison-Wesley for his patient goading; Helen Goldstein of Addison-Wesley for her wonderful encouragement and help; and Helen Wythe, who oversaw the production of the book for Addison-Wesley. I am indebted to the anonymous reviewers, who made many useful suggestions --- although some of their diverse recommendations were mutually incompatible, further illustrating the difficulties in trying to satisfy a heterogeneous audience within a single book. I am pleased to acknowledge two marvelous examples of nonproprietary software: Richard Stallman's GNU Emacs and Les Lamport's LaTeX, both of which were used extensively in the preparation of the text. I would be happy to hear from readers who have corrections, additions, new sagas, or other contributions that might enhance the accuracy and completeness of this book in any future revisions. I thank you all for being part of my extended family, the RISKS community.

Peter G. Neumann
Palo Alto and Menlo Park, California

Much has happened since this book originally went to press. There have been many new instances of the problems documented here, but relatively few new types of problems. In some cases, the technology has progressed a little --- although in those cases the threats, vulnerabilities, risks, and expectations of system capabilities have also escalated. On the other hand, social, economic, and political considerations have not resulted in any noticeable lessening of the risks. Basically, all of the conclusions of the book seem to be just as relevant now --- if not more so. The archives of the Risks Forum have been growing dramatically. Because recent events are always a moving target, a significant body of new material has been assembled and made available on-line, rather than trying to keep the printed form of the book up-to-date in terms of those recent events. An on-line summary of events since the first printing of this book is updated periodically, and is available at http://www.awl.com/cseng/titles/0-201-55805-X/ --- along with pointers to substantial new material that might otherwise go into a revised edition of this book that would be much longer.

Excerpted from Computer-Related Risks by Peter G. Neumann. All rights reserved by the original copyright owners. Excerpts are provided for display purposes only and may not be reproduced, reprinted or distributed without the written permission of the publisher.

Author notes provided by Syndetics

About Peter Neumann

Peter G. Neumann (Principal Scientist in the Computer Science Laboratory of SRI International) runs the popular and provocative on-line Internet newsgroup, The Risks Forum, which he started in 1985. He also writes the widely read "Inside Risks" column in the Communications of the ACM. Running RISKS is a sideline to his research and development interests, which include computer hardware and software, systems, networks, and communications, as well as security, reliability, and safety--and how to attain them. He is a Fellow of both the ACM and the IEEE. He is often the first person called when computer disasters occur.



