Cambridge University Press, 2009, 605 pp.
Computational complexity theory has developed rapidly in the past three decades. The list of surprising and fundamental results proved since 1990 alone could fill a book: these include new probabilistic definitions of classical complexity classes (IP=PSPACE and the PCP theorems) and their implications for the field of approximation algorithms, Shor’s algorithm to factor integers using a quantum computer, an understanding of why current approaches to the famous P versus NP problem will not be successful, a theory of derandomization and pseudorandomness based upon computational hardness, and beautiful constructions of pseudorandom objects such as extractors and expanders.
This book aims to describe such recent achievements of complexity theory in the context of more classical results. It is intended to serve both as a textbook and as a reference for self-study. This means it must simultaneously cater to many audiences, and it is carefully designed with that goal in mind. We assume essentially no computational background and very minimal mathematical background, which we review in Appendix A. We have also provided a Web site for this book at http://www.cs.princeton.edu/theory/complexity with related auxiliary material, including detailed teaching plans for courses based on this book, a draft of all the book’s chapters, and links to other online resources covering related topics.
Throughout the book we explain the context in which a certain notion is useful, and why things are defined in a certain way. We also illustrate key definitions with examples. To keep the text flowing, we have tried to minimize bibliographic references, except when results have acquired standard names in the literature, or when we felt that providing some history on a particular result serves to illustrate its motivation or context. (Every chapter has a notes section that contains a fuller, though still brief, treatment of the relevant works.) When faced with a choice, we preferred to use simpler definitions and proofs over showing the most general or most optimized result. The book is divided into three parts:
Part I: Basic complexity classes. This part provides a broad introduction to the field. Starting from the definition of Turing machines and the basic notions of computability theory, it covers the basic time and space complexity classes and also includes a few more modern topics such as probabilistic algorithms, interactive proofs, cryptography, quantum computers, and the PCP Theorem and its applications.
Part II: Lower bounds on concrete computational models. This part describes lower bounds on resources required to solve algorithmic tasks on concrete models such as circuits and decision trees. Such models may seem at first sight very different from Turing machines, but upon looking deeper, one finds interesting interconnections.
Part III: Advanced topics. This part is largely devoted to developments since the late 1980s. It includes counting complexity, average case complexity, hardness amplification, derandomization and pseudorandomness, the proof of the PCP theorem, and natural proofs.
Notational conventions
Part One: Basic Complexity Classes
The computational model—and why it doesn’t matter
NP and NP-completeness
Diagonalization
Space complexity
The polynomial hierarchy and alternations
Boolean circuits
Randomized computation
Interactive proofs
Cryptography
Quantum computation
PCP theorem and hardness of approximation: An introduction
Part Two: Lower Bounds for Concrete Computational Models
Decision trees
Communication complexity
Circuit lower bounds: Complexity theory’s Waterloo
Proof complexity
Algebraic computation models
Part Three: Advanced Topics
Complexity of counting
Average case complexity: Levin’s theory
Hardness amplification and error-correcting codes
Derandomization
Pseudorandom constructions: Expanders and extractors
Proofs of PCP theorems and the Fourier transform technique
Why are circuit lower bounds so difficult?
Appendix A: Mathematical background