
References

Published online by Cambridge University Press: 17 January 2020

Avrim Blum, Toyota Technological Institute at Chicago
John Hopcroft, Cornell University, New York
Ravindran Kannan, Microsoft Research, India

Type: Chapter
Publisher: Cambridge University Press
Print publication year: 2020

Access options

Get access to the full version of this content by using one of the access options below. (Log in options will check for institutional or personal access. Content may require purchase if you do not have access.)


[AB15] Awasthi, Pranjal and Balcan, Maria-Florina. Center based clustering: A foundational perspective. In Hennig, Christian, Meila, Marina, Murtagh, Fionn, and Rocci, Roberto, editors, Handbook of Cluster Analysis. CRC Press, 2015.
[ABC+08] Andersen, Reid, Borgs, Christian, Chayes, Jennifer T., Hopcroft, John E., Mirrokni, Vahab S., and Teng, Shang-Hua. Local computation of PageRank contributions. Internet Mathematics, 5(1):23–45, 2008.
[ACORT11] Achlioptas, Dimitris, Coja-Oghlan, Amin, and Ricci-Tersenghi, Federico. On the solution-space geometry of random constraint satisfaction problems. Random Structures & Algorithms, 38(3):251–268, 2011.
[AF] Aldous, David and Fill, James. Reversible Markov Chains and Random Walks on Graphs. Available at www.stat.berkeley.edu/~aldous/RWG/book.html.
[AGKM16] Arora, Sanjeev, Ge, Rong, Kannan, Ravindran, and Moitra, Ankur. Computing a nonnegative matrix factorization – provably. SIAM Journal on Computing, 45(4):1582–1611, 2016.
[AK05] Arora, Sanjeev and Kannan, Ravindran. Learning mixtures of separated nonspherical Gaussians. Annals of Applied Probability, 15(1A):69–92, 2005. Preliminary version in ACM Symposium on Theory of Computing (STOC) 2001.
[Alo86] Alon, Noga. Eigenvalues and expanders. Combinatorica, 6:83–96, 1986.
[AM05] Achlioptas, Dimitris and McSherry, Frank. On spectral learning of mixtures of distributions. In Conference on Learning Theory (COLT), pages 458–469, 2005.
[AMS96] Alon, Noga, Matias, Yossi, and Szegedy, Mario. The space complexity of approximating the frequency moments. In Proceedings of the Twenty-Eighth Annual ACM Symposium on Theory of Computing, pages 20–29. ACM, 1996.
[AN72] Athreya, Krishna and Ney, P. E.. Branching Processes, volume 107. Springer, 1972.
[AP03] Achlioptas, Dimitris and Peres, Yuval. The threshold for random k-SAT is 2^k ln 2 − O(k). In ACM Symposium on Theory of Computing (STOC), pages 223–231, 2003.
[Aro11] Arora, Sanjeev, Hazan, Elad, and Kale, Satyen. The multiplicative weights update method: A meta-algorithm and applications. Theory of Computing, 8(1):121–164, 2012.
[Arr50] Arrow, Kenneth J.. A difficulty in the concept of social welfare. Journal of Political Economy, 58(4):328–346, 1950.
[AS08] Alon, Noga and Spencer, Joel H.. The Probabilistic Method. Third edition. Wiley-Interscience Series in Discrete Mathematics and Optimization. John Wiley & Sons Inc., 2008.
[AV07] Arthur, David and Vassilvitskii, Sergei. k-means++: The advantages of careful seeding. In Proceedings of the Eighteenth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 1027–1035. Society for Industrial and Applied Mathematics, 2007.
[BA] Barabási, Albert-László and Albert, Réka. Emergence of scaling in random networks. Science, 286(5439):509–512, 1999.
[BB10] Balcan, Maria-Florina and Blum, Avrim. A discriminative model for semi-supervised learning. Journal of the ACM, 57(3):19:1–19:46, 2010.
[BBB+13] Balcan, Maria-Florina, Borgs, Christian, Braverman, Mark, Chayes, Jennifer, and Teng, Shang-Hua. Finding endogenously formed communities. In Proceedings of the Twenty-Fourth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 767–783. Society for Industrial and Applied Mathematics, 2013.
[BBG13] Balcan, Maria-Florina, Blum, Avrim, and Gupta, Anupam. Clustering under approximation stability. Journal of the ACM (JACM), 60(2):8, 2013.
[BBIS16] Balyo, Tomáš, Biere, Armin, Iser, Markus, and Sinz, Carsten. SAT race 2015. Artificial Intelligence, 241:45–65, 2016.
[BBK14] Bansal, Trapit, Bhattacharyya, Chiranjib, and Kannan, Ravindran. A provable SVD-based algorithm for learning topics in dominant admixture corpus. In Advances in Neural Information Processing Systems 27 (NIPS), pages 1997–2005, 2014.
[BBL09] Balcan, Maria-Florina, Beygelzimer, Alina, and Langford, John. Agnostic active learning. Journal of Computer and System Sciences, 75(1):78–89, 2009. Special Issue on Learning Theory. An earlier version appeared in International Conference on Machine Learning 2006.
[BBV08] Balcan, Maria-Florina, Blum, Avrim, and Vempala, Santosh. A discriminative framework for clustering via similarity functions. In Proceedings of the Fortieth Annual ACM Symposium on Theory of Computing, pages 671–680. ACM, 2008.
[BEHW87] Blumer, Anselm, Ehrenfeucht, Andrzej, Haussler, David, and Warmuth, Manfred K.. Occam’s razor. Information Processing Letters, 24:377–380, 1987.
[BEHW89] Blumer, Anselm, Ehrenfeucht, Andrzej, Haussler, David, and Warmuth, Manfred K.. Learnability and the Vapnik-Chervonenkis dimension. Journal of the Association for Computing Machinery, 36(4):929–965, 1989.
[Ben09] Bengio, Yoshua. Learning deep architectures for AI. Foundations and Trends in Machine Learning, 2(1):1–127, 2009.
[BGG97] Burrus, C. Sidney, Gopinath, Ramesh A., and Guo, Haitao. Introduction to Wavelets and Wavelet Transforms: A Primer. Pearson, 1997.
[BGMZ97] Broder, Andrei Z., Glassman, Steven C., Manasse, Mark S., and Zweig, Geoffrey. Syntactic clustering of the web. Computer Networks and ISDN Systems, 29(8–13):1157–1166, 1997.
[BGV92] Boser, Bernhard E., Guyon, Isabelle M., and Vapnik, Vladimir N.. A training algorithm for optimal margin classifiers. In Proceedings of the Fifth Annual Workshop on Computational Learning Theory, pages 144–152. ACM, 1992.
[Bis06] Bishop, Christopher M.. Pattern Recognition and Machine Learning. Springer, 2006.
[Ble12] Blei, David M.. Probabilistic topic models. Communications of the ACM, 55(4):77–84, 2012.
[BLG14] Balcan, Maria-Florina, Liang, Yingyu, and Gupta, Pramod. Robust hierarchical clustering. Journal of Machine Learning Research, 15(1):3831–3871, 2014.
[Blo62] Block, Hans-Dieter. The perceptron: A model for brain functioning. Reviews of Modern Physics, 34:123–135, 1962. Reprinted in Neurocomputing, Anderson and Rosenfeld.
[BM98] Blum, Avrim and Mitchell, Tom. Combining labeled and unlabeled data with co-training. In Conference on Learning Theory (COLT). Morgan Kaufmann Publishers, 1998.
[BM02] Bartlett, Peter L. and Mendelson, Shachar. Rademacher and Gaussian complexities: Risk bounds and structural results. Journal of Machine Learning Research, 3:463–482, 2002.
[BM07] Blum, Avrim and Mansour, Yishay. From external to internal regret. Journal of Machine Learning Research, 8:1307–1324, 2007.
[BMPW98] Brin, Sergey, Motwani, Rajeev, Page, Lawrence, and Winograd, Terry. What can you do with a web in your pocket? Data Engineering Bulletin, 21:37–47, 1998.
[BMZ05] Braunstein, Alfredo, Mézard, Marc, and Zecchina, Riccardo. Survey propagation: An algorithm for satisfiability. Random Structures & Algorithms, 27(2):201–226, 2005.
[BNJ03] Blei, David M., Ng, Andrew Y., and Jordan, Michael I.. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993–1022, 2003.
[Bol01] Bollobás, Béla. Random Graphs. Cambridge University Press, Cambridge, 2001.
[BSS08] Bayati, Mohsen, Shah, Devavrat, and Sharma, Mayank. Max-product for maximum weight matching: Convergence, correctness, and LP duality. IEEE Transactions on Information Theory, 54(3):1241–1251, 2008.
[BT87] Bollobás, Béla and Thomason, Andrew. Threshold functions. Combinatorica, 7(1):35–38, 1987.
[BU14] Balcan, Maria-Florina and Urner, Ruth. Active Learning – Modern Learning Theory, pages 1–6. Springer, Berlin, Heidelberg, 2014.
[BVZ98] Boykov, Yuri, Veksler, Olga, and Zabih, Ramin. Markov random fields with efficient approximations. In Proceedings of the 1998 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pages 648–655. IEEE, 1998.
[CBFH+97] Cesa-Bianchi, Nicolo, Freund, Yoav, Haussler, David, Helmbold, David P., Schapire, Robert E., and Warmuth, Manfred K.. How to use expert advice. Journal of the ACM, 44(3):427–485, 1997.
[CD10] Chaudhuri, Kamalika and Dasgupta, Sanjoy. Rates of convergence for the cluster tree. In Advances in Neural Information Processing Systems, pages 343–351, 2010.
[CF86] Chao, Ming-Te and Franco, John V.. Probabilistic analysis of two heuristics for the 3-satisfiability problem. SIAM Journal on Computing, 15(4):1106–1118, 1986.
[CGTS99] Charikar, Moses, Guha, Sudipto, Tardos, Éva, and Shmoys, David B.. A constant-factor approximation algorithm for the k-median problem (extended abstract). In Proceedings of the Thirty-First Annual ACM Symposium on Theory of Computing, STOC ’99, pages 1–10. ACM, New York, 1999.
[CHK+01] Callaway, Duncan S., Hopcroft, John E., Kleinberg, Jon M., Newman, M. E. J., and Strogatz, Steven H.. Are randomly grown graphs really random? Physical Review E, 64(4), 2001.
[Chv92] Chvátal, Vašek and Reed, Bruce. Mick gets some (the odds are on his side) (satisfiability). In Proceedings of the 33rd Annual Symposium on Foundations of Computer Science, pages 620–627. IEEE, 1992.
[CLMW11] Candès, Emmanuel J., Li, Xiaodong, Ma, Yi, and Wright, John. Robust principal component analysis? Journal of the ACM, 58(3):11, 2011.
[CR08] Chaudhuri, Kamalika and Rao, Satish. Learning mixtures of product distributions using correlations and independence. In COLT, pages 9–20, 2008.
[CSZ06] Chapelle, Olivier, Schölkopf, Bernhard, and Zien, Alexander, editors. Semi-Supervised Learning. MIT Press, Cambridge, MA, 2006.
[CV95] Cortes, Corinna and Vapnik, Vladimir. Support-vector networks. Machine Learning, 20(3):273–297, 1995.
[Das11] Dasgupta, Sanjoy. Two faces of active learning. Theoretical Computer Science, 412(19):1767–1781, April 2011.
[Dau90] Daubechies, Ingrid. The wavelet transform, time-frequency localization and signal analysis. IEEE Transactions on Information Theory, 36(5):961–1005, 1990.
[DE03] Donoho, David L. and Elad, Michael. Optimally sparse representation in general (nonorthogonal) dictionaries via ℓ1 minimization. Proceedings of the National Academy of Sciences, 100(5):2197–2202, 2003.
[DFK91] Dyer, Martin, Frieze, Alan, and Kannan, Ravindran. A random polynomial time algorithm for approximating the volume of convex bodies. Journal of the Association for Computing Machinery, 38:1–17, 1991.
[DFK+99] Drineas, Petros, Frieze, Alan M., Kannan, Ravindran, Vempala, Santosh, and Vinay, V.. Clustering in large graphs and matrices. In SODA, pages 291–299, 1999.
[DG99] Dasgupta, Sanjoy and Gupta, Anupam. An elementary proof of the Johnson-Lindenstrauss lemma. International Computer Science Institute, Technical Report, 22(1):1–5, 1999.
[DHKS05] Dasgupta, Anirban, Hopcroft, John E., Kleinberg, Jon M., and Sandler, Mark. On learning mixtures of heavy-tailed distributions. In FOCS, pages 491–500, 2005.
[DKM06a] Drineas, Petros, Kannan, Ravindran, and Mahoney, Michael W.. Fast Monte Carlo algorithms for matrices I: Approximating matrix multiplication. SIAM Journal on Computing, 36(1):132–157, 2006.
[DKM06b] Drineas, Petros, Kannan, Ravindran, and Mahoney, Michael W.. Fast Monte Carlo algorithms for matrices II: Computing a low-rank approximation to a matrix. SIAM Journal on Computing, 36(1):158–183, 2006.
[Don06] Donoho, David L.. Compressed sensing. IEEE Transactions on Information Theory, 52(4):1289–1306, 2006.
[DS84] Doyle, Peter G. and Snell, J. Laurie. Random Walks and Electric Networks, volume 22 of Carus Mathematical Monographs. Mathematical Association of America, Washington, DC, 1984.
[DS03] Donoho, David L. and Stodden, Victoria. When does non-negative matrix factorization give a correct decomposition into parts? In Advances in Neural Information Processing Systems 16 (NIPS), pages 1141–1148, 2003.
[DS07] Dasgupta, Sanjoy and Schulman, Leonard J.. A probabilistic analysis of EM for mixtures of separated, spherical Gaussians. Journal of Machine Learning Research, 8:203–226, 2007.
[DTD06] Donoho, David, Tsaig, Yaakov, and Donoho, David L.. Compressed sensing. IEEE Transactions on Information Theory, pages 1289–1306, 2006.
[ER60] Erdös, Paul and Rényi, Alfred. On the evolution of random graphs. Publication of the Mathematical Institute of the Hungarian Academy of Sciences, 5:17–61, 1960.
[EVL10] Elmachtoub, Adam N. and Van Loan, Charles F.. From random polygon to ellipse: An eigenanalysis. SIAM Review, 52(1):151–170, 2010.
[FCMR08] Filippone, Maurizio, Camastra, Francesco, Masulli, Francesco, and Rovetta, Stefano. A survey of kernel and spectral methods for clustering. Pattern Recognition, 41(1):176–190, 2008.
[FD07] Frey, Brendan J. and Dueck, Delbert. Clustering by passing messages between data points. Science, 315(5814):972–976, 2007.
[Fel68] Feller, William. An Introduction to Probability Theory and Its Applications, volume 1. Wiley, January 1968.
[FK99] Frieze, Alan M. and Kannan, Ravindran. Quick approximation to matrices and applications. Combinatorica, 19(2):175–220, 1999.
[FK00] Frey, Brendan J. and Koetter, Ralf. Exact inference using the attenuated max-product algorithm. In Advanced Mean Field Methods: Theory and Practice, pages 213–228. MIT Press, 2000.
[FK15] Frieze, A. and Karoński, M.. Introduction to Random Graphs. Cambridge University Press, Cambridge, 2015.
[FKV04] Frieze, Alan, Kannan, Ravindran, and Vempala, Santosh. Fast Monte Carlo algorithms for finding low-rank approximations. Journal of the ACM (JACM), 51(6):1025–1041, 2004.
[FŁP+51] Florek, Kazimierz, Łukaszewicz, Jan, Perkal, Julian, Steinhaus, Hugo, and Zubrzycki, Stefan. Sur la liaison et la division des points d’un ensemble fini. In Colloquium Mathematicae, volume 2, pages 282–285, 1951.
[FM85] Flajolet, Philippe and Martin, G. Nigel. Probabilistic counting algorithms for data base applications. Journal of Computer and System Sciences, 31(2):182–209, 1985.
[Fri99] Friedgut, Ehud and Bourgain, Jean. Sharp thresholds of graph properties and the k-sat problem. Journal of the American Mathematical Society, 12(4):1017–1054, 1999.
[FS96] Frieze, Alan M. and Suen, Stephen. Analysis of two simple heuristics on a random instance of k-sat. Journal of Algorithms, 20(2):312–355, 1996.
[FS97] Freund, Yoav and Schapire, Robert E.. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55(1):119–139, 1997.
[GEB15] Gatys, Leon A., Ecker, Alexander S., and Bethge, Matthias. A neural algorithm of artistic style. CoRR, abs/1508.06576, 2015.
[Gha01] Ghahramani, Zoubin. An introduction to hidden Markov models and Bayesian networks. International Journal of Pattern Recognition and Artificial Intelligence, 15(1):9–42, 2001.
[Gib73] Gibbard, Allan. Manipulation of voting schemes: A general result. Econometrica, 41:587–601, 1973.
[GKL+15] Gardner, Jacob R., Kusner, Matt J., Li, Yixuan, Upchurch, Paul, Weinberger, Kilian Q., and Hopcroft, John E.. Deep manifold traversal: Changing labels with convolutional features. CoRR, abs/1511.06421, 2015.
[GKP94] Graham, Ronald L., Knuth, Donald E., and Patashnik, Oren. Concrete Mathematics – A Foundation for Computer Science (2nd ed.). Addison-Wesley, 1994.
[GKSS08] Gomes, Carla P., Kautz, Henry A., Sabharwal, Ashish, and Selman, Bart. Satisfiability solvers. Foundations of Artificial Intelligence, 3:89–134, 2008.
[GLS12] Grötschel, Martin, Lovász, László, and Schrijver, Alexander. Geometric Algorithms and Combinatorial Optimization, volume 2. Springer Science & Business Media, 2012.
[GN03] Gribonval, Rémi and Nielsen, Morten. Sparse decompositions in “incoherent” dictionaries. In Proceedings of the 2003 International Conference on Image Processing, ICIP 2003, Barcelona, Catalonia, Spain, September 14–18, 2003, pages 33–36, 2003.
[Gon85] Gonzalez, Teofilo F.. Clustering to minimize the maximum intercluster distance. Theoretical Computer Science, 38:293–306, 1985.
[GvL96] Golub, Gene H. and Van Loan, Charles F.. Matrix Computations (3rd ed.). Johns Hopkins University Press, Baltimore, 1996.
[GW95] Goemans, Michel X. and Williamson, David P.. Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming. Journal of the ACM (JACM), 42(6):1115–1145, 1995.
[HBB10] Hoffman, Matthew D., Blei, David M., and Bach, Francis R.. Online learning for latent Dirichlet allocation. In NIPS, pages 856–864, 2010.
[HLSH18] He, Kun, Li, Yingru, Soundarajan, Sucheta, and Hopcroft, John E.. Hidden community detection in social networks. Information Sciences, 425:92–106, 2018.
[HMMR15] Hennig, Christian, Meila, Marina, Murtagh, Fionn, and Rocci, Roberto. Handbook of Cluster Analysis. CRC Press, 2015.
[HSB+15] He, Kun, Sun, Yiwei, Bindel, David, Hopcroft, John E., and Li, Yixuan. Detecting overlapping communities from local spectral subspaces. In 2015 IEEE International Conference on Data Mining, ICDM 2015, Atlantic City, NJ, USA, November 14–17, 2015, pages 769–774, 2015.
[HSC+15] He, Kun, Soundarajan, Sucheta, Cao, Xuezhi, Hopcroft, John E., and Huang, Menglong. Revealing multiple layers of hidden community structure in networks. CoRR, abs/1501.05700, 2015.
[IN77] Yudin, David B. and Nemirovskii, Arkadi S.. Informational complexity and efficient methods for solving complex extremal problems. Matekon, 13(3):25–45, 1977.
[Jai10] Jain, Anil K.. Data clustering: 50 years beyond k-means. Pattern Recognition Letters, 31(8):651–666, 2010.
[Jer98] Jerrum, Mark. Mathematical foundations of the Markov chain Monte Carlo method. In Hochbaum, Dorit, editor, Approximation Algorithms for NP-hard Problems. PWS Publishing Co., 1998.
[JKLP93] Janson, Svante, Knuth, Donald E., Łuczak, Tomasz, and Pittel, Boris. The birth of the giant component. Random Structures & Algorithms, 4(3):233–359, 1993.
[JLR00] Janson, Svante, Łuczak, Tomasz, and Ruciński, Andrzej. Random Graphs. John Wiley and Sons, 2000.
[Joa99] Joachims, Thorsten. Transductive inference for text classification using support vector machines. In International Conference on Machine Learning, pages 200–209, 1999.
[Kan09] Kannan, Ravindran. A new probability inequality using typical moments and concentration results. In FOCS, pages 211–220, 2009.
[Kar90] Karp, Richard M.. The transitive closure of a random digraph. Random Structures and Algorithms, 1(1):73–94, 1990.
[KFL01] Kschischang, Frank R., Frey, Brendan J., and Loeliger, Hans-Andrea. Factor graphs and the sum-product algorithm. IEEE Transactions on Information Theory, 47(2):498–519, 2001.
[Kha79] Khachiyan, Leonid G.. A polynomial algorithm in linear programming. Akademiia Nauk SSSR, Doklady, 244:1093–1096, 1979.
[KK10] Kumar, Amit and Kannan, Ravindran. Clustering with spectral norm and the k-means algorithm. In 51st Annual IEEE Symposium on Foundations of Computer Science (FOCS), pages 299–308. IEEE, 2010.
[Kle99] Kleinberg, Jon M.. Authoritative sources in a hyperlinked environment. Journal of the ACM, 46(5):604–632, 1999.
[Kle00] Kleinberg, Jon M.. The small-world phenomenon: An algorithm perspective. In STOC, pages 163–170, 2000.
[Kle02] Kleinberg, Jon M.. An impossibility theorem for clustering. In NIPS, pages 446–453, 2002.
[KS13] Krivelevich, Michael and Sudakov, Benny. The phase transition in random graphs: A simple proof. Random Structures & Algorithms, 43(2):131–138, 2013.
[KV95] Kearns, Michael and Vazirani, Umesh. An Introduction to Computational Learning Theory. MIT Press, Cambridge, MA, 1995.
[KV09] Kannan, Ravindran and Vempala, Santosh. Spectral algorithms. Foundations and Trends in Theoretical Computer Science, 4(3–4):157–288, 2009.
[KVV04] Kannan, Ravindran, Vempala, Santosh, and Vetta, Adrian. On clusterings: Good, bad and spectral. Journal of the ACM, 51(3):497–515, May 2004.
[LCB+04] Lanckriet, Gert R. G., Cristianini, Nello, Bartlett, Peter, Ghaoui, Laurent El, and Jordan, Michael I.. Learning the kernel matrix with semidefinite programming. Journal of Machine Learning Research, 5(Jan):27–72, 2004.
[LHBH15] Li, Yixuan, He, Kun, Bindel, David, and Hopcroft, John E.. Uncovering the small community structure in large networks: A local spectral approach. In Proceedings of the 24th International Conference on World Wide Web, WWW 2015, Florence, Italy, May 18–22, 2015, pages 658–668, 2015.
[Lis13] List, Christian. Social choice theory. In Zalta, Edward N., editor, The Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University, winter 2013 edition, 2013.
[Lit87] Littlestone, Nick. Learning quickly when irrelevant attributes abound: A new linear-threshold algorithm. In 28th Annual Symposium on Foundations of Computer Science, pages 68–77. IEEE, 1987.
[Liu01] Liu, Jun. Monte Carlo Strategies in Scientific Computing. Springer, 2001.
[Llo82] Lloyd, Stuart. Least squares quantization in PCM. IEEE Transactions on Information Theory, 28(2):129–137, 1982.
[LW94] Littlestone, Nick and Warmuth, Manfred K.. The weighted majority algorithm. Information and Computation, 108(2):212–261, 1994.
[Mat10] Matoušek, Jiří. Geometric Discrepancy, volume 18 of Algorithms and Combinatorics. Springer-Verlag, Berlin, 2010. An illustrated guide, revised paperback reprint of the 1999 original.
[McS01] McSherry, Frank. Spectral partitioning of random graphs. In FOCS, pages 529–537, 2001.
[MG82] Misra, Jayadev and Gries, David. Finding repeated elements. Science of Computer Programming, 2(2):143–152, 1982.
[Mit97] Mitchell, Tom M.. Machine Learning. McGraw-Hill, New York, 1997.
[MM02] Manku, Gurmeet Singh and Motwani, Rajeev. Approximate frequency counts over data streams. In Proceedings of the 28th International Conference on Very Large Data Bases, pages 346–357. VLDB Endowment, 2002.
[MP69] Minsky, Marvin and Papert, Seymour. Perceptrons: An Introduction to Computational Geometry. MIT Press, Cambridge, MA, 1969.
[MPZ02] Mézard, Marc, Parisi, Giorgio, and Zecchina, Riccardo. Analytic and algorithmic solution of random satisfiability problems. Science, 297(5582):812–815, 2002.
[MR95a] Molloy, Michael and Reed, Bruce A.. A critical point for random graphs with a given degree sequence. Random Structures & Algorithms, 6(2/3):161–180, 1995.
[MR95b] Motwani, Rajeev and Raghavan, Prabhakar. Randomized Algorithms. Cambridge University Press, Cambridge, 1995.
[MR99] Motwani, Rajeev and Raghavan, Prabhakar. Randomized algorithms. In Algorithms and Theory of Computation Handbook, pages 15-1–15-23. CRC, Boca Raton, FL, 1999.
[MU05] Mitzenmacher, Michael and Upfal, Eli. Probability and Computing – Randomized Algorithms and Probabilistic Analysis. Cambridge University Press, Cambridge, 2005.
[MV10] Moitra, Ankur and Valiant, Gregory. Settling the polynomial learnability of mixtures of Gaussians. In FOCS, pages 93–102, 2010.
[Nov62] Novikoff, Albert B.J.. On convergence proofs on perceptrons. In Proceedings of the Symposium on the Mathematical Theory of Automata, Vol. XII, pages 615–622, 1962.
[OHM06] Orsi, Robert, Helmke, Uwe, and Moore, John B.. A Newton-like method for solving rank constrained linear matrix inequalities. Automatica, 42(11):1875–1882, 2006.
[Pal85] Palmer, Edgar M.. Graphical Evolution: An Introduction to the Theory of Random Graphs. Wiley-Interscience Series in Discrete Mathematics. John Wiley & Sons Ltd., Chichester, 1985.
[Par98] Parlett, Beresford N.. The Symmetric Eigenvalue Problem, volume 20 of Classics in Applied Mathematics. Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 1998. Corrected reprint of the 1980 original.
[per10] Levin, David Asher, Peres, Yuval, and Wilmer, Elizabeth Lee. Markov Chains and Mixing Times. American Mathematical Society, Providence, RI, 2010.
[RFP10] Recht, Benjamin, Fazel, Maryam, and Parrilo, Pablo A.. Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization. SIAM Review, 52(3):471–501, 2010.
[RV99] Kannan, Ravindran and Vinay, V.. Analyzing the Structure of Large Graphs. Rheinische Friedrich-Wilhelms-Universität Bonn, 1999.
[Sat75] Satterthwaite, Mark A.. Strategy-proofness and Arrow’s conditions: Existence and correspondence theorems for voting procedures and social welfare functions. Journal of Economic Theory, 10:187–217, 1975.
[Sch90] Schapire, Robert E.. The strength of weak learnability. Machine Learning, 5:197–227, 1990.
[Sho70] Shor, Naum Z.. Convergence rate of the gradient descent method with dilatation of the space. Cybernetics and Systems Analysis, 6(2):102–108, 1970.
[SJ89] Sinclair, Alistair and Jerrum, Mark. Approximate counting, uniform generation and rapidly mixing Markov chains. Information and Computation, 82:93–133, 1989.
[Sly10] Sly, Allan. Computational transition at the uniqueness threshold. In FOCS, pages 287–296, 2010.
[SN97] Strang, Gilbert and Nguyen, Truong Q.. Wavelets and Filter Banks. Wellesley-Cambridge Press, 1997.
[SS01] Schölkopf, Bernhard and Smola, Alexander J.. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press, Cambridge, MA, 2001.
[SSBD14] Shalev-Shwartz, Shai and Ben-David, Shai. Understanding Machine Learning: From Theory to Algorithms. Cambridge University Press, Cambridge, 2014.
[ST04] Spielman, Daniel A. and Teng, Shang-Hua. Smoothed analysis of algorithms: Why the simplex algorithm usually takes polynomial time. Journal of the ACM, 51(3):385–463, 2004.
[STBWA98] Shawe-Taylor, John, Bartlett, Peter L., Williamson, Robert C., and Anthony, Martin. Structural risk minimization over data-dependent hierarchies. IEEE Transactions on Information Theory, 44(5):1926–1940, 1998.
[SWY75] Salton, Gerard, Wong, Anita, and Yang, Chung-Shu. A vector space model for automatic indexing. Communications of the ACM, 18(11):613–620, 1975.
[Thr96] Thrun, Sebastian. Explanation-Based Neural Network Learning: A Lifelong Learning Approach. Kluwer Academic Publishers, Boston, MA, 1996.
[TM95] Thrun, Sebastian and Mitchell, Tom M.. Lifelong robot learning. Robotics and Autonomous Systems, 15(1–2):25–46, 1995.
[Val84] Valiant, Leslie G.. A theory of the learnable. In STOC, pages 436–445, 1984.
[Val13] Valiant, Leslie. Probably Approximately Correct: Nature’s Algorithms for Learning and Prospering in a Complex World. Basic Books, New York, 2013.
[Vap82] Vapnik, Vladimir N.. Estimation of Dependences Based on Empirical Data. Springer-Verlag, New York, 1982.
[Vap98] Vapnik, Vladimir N.. Statistical Learning Theory. John Wiley and Sons Inc., New York, 1998.
[VC71] Vapnik, Vladimir N. and Chervonenkis, Alexey. On the uniform convergence of relative frequencies of events to their probabilities. Theory of Probability and its Applications, 16(2):264–280, 1971.
[Vem04] Vempala, Santosh. The Random Projection Method. DIMACS, 2004.
[VW02] Vempala, Santosh and Wang, Grant. A spectral algorithm for learning mixtures of distributions. Journal of Computer and System Sciences, pages 113–123, 2002.
[War63] Ward, Joe H.. Hierarchical grouping to optimize an objective function. Journal of the American Statistical Association, 58(301):236–244, 1963.
[Wei97] Weiss, Yair. Belief propagation and revision in networks with loops. Technical Report A.I. Memo No. 1616, MIT, 1997.
[WF01] Weiss, Yair and Freeman, William T.. On the optimality of solutions of the max-product belief-propagation algorithm in arbitrary graphs. IEEE Transactions on Information Theory, 47(2):736–744, 2001.
[Wil06] Wilf, Herbert S.. Generatingfunctionology. A. K. Peters Series. A. K. Peters, 2006.
[Wis69] Wishart, David. Mode analysis: A generalization of nearest neighbor which reduces chaining effects. Numerical Taxonomy, 76(282–311):17, 1969.
[WS98] Watts, Duncan J. and Strogatz, Steven H.. Collective dynamics of ‘small-world’ networks. Nature, 393(6684), 1998.
[WW96] Whittaker, Edmund Taylor and Watson, George Neville. A Course of Modern Analysis. Cambridge Mathematical Library. Cambridge University Press, Cambridge, 1996. An introduction to the general theory of infinite processes and of analytic functions; with an account of the principal transcendental functions, reprint of the fourth (1927) edition.
[YFW01] Yedidia, Jonathan S., Freeman, William T., and Weiss, Yair. Bethe free energy, Kikuchi approximations, and belief propagation algorithms. IEEE Transactions on Information Theory, 51(7):2282–2312, 2005.
[YFW03] Yedidia, Jonathan S., Freeman, William T., and Weiss, Yair. Understanding belief propagation and its generalizations. Exploring Artificial Intelligence in the New Millennium, 8:236–239, 2003.
[ZGL03] Zhu, Xiaojin, Ghahramani, Zoubin, and Lafferty, John. Semi-supervised learning using Gaussian fields and harmonic functions. In Proceedings of the 20th International Conference on Machine Learning, pages 912–912, 2003.
[Zhu06] Zhu, Xiaojin. Semi-supervised learning literature survey. Computer Sciences TR 1530, University of Wisconsin–Madison, 2006.
