
bd2018's Introduction

Big Data and Deep Learning Systems

Fall 2018

Announcements

Schedule (tentative)

Week | Dates | Lecture | Homework/Project
Week 1 | 9.4/6 | Introduction. Resource managers: YARN, Mesos, Borg |
Week 2 | 9.11/13 | Meta-framework: REEF. Dataflow processing: MR, Dryad |
Week 3 | 9.18/20 | Dataflow processing: Spark, Tez, Vortex, Naiad | HW1 out
Week 4 | 9.25 (Chuseok)/27 | Programming: Hive, DryadLINQ, Spark/Shark, Pig, FlumeJava, Beam | Team formation & project proposal due
Week 5 | 10.2/4 | Stream processing: SparkStreaming, Storm, Heron, Flink, MIST |
Week 6 | 10.9/11 | Stream processing: SparkStreaming, Storm, Heron, Flink, MIST | HW1 due, HW2 out
Week 7 | 10.16/18 | ML/DL frameworks: Parameter Server, TensorFlow |
Week 8 | 10.23/25 | DL frameworks: TensorFlow, Caffe2, Torch | Project progress presentation (11.1)
Week 9 | 10.30/11.1 | DL frameworks: TensorFlow, Caffe2, Torch |
Week 10 | 11.6/8 | DL frameworks: TensorFlow, Caffe2, Torch | HW2 due
Week 11 | 11.13/15 | Graph processing: Pregel, GraphLab, X-Stream, Arabesque |
Week 12 | 11.20/22 | Graph processing: Pregel, GraphLab, X-Stream, Arabesque | Survey paper due (covering >= 5 papers)
Week 13 | 11.27/29 | Distributed store: GFS, Bigtable, Dynamo |
Week 14 | 12.4/6 | Coordination: Chubby, ZooKeeper |
Week 15 | 12.11/13 | TBD; project presentation |
Week 16 | 12.18 | Project presentation | Project report due (12.20)

Reading list

Resource management

  • YARN. Apache Hadoop YARN: Yet Another Resource Negotiator. Vinod Kumar Vavilapalli, Arun C Murthy, Chris Douglas, Sharad Agarwal, Mahadev Konar, Robert Evans, Thomas Graves, Jason Lowe, Hitesh Shah, Siddharth Seth, Bikas Saha, Carlo Curino, Owen O’Malley, Sanjay Radia, Benjamin Reed, Eric Baldeschwieler. SOCC 2013.
  • Mesos. Mesos: A Platform for Fine-Grained Resource Sharing in the Data Center. Benjamin Hindman, Andy Konwinski, Matei Zaharia, Ali Ghodsi, Anthony D. Joseph, Randy Katz, Scott Shenker, Ion Stoica. NSDI 2011.
  • Borg. Large-scale cluster management at Google with Borg. Abhishek Verma, Luis Pedrosa, Madhukar Korupolu, David Oppenheimer, Eric Tune, John Wilkes. EuroSys 2015.

Meta-framework

  • REEF. Apache REEF: Retainable Evaluator Execution Framework. Byung-Gon Chun, Tyson Condie, Yingda Chen, Brian Cho, Andrew Chung, Carlo Curino, Chris Douglas, Matteo Interlandi, Beomyeol Jeon, Joo Seong Jeong, Gye-Won Lee, Yunseong Lee, Tony Majestro, Dahlia Malkhi, Sergiy Matusevych, Brandon Myers, Mariia Mykhailova, Shravan Narayanamurthy, Joseph Noor, Raghu Ramakrishnan, Sriram Rao, Russell Sears, Beysim Sezgin, Tae-Geon Um, Julia Wang, Markus Weimer, Youngseok Yang. ACM TOCS, September 2017.

Dataflow Processing Framework

  • MapReduce. MapReduce: Simplified Data Processing on Large Clusters. Jeffrey Dean and Sanjay Ghemawat. OSDI 2004.
  • Dryad. Dryad: Distributed Data-Parallel Programs from Sequential Building Blocks. Michael Isard, Mihai Budiu, Yuan Yu, Andrew Birrell, Dennis Fetterly. Eurosys 2007.
  • Spark. Resilient Distributed Datasets: A Fault-Tolerant Abstraction for In-Memory Cluster Computing. Matei Zaharia, Mosharaf Chowdhury, Tathagata Das, Ankur Dave, Justin Ma, Murphy McCauley, Michael J. Franklin, Scott Shenker, Ion Stoica. NSDI 2012.
  • CIEL. CIEL: a universal execution engine for distributed data-flow computing. Derek G. Murray, Malte Schwarzkopf, Christopher Smowton, Steven Smith, Anil Madhavapeddy, Steven Hand. NSDI 2011.
  • Naiad. Naiad: A Timely Dataflow System. Derek G. Murray, Frank McSherry, Rebecca Isaacs, Michael Isard, Paul Barham, Martin Abadi. SOSP 2013.
  • Tez. Apache Tez: A Unifying Framework for Modeling and Building Data Processing Applications. Bikas Saha, Hitesh Shah, Siddharth Seth, Gopal Vijayaraghavan, Arun Murthy, Carlo Curino. SIGMOD 2015.
  • Optimus. Optimus: A Dynamic Rewriting Framework for Data-Parallel Execution Plans. Qifa Ke, Michael Isard, Yuan Yu. EuroSys 2013.
  • Pado. Pado: A Data Processing Engine for Harnessing Transient Resources in Datacenters. Youngseok Yang, Geon-Woo Kim, Won Wook Song, Yunseong Lee, Andrew Chung, Zhengping Qian, Brian Cho, Byung-Gon Chun. EuroSys 2017.
  • PerfAnalysis. Making Sense of Performance in Data Analytics Frameworks. Kay Ousterhout, Ryan Rasti, Sylvia Ratnasamy, Scott Shenker, Byung-Gon Chun. NSDI 2015.
  • [Flare]. Flare: Optimizing Apache Spark for Scale-Up Architectures and Medium-Size Data. Gregory Essertel, Ruby Tahboub, James Decker, Kevin Brown, Kunle Olukotun, Tiark Rompf. OSDI 2018.

High-level Data Processing Programming

  • Hive. Hive – A Petabyte Scale Data Warehouse Using Hadoop. Ashish Thusoo, Joydeep Sen Sarma, Namit Jain, Zheng Shao, Prasad Chakka, Ning Zhang, Suresh Antony, Hao Liu and Raghotham Murthy. ICDE 2010.
  • Pig. Pig Latin: A Not-So-Foreign Language for Data Processing. Christopher Olston, Benjamin Reed, Utkarsh Srivastava, Ravi Kumar, Andrew Tomkins. SIGMOD 2008.
  • FlumeJava. FlumeJava: Easy, Efficient Data-Parallel Pipelines. Craig Chambers, Ashish Raniwala, Frances Perry, Stephen Adams, Robert R. Henry, Robert Bradshaw, Nathan Weizenbaum. PLDI 2010.
  • DryadLINQ. DryadLINQ: A System for General-Purpose Distributed Data-Parallel Computing Using a High-Level Language. Yuan Yu, Michael Isard, Dennis Fetterly, Mihai Budiu, Úlfar Erlingsson, Pradeep Kumar Gunda, Jon Currey. OSDI 2008.
  • SCOPE. SCOPE: Easy and Efficient Parallel Processing of Massive Data Sets. Ronnie Chaiken, Bob Jenkins, Per-Åke Larson, Bill Ramsey, Darren Shakib, Simon Weaver, Jingren Zhou. VLDB 2008.
  • Beam. Apache Beam.

Stream Processing

  • Storm. Storm @ Twitter. Ankit Toshniwal, Siddarth Taneja, Amit Shukla, Karthik Ramasamy, Jignesh M. Patel, Sanjeev Kulkarni, Jason Jackson, Krishna Gade, Maosong Fu, Jake Donham, Nikunj Bhagat, Sailesh Mittal, Dmitriy Ryaboy. SIGMOD 2014.
  • Heron. Twitter Heron: Stream Processing at Scale. Sanjeev Kulkarni, Nikunj Bhagat, Maosong Fu, Vikas Kedigehalli, Christopher Kellogg, Sailesh Mittal, Jignesh M. Patel, Karthik Ramasamy, Siddarth Taneja. SIGMOD 2015.
  • SparkStreaming. Discretized Streams: Fault-Tolerant Streaming Computation at Scale. Matei Zaharia, Tathagata Das, Haoyuan Li, Timothy Hunter, Scott Shenker, Ion Stoica. SOSP 2013.
  • Flink. Apache Flink.
  • [FlinkSM]. State management in Apache Flink®: consistent stateful distributed stream processing. Paris Carbone, Stephan Ewen, Gyula Fora, Seif Haridi, Stefan Richter, Kostas Tzoumas. August 2017.
  • StreamScope. StreamScope: Continuous Reliable Distributed Processing of Big Data Streams. Wei Lin, Haochuan Fan, Zhengping Qian, Junwei Xu, Sen Yang, Jingren Zhou, Lidong Zhou. NSDI 2016.
  • MillWheel. MillWheel: Fault-Tolerant Stream Processing at Internet Scale. Tyler Akidau, Alex Balikov, Kaya Bekiroglu, Slava Chernyak, Josh Haberman, Reuven Lax, Sam McVeety, Daniel Mills, Paul Nordstrom, Sam Whittle. VLDB 2013.
  • Dataflow. The Dataflow Model: A Practical Approach to Balancing Correctness, Latency, and Cost in Massive-Scale, Unbounded, Out-of-Order Data Processing. Tyler Akidau, Robert Bradshaw, Craig Chambers, Slava Chernyak, Rafael J. Fernandez-Moctezuma, Reuven Lax, Sam McVeety, Daniel Mills, Frances Perry, Eric Schmidt, Sam Whittle. VLDB 2015.
  • Samza. Samza: Stateful Scalable Stream Processing at LinkedIn. Shadi A. Noghabi, Kartik Paramasivam, Yi Pan, Navina Ramesh, Jon Bringhurst, Indranil Gupta, Roy H. Campbell. VLDB 2017.
  • RealtimeFacebook. Realtime Data Processing at Facebook. Guoqiang Jerry Chen, Janet L. Wiener, Shridhar Iyer, Anshul Jaiswal, Ran Lei, Nikhil Simha, Wei Wang, Kevin Wilfong, Tim Williamson, Serhat Yilmaz. SIGMOD 2016.
  • Trill. Trill: A High-Performance Incremental Query Processor for Diverse Analytics. Badrish Chandramouli, Jonathan Goldstein, Mike Barnett, Robert DeLine, Danyel Fisher, John C. Platt, James F. Terwilliger, John Wernsing. VLDB 2014.
  • SEEP. Integrating Scale Out and Fault Tolerance in Stream Processing using Operator State Management. Raul Castro Fernandez, Matteo Migliavacca, Evangelia Kalyvianaki, Peter Pietzuch. SIGMOD 2013.
  • StructuredStreaming. Structured Streaming: A Declarative API for Real-Time Applications in Apache Spark. Michael Armbrust, Tathagata Das, Joseph Torres, Burak Yavuz, Shixiong Zhu, Reynold Xin, Ali Ghodsi, Ion Stoica, Matei Zaharia. SIGMOD 2018.
  • Chi. Chi: A Scalable and Programmable Control Plane for Distributed Stream Processing Systems. Luo Mai, Kai Zeng, Rahul Potharaju, Le Xu, Steve Suh, Shivaram Venkataraman, Paolo Costa, Terry Kim, Saravanan Muthukrishnan, Vamsi Kuppa, Sudheer Dhulipalla, Sriram Rao. VLDB 2018.
  • [ThreeSteps]. Three steps is all you need: fast, accurate, automatic scaling decisions for distributed streaming dataflows. Vasiliki Kalavri, John Liagouris, Moritz Hoffmann, Desislava Dimitrova, Matthew Forshaw, Timothy Roscoe. OSDI 2018.

Machine Learning/Deep Learning

  • FacebookAIInfra. Applied Machine Learning at Facebook: A Datacenter Infrastructure Perspective. Kim Hazelwood, Sarah Bird, David Brooks, Soumith Chintala, Utku Diril, Dmytro Dzhulgakov, Mohamed Fawzy, Bill Jia, Yangqing Jia, Aditya Kalro, James Law, Kevin Lee, Jason Lu, Pieter Noordhuis, Misha Smelyanskiy, Liang Xiong, Xiaodong Wang. HPCA 2018.
  • PS. Scaling Distributed Machine Learning with the Parameter Server. Mu Li, David G. Andersen, Jun Woo Park, Alexander J. Smola, Amr Ahmed, Vanja Josifovski, James Long, Eugene J. Shekita, and Bor-Yiing Su. OSDI 2014.
  • Petuum. Petuum: A New Platform for Distributed Machine Learning on Big Data. Eric P. Xing, Qirong Ho, Wei Dai, Jin Kyu Kim, Jinliang Wei, Seunghak Lee, Xun Zheng, Pengtao Xie, Abhimanu Kumar, and Yaoliang Yu. KDD 2015.
  • Adam. Project Adam: Building an Efficient and Scalable Deep Learning Training System. Trishul Chilimbi, Yutaka Suzue, Johnson Apacible, and Karthik Kalyanaraman. OSDI 2014.
  • TensorFlow. TensorFlow: A System for Large-Scale Machine Learning. Martín Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, Manjunath Kudlur, Josh Levenberg, Rajat Monga, Sherry Moore, Derek G. Murray, Benoit Steiner, Paul Tucker, Vijay Vasudevan, Pete Warden, Martin Wicke, Yuan Yu, Xiaoqiang Zheng. OSDI 2016.
  • TensorFlowDCF. Dynamic Control Flow in Large-Scale Machine Learning. Yuan Yu, Martin Abadi, Paul Barham, Eugene Brevdo, Mike Burrows, Andy Davis, Jeff Dean, Sanjay Ghemawat, Tim Harley, Peter Hawkins, Michael Isard, Manjunath Kudlur, Rajat Monga, Derek Murray, Xiaoqiang Zheng. EuroSys 2018.
  • RDAG. Improving the Expressiveness of Deep Learning Frameworks with Recursion. Eunji Jeong*, Joo Seong Jeong*, Soojeong Kim, Gyeong-In Yu, Byung-Gon Chun. EuroSys 2018.
  • Parallax. Parallax: Automatic Data-Parallel Training of Deep Neural Networks. Soojeong Kim, Gyeong-In Yu, Hojin Park, Sungwoo Cho, Eunji Jeong, Hyeonmin Ha, Sanha Lee, Joo Seong Jeong, Byung-Gon Chun. arXiv:1808.02621, August 2018.
  • Caffe2. Caffe2: A New Lightweight, Modular, and Scalable Deep Learning Framework.
  • PyTorch. PyTorch: Tensors and Dynamic neural networks in Python with strong GPU acceleration.
  • Torch. Torch: A Scientific Computing Framework for LuaJIT.
  • MXNet. MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems. Tianqi Chen, Mu Li, Yutian Li, Min Lin, Naiyan Wang, Minjie Wang, Tianjun Xiao, Bing Xu, Chiyuan Zhang, Zheng Zhang. arXiv:1512.01274v1, Dec. 3, 2015. (Web site: http://mxnet.io)
  • Caffe. Caffe: Convolutional Architecture for Fast Feature Embedding. Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Sergio Guadarrama, Trevor Darrell. ACM Multimedia 2014.
  • DistBelief. Large Scale Distributed Deep Networks. Jeffrey Dean, Greg S. Corrado, Rajat Monga, Kai Chen, Matthieu Devin, Quoc V. Le, Mark Z. Mao, Marc’Aurelio Ranzato, Andrew Senior, Paul Tucker, Ke Yang, Andrew Y. Ng. NIPS 2012.
  • BaiduDL. Deep learning with COTS HPC systems. Adam Coates, Brody Huval, Tao Wang, David J. Wu, Andrew Y. Ng, Bryan Catanzaro. ICML 2013.
  • DyNet. DyNet: The Dynamic Neural Network Toolkit. Graham Neubig, Chris Dyer, Yoav Goldberg, Austin Matthews, Waleed Ammar, Antonios Anastasopoulos, Miguel Ballesteros, David Chiang, Daniel Clothiaux, Trevor Cohn, Kevin Duh, Manaal Faruqui, Cynthia Gan, Dan Garrette, Yangfeng Ji, Lingpeng Kong, Adhiguna Kuncoro, Gaurav Kumar, Chaitanya Malaviya, Paul Michel, Yusuke Oda, Matthew Richardson, Naomi Saphra, Swabha Swayamdipta, Pengcheng Yin. 2017
  • [Ray]. Ray: A Distributed Framework for Emerging AI Applications. Robert Nishihara, Philipp Moritz, Michael I. Jordan, Ion Stoica, Stephanie Wang, Alexey Tumanov, Richard Liaw, Eric Liang, Melih Elibol, Zongheng Yang, William Paul. OSDI 2018.
  • [TVM]. TVM: An Automated End-to-End Optimizing Compiler for Deep Learning. Tianqi Chen, Thierry Moreau, Ziheng Jiang, Lianmin Zheng, Eddie Yan, Haichen Shen, Meghan Cowan, Leyuan Wang, Yuwei Hu, Luis Ceze, Carlos Guestrin, Arvind Krishnamurthy. OSDI 2018.
  • [Gandiva]. Gandiva: Introspective Cluster Scheduling for Deep Learning. Wencong Xiao, Romil Bhardwaj, Ramachandran Ramjee, Muthian Sivathanu, Nipun Kwatra, Zhenhua Han, Pratyush Patel, Xuan Peng, Hanyu Zhao, Quanlu Zhang, Fan Yang, Lidong Zhou. OSDI 2018.
  • [PRETZEL]. PRETZEL: Opening the Black Box of Machine Learning Prediction Serving Systems. Yunseong Lee, Alberto Scolari, Byung-Gon Chun, Marco Domenico Santambrogio, Markus Weimer, Matteo Interlandi. OSDI 2018.
  • CellularBatching. Low Latency RNN Inference with Cellular Batching. Pin Gao, Lingfan Yu, Yongwei Wu, Jinyang Li. EuroSys 2018.
  • TPU. In-Datacenter Performance Analysis of a Tensor Processing Unit. Norman P. Jouppi, Cliff Young, Nishant Patil, David Patterson, Gaurav Agrawal, Raminder Bajwa, Sarah Bates, Suresh Bhatia, Nan Boden, Al Borchers, Rick Boyle, Pierre-luc Cantin, Clifford Chao, Chris Clark, Jeremy Coriell, Mike Daley, Matt Dau, Jeffrey Dean, Ben Gelb, Tara Vazir Ghaemmaghami, Rajendra Gottipati, William Gulland, Robert Hagmann, C. Richard Ho, Doug Hogberg, John Hu, Robert Hundt, Dan Hurt, Julian Ibarz, Aaron Jaffey, Alek Jaworski, Alexander Kaplan, Harshit Khaitan, Daniel Killebrew, Andy Koch, Naveen Kumar, Steve Lacy, James Laudon, James Law, Diemthu Le, Chris Leary, Zhuyuan Liu, Kyle Lucke, Alan Lundin, Gordon MacKean, Adriana Maggiore, Maire Mahony, Kieran Miller, Rahul Nagarajan, Ravi Narayanaswami, Ray Ni, Kathy Nix, Thomas Norrie, Mark Omernick, Narayana Penukonda, Andy Phelps, Jonathan Ross, Matt Ross, Amir Salek, Emad Samadiani, Chris Severn, Gregory Sizikov, Matthew Snelham, Jed Souter, Dan Steinberg, Andy Swing, Mercedes Tan, Gregory Thorson, Bo Tian, Horia Toma, Erick Tuttle, Vijay Vasudevan, Richard Walter, Walter Wang, Eric Wilcox, and Doe Hyun Yoon. ISCA 2017.
  • Theano. Theano: A Python framework for fast computation of mathematical expressions. Rami Al-Rfou, Guillaume Alain, Amjad Almahairi, Christof Angermueller, Dzmitry Bahdanau, Nicolas Ballas, Frédéric Bastien, Justin Bayer, Anatoly Belikov, Alexander Belopolsky, Yoshua Bengio, Arnaud Bergeron, James Bergstra, Valentin Bisson, Josh Bleecher Snyder, Nicolas Bouchard, Nicolas Boulanger-Lewandowski, Xavier Bouthillier, Alexandre de Brébisson, Olivier Breuleux, Pierre-Luc Carrier, Kyunghyun Cho, Jan Chorowski, Paul Christiano, Tim Cooijmans, Marc-Alexandre Côté, Myriam Côté, Aaron Courville, Yann N. Dauphin, Olivier Delalleau, Julien Demouth, Guillaume Desjardins, Sander Dieleman, Laurent Dinh, Mélanie Ducoffe, Vincent Dumoulin, Samira Ebrahimi Kahou, Dumitru Erhan, Ziye Fan, Orhan Firat, Mathieu Germain, Xavier Glorot, Ian Goodfellow, Matt Graham, Caglar Gulcehre, Philippe Hamel, Iban Harlouchet, Jean-Philippe Heng, Balázs Hidasi, Sina Honari, Arjun Jain, Sébastien Jean, Kai Jia, Mikhail Korobov, Vivek Kulkarni, Alex Lamb, Pascal Lamblin, Eric Larsen, César Laurent, Sean Lee, Simon Lefrancois, Simon Lemieux, Nicholas Léonard, Zhouhan Lin, Jesse A. Livezey, Cory Lorenz, Jeremiah Lowin, Qianli Ma, Pierre-Antoine Manzagol, Olivier Mastropietro, Robert T. McGibbon, Roland Memisevic, Bart van Merriënboer, Vincent Michalski, Mehdi Mirza, Alberto Orlandi, Christopher Pal, Razvan Pascanu, Mohammad Pezeshki, Colin Raffel, Daniel Renshaw, Matthew Rocklin, Adriana Romero, Markus Roth, Peter Sadowski, John Salvatier, François Savard, Jan Schlüter, John Schulman, Gabriel Schwartz, Iulian Vlad Serban, Dmitriy Serdyuk, Samira Shabanian, Étienne Simon, Sigurd Spieckermann, S. Ramana Subramanyam, Jakub Sygnowski, Jérémie Tanguay, Gijs van Tulder, Joseph Turian, Sebastian Urban, Pascal Vincent, Francesco Visin, Harm de Vries, David Warde-Farley, Dustin J. Webb, Matthew Willson, Kelvin Xu, Lijun Xue, Li Yao, Saizheng Zhang, Ying Zhang. arXiv:1605.02688. May 9, 2016.

Graph Processing

  • Pregel. Pregel: A System for Large-Scale Graph Processing. Grzegorz Malewicz, Matthew H. Austern, Aart J. C. Bik, James C. Dehnert, Ilan Horn, Naty Leiser, and Grzegorz Czajkowski. SIGMOD 2010.
  • GraphLab. GraphLab: A New Framework For Parallel Machine Learning. Yucheng Low, Joseph Gonzalez, Aapo Kyrola, Danny Bickson, Carlos Guestrin, Joseph M. Hellerstein. UAI 2010.
  • DistributedGraphLab. Distributed GraphLab: A Framework for Machine Learning and Data Mining in the Cloud. Yucheng Low, Joseph Gonzalez, Aapo Kyrola, Danny Bickson, Carlos Guestrin, Joseph M. Hellerstein. VLDB 2012.
  • PowerGraph. PowerGraph: Distributed Graph-Parallel Computation on Natural Graphs. Joseph E. Gonzalez, Yucheng Low, Haijie Gu, Danny Bickson, Carlos Guestrin. OSDI 2012.
  • PowerLyra. PowerLyra: Differentiated Graph Computation and Partitioning on Skewed Graphs. Rong Chen, Jiaxin Shi, Yanzhe Chen, Haibo Chen. EuroSys 2015.
  • Gemini. Gemini: A Computation-Centric Distributed Graph Processing System. Xiaowei Zhu, Wenguang Chen, Weimin Zheng, Xiaosong Ma. OSDI 2016.
  • GraphX. GraphX: Graph Processing in a Distributed Dataflow Framework. Joseph E. Gonzalez, Reynold S. Xin, Ankur Dave, Daniel Crankshaw, Michael J. Franklin, Ion Stoica. OSDI 2014.
  • Arabesque. Arabesque: A System for Distributed Graph Mining (extended version). Carlos H. C. Teixeira, Alexandre J. Fonseca, Marco Serafini, Georgos Siganos, Mohammed J. Zaki, Ashraf Aboulnaga. A shorter version appeared at SOSP 2015.
  • Giraph. One Trillion Edges: Graph Processing at Facebook-Scale. Avery Ching, Sergey Edunov, Maja Kabiljo, Dionysios Logothetis, Sambavi Muthukrishnan. VLDB 2015.
  • [ASAP]. ASAP: Fast, Approximate Pattern Mining at Scale. Anand Padmanabha Iyer, Zaoxing Liu, Xin Jin, Shivaram Venkataraman, Vladimir Braverman, Ion Stoica. OSDI 2018.

Distributed Store

  • GFS. The Google File System. Sanjay Ghemawat, Howard Gobioff, and Shun-Tak Leung. SOSP 2003.
  • Bigtable. Bigtable: A Distributed Storage System for Structured Data. Fay Chang, Jeffrey Dean, Sanjay Ghemawat, Wilson C. Hsieh, Deborah A. Wallach, Mike Burrows, Tushar Chandra, Andrew Fikes, Robert E. Gruber. OSDI 2006.
  • Dynamo. Dynamo: Amazon’s Highly Available Key-value Store. Giuseppe DeCandia, Deniz Hastorun, Madan Jampani, Gunavardhan Kakulapati, Avinash Lakshman, Alex Pilchin, Swaminathan Sivasubramanian, Peter Vosshall and Werner Vogels. SOSP 2007.
  • Spanner. Spanner: Google’s Globally-Distributed Database. James C. Corbett, Jeffrey Dean, Michael Epstein, Andrew Fikes, Christopher Frost, JJ Furman, Sanjay Ghemawat, Andrey Gubarev, Christopher Heiser, Peter Hochschild, Wilson Hsieh, Sebastian Kanthak, Eugene Kogan, Hongyi Li, Alexander Lloyd, Sergey Melnik, David Mwaura, David Nagle, Sean Quinlan, Rajesh Rao, Lindsay Rolig, Yasushi Saito, Michal Szymaniak, Christopher Taylor, Ruth Wang, Dale Woodford. OSDI 2012.
  • Memcache. Scaling Memcache at Facebook. Rajesh Nishtala, Hans Fugal, Steven Grimm, Marc Kwiatkowski, Herman Lee, Harry C. Li, Ryan McElroy, Mike Paleczny, Daniel Peek, Paul Saab, David Stafford, Tony Tung, Venkateshwaran Venkataramani. NSDI 2013.
  • TAO. TAO: Facebook’s Distributed Data Store for the Social Graph. Nathan Bronson, Zach Amsden, George Cabrera, Prasad Chakka, Peter Dimov, Hui Ding,Jack Ferris, Anthony Giardullo, Sachin Kulkarni, Harry Li, Mark Marchukov, Dmitri Petrov, Lovro Puzar, Yee Jiun Song, Venkat Venkataramani. USENIX ATC 2013.

Coordination

  • Chubby. The Chubby lock service for loosely-coupled distributed systems. Mike Burrows. OSDI 2006.
  • Zookeeper. ZooKeeper: Wait-free coordination for Internet-scale systems. Patrick Hunt, Mahadev Konar, Flavio P. Junqueira, Benjamin Reed. USENIX ATC 2010.

bd2018's People

Contributors

bgchun, gyeongin, luomai


bd2018's Issues

BeamSQL question

In the Homework.java example, when issuing an SQL query against the PCollection inputTable created inside applySQLLogic(), how is that table's name (PCOLLECTION) determined?

[HW2 Notice] Parallax branch setting for CPU-only training, skeleton code updates

Here are a few points that were missing from the HW document and the lab session.

  1. If you train using CPUs only, please use the cpu_enable branch instead of the Parallax master branch.
    The current master branch assumes GPUs are used, so a few minor problems can occur when training with CPUs only.

  2. The skeleton code has been updated. Please download it again!
    2.1. run_horovod.py was missing the initial step that broadcasts the parameters to synchronize them; this has been added.
    2.2. run_tf.py now uses tf.train.SyncReplicasOptimizer so that training is synchronous. Even with this change, the execution of the current code still differs subtly from Horovod / Parallax. This is a consequence of how SyncReplicasOptimizer works: if you write your vanilla TF (PS architecture) code with SyncReplicasOptimizer for this assignment, it cannot produce exactly the same result as Horovod / Parallax. You may either analyze in your report why the results differ, or use a different approach (hint: tf.FIFOQueue & tf.ConditionalAccumulator) so that your code produces the same result as Horovod / Parallax. A hint on why the results differ is in Section 4.4 of the TF OSDI paper. (A minimal usage sketch of SyncReplicasOptimizer follows this list.)
    2.3. The model definition has been modified and input data shuffling has been disabled so that execution is deterministic.
    2.4. Example commands for running each script, and commands for examining the checkpoint files produced by a run, have been added as comments.
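
For reference, here is a minimal, self-contained sketch of how tf.train.SyncReplicasOptimizer wraps a base optimizer (plain TF 1.x, not the actual skeleton code; the toy model, NUM_REPLICAS value, and single-process session setup are illustrative assumptions). In a real multi-worker PS run, replicas_to_aggregate would be the number of workers and only the chief would pass is_chief=True.

    # Sketch: synchronous training with tf.train.SyncReplicasOptimizer (TF 1.x).
    # Assumptions: a toy linear-regression loss and a single local process;
    # NUM_REPLICAS = 1 so the example can run standalone without blocking.
    import numpy as np
    import tensorflow as tf

    NUM_REPLICAS = 1  # in a real run: the number of worker replicas

    x = tf.placeholder(tf.float32, shape=(None, 1))
    y = tf.placeholder(tf.float32, shape=(None, 1))
    w = tf.get_variable("w", shape=(1, 1))
    loss = tf.reduce_mean(tf.square(tf.matmul(x, w) - y))
    global_step = tf.train.get_or_create_global_step()

    base_opt = tf.train.GradientDescentOptimizer(0.1)
    sync_opt = tf.train.SyncReplicasOptimizer(
        base_opt,
        replicas_to_aggregate=NUM_REPLICAS,  # gradients aggregated before each update
        total_num_replicas=NUM_REPLICAS)
    train_op = sync_opt.minimize(loss, global_step=global_step)

    # The hook starts the chief's queue runner and fills the sync token queue.
    sync_hook = sync_opt.make_session_run_hook(is_chief=True)

    with tf.train.MonitoredTrainingSession(is_chief=True, hooks=[sync_hook]) as sess:
        batch_x = np.random.rand(8, 1).astype(np.float32)
        batch_y = 3.0 * batch_x
        for _ in range(10):
            sess.run(train_op, feed_dict={x: batch_x, y: batch_y})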

Question about an error when running parallax

Hello.

I am currently testing in a CPU-only environment.

I have verified that both horovod and tensorflow run, but when I run parallax,

(screenshot of the error omitted)

the error shown above occurs and execution fails.

The TA's example code produces the same error, so I suspect it is a problem with my environment setup and wanted to ask about it.

final project report, code, survey paper reminder

Thank you all for your hard work this semester.

If your team has not yet submitted the final project report and code (repo URL?), please submit them to me by email.

Any student who has not yet submitted the survey paper, please submit it as well.

HW2 deadline changed: the 5th -> the 9th

Many students requested a change to the HW2 deadline in class today, so the deadline is moved from 6 PM on the 5th to 6 PM on the 9th.

Regarding the HW2 deadline

The HW2 deadline is 6 PM on the 5th.

Since some students requested an extension, submissions will additionally be accepted from 6 PM on the 5th until 6 PM on the 9th. A late penalty applies in that case.

No homework submissions will be accepted after 6 PM on the 9th.

Grade posted

Grades have been posted.
Thank you all for your hard work this semester.

Question about the dataset score

I have a question about the dataset score (2/2).

The problem says to find and process two datasets.

Can we get full credit if we read two .csv files from a single dataset, process them as two tables, and join them?

Or do we have to find two different datasets that share a common key?

[HW1] Submission Repository Name Convention

The spec says https://github.com/YOUR_GITHUB_ID/bd17f-YOUR_NAME.
Is bd17f the correct prefix?
And should the name be the same as the English name registered in etl?

Problem with BeamSQL on Nemo

In the homework that uses beam sql,
the results come out correctly on spark, but on nemo the job terminates with the error below.

Code that does not use beam sql runs without any problem.

Here is the project repo: tantara/bd18f-taekmin-kim

 WARN 11-01 21:17:36,768 Pipeline:585 [main] - The following transforms do not have stable unique names: HomeworkBaseball.ParseCSVLinesAsRow, TextIO.Read, SqlTransform
 INFO 11-01 21:17:37,303 JobLauncher:189 [main] - Waiting for the driver to be ready
 INFO 11-01 21:17:37,303 JobLauncher:197 [main] - Launching DAG...
 INFO 11-01 21:17:37,453 JobLauncher:210 [main] - Waiting for the DAG to finish execution
Nov 01, 2018 9:17:45 PM org.apache.reef.runtime.common.launch.REEFUncaughtExceptionHandler uncaughtException
SEVERE: Thread RuntimeMaster thread threw an uncaught exception.
org.apache.nemo.common.exception.UnrecoverableFailureException: java.lang.Exception: The plan failed on Stage7-0-0 in Executor1
	at org.apache.nemo.runtime.master.scheduler.BatchScheduler.onTaskStateReportFromExecutor(BatchScheduler.java:195)
	at org.apache.nemo.runtime.master.RuntimeMaster.handleControlMessage(RuntimeMaster.java:370)
	at org.apache.nemo.runtime.master.RuntimeMaster.access$100(RuntimeMaster.java:76)
	at org.apache.nemo.runtime.master.RuntimeMaster$MasterControlMessageReceiver.lambda$onMessage$45(RuntimeMaster.java:331)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.Exception: The plan failed on Stage7-0-0 in Executor1
	... 7 more

REEF InjectionException Error

A Beam program that was verified to work correctly on Spark fails when run on Nemo: REEF throws an InjectionException (Could not invoke constructor: new NettyMessagingTransport).

I think a port conflict is occurring with a previously launched Nemo instance that did not terminate cleanly.

How can I handle this case? The error message is attached below.

java.lang.IllegalStateException: TcpPortProvider could not find a free port.
...
Caused by: org.apache.reef.wake.remote.transport.exception.TransportRuntimeException: Cannot bind to port 0
...
Caused by: java.lang.IllegalStateException: TcpPortProvider could not find a free port.
	at org.apache.reef.wake.remote.transport.netty.NettyMessagingTransport.<init>(NettyMessagingTransport.java:182)

P.S. I checked again a moment ago and it seems the port was released after some time. It is working normally again now. Thank you.

Error generating the Parallax target during the Bazel build

On Ubuntu 16.04 (2 cores, 8 GB RAM, without a GPU),
I used Bazel to successfully build TensorFlow v1.11 and Horovod 0.11.2 (OpenMPI 3.0.0) for the Parallax cpu_enable branch and added them to pip.

While running the Install Parallax script at the end of installation.md, the Parallax pip package is created, but the target file is not generated during the build, so I am asking for help.

Following #22, I used Bazel 0.18.1.

> ./parallax # ~/parallax/parallax
> bazel build //parallax/util:build_pip_package
> bazel-bin/parallax/util/build_pip_package ./target
> pip install ./target/parallax-*.whl

(screenshot: parallax_pip_package_build)

Question about the survey paper

Hello.

I am writing the survey paper and would like to know whether there is a required format (including font size and line spacing) for the submission.

Also, is it okay to write it in Korean?

Thank you.

Compile error when running ./run_spark

I am currently using the WSL environment provided on Windows.

I am trying out the skeleton code, and when I run run_spark,

[ERROR] You must specify a valid lifecycle phase or a goal in the format : or :[:]:. Available lifecycle phases are: validate, initialize, generate-sources, process-sources, generate-resources, process-resources, compile, process-classes, generate-test-sources, process-test-sources, generate-test-resources, process-test-resources, test-compile, process-test-classes, test, prepare-package, package, pre-integration-test, integration-test, post-integration-test, verify, install, deploy, pre-clean, clean, post-clean, pre-site, site, post-site, site-deploy.

the error above occurs and it fails to run.

The data download script also fails:

Saving to: ‘iso_8859-1.txt%0D.1’

iso_8859-1.txt%0D.1 100%[=================================================>] 504 --.-KB/s in 0s

2018-10-25 20:41:51 (30.0 MB/s) - ‘iso_8859-1.txt%0D.1’ saved [504/504]

mv: cannot stat 'iso_8859-1.txt': No such file or directory

It errors out like this, so I am wondering whether WSL is simply not a supported environment.

If it is not, I will switch to a Docker-based environment.

Skeleton code nemo output

In the word count example,
when run with run_spark.sh the word counts seem to be computed correctly,
but when run with run_nemo.sh every word's count is fixed at 1.

The vocabulary size is the same; is there something I need to modify to get the same result on nemo as on spark?

Question about submission

We do not need to submit the project separately, right?

I believe the professor said it is enough to just add you as a collaborator.

Question about System.out when running on Nemo

When I run my BeamSQL program on Spark, System.out prints normally. When I run the same code on Nemo, nothing is printed, but ./nemo_output is still generated correctly.

There are also the errors below; could they be related? Could it be that Maven is not resolving all of nemo's dependencies?

SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation

P.S. I did not use a Logger (slf4j); I used a plain System.out.println().

Question about the HW2 dataset

Since this course is my first exposure to deep learning,

I am having some difficulty choosing a dataset that suits a distributed setting.

Could you recommend a dataset?

[HW1 Notice] Caution when adding external library dependencies to pom.xml

  • The UDF must call at least one external Java library

For the part of the HW spec quoted above, you need to add a dependency to pom.xml.

Do not add the dependency under the spark-runner or nemo-runner profile.
Instead, add it in the section below, which is shared by both spark and nemo.

    <dependencies>
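        <!-- These shared dependencies apply to both the spark-runner and nemo-runner profiles; add the external library your UDF uses here, not under a profile. -->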
        <dependency>
            <groupId>org.apache.beam</groupId>
            <artifactId>beam-sdks-java-core</artifactId>
            <version>${beam.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.beam</groupId>
            <artifactId>beam-sdks-java-extensions-sql</artifactId>
            <version>${beam.version}</version>
        </dependency>
    </dependencies>

Also, spark and nemo both have a very large and complex set of external dependencies. Using libraries with few dependencies of their own, such as those recommended in the HW spec (https://en.wikipedia.org/wiki/List_of_numerical_libraries#Java), will help you avoid conflicts.

Conflict between node-js yarn and hadoop yarn

This probably does not apply to many people, but if node-js yarn is already installed in your environment, you have to install hadoop yarn and link it before run_nemo.sh will run correctly.

TensorFlow error when running Parallax

Hello,
while working on the assignment I ran into a problem that occurs only when running with Parallax, so I would like to ask about it.

(screenshots of the training code omitted)

When training with the code shown above, the following problem occurs only with Parallax.

(128, 784)
Traceback (most recent call last):
File "/hw2/code/run_parallax.py", line 71, in
cost = autoencoder.partial_fit(sess, batch)
File "/hw2/code/autoencoder/autoencoder_models/Autoencoder.py", line 65, in partial_fit
cost, opt = sess.run((self.cost, self.optimizer), feed_dict={self.x: X})
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/monitored_session.py", line 671, in run
run_metadata=run_metadata)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/monitored_session.py", line 1148, in run
run_metadata=run_metadata)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/monitored_session.py", line 1239, in run
raise six.reraise(*original_exc_info)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/monitored_session.py", line 1224, in run
return self._sess.run(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/monitored_session.py", line 1296, in run
run_metadata=run_metadata)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/monitored_session.py", line 1076, in run
return self._sess.run(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/parallax/core/python/common/session_context.py", line 40, in _parallax_run
return self._run_internal(fetches, feed_dict)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 887, in run
run_metadata_ptr)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1086, in _run
str(subfeed_t.get_shape())))
ValueError: Cannot feed value of shape (784,) for Tensor u'Placeholder:0', which has shape '(?, 784)'

Primary job terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.


mpirun detected that one or more processes exited with non-zero status, thus causing
the job to be terminated. The first process to do so was:

Process name: [[59384,1],0]
Exit code: 1
--------------------------------------------------------------------------

It seems that the shape of the tensor passed to session.run is the problem,
but the other run scripts work fine with the same approach, and I verified that the shape before feeding matches the shape of the corresponding tensor.

https://github.com/snuspl/parallax/blob/cpu_enable/parallax/parallax/core/python/common/session_context.py#L40

I would like to know whether the feed data is transformed anywhere inside parallax after that line.
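
For context, and independent of whether this is the root cause of the Parallax issue above: the ValueError in the traceback is exactly what plain TensorFlow raises when a rank-1 array of shape (784,) is fed to a placeholder declared as (?, 784). A minimal, Parallax-free sketch (variable names are illustrative) that reproduces the error and shows the usual fix:

    # Sketch: reproducing "Cannot feed value of shape (784,)" and the usual fix (TF 1.x).
    import numpy as np
    import tensorflow as tf

    x = tf.placeholder(tf.float32, shape=(None, 784), name="x")
    out = tf.reduce_sum(x)

    single_example = np.random.rand(784).astype(np.float32)  # shape (784,)

    with tf.Session() as sess:
        # sess.run(out, feed_dict={x: single_example})   # raises the ValueError above
        batch = single_example.reshape(-1, 784)           # shape (1, 784)
        print(sess.run(out, feed_dict={x: batch}))        # OK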

Error occurring in the Nemo environment

Hello.

To use an external library in the UDF, as the assignment spec requires, I decided to use

org.apache.commons.text.CaseUtils

and have implemented it with this library.

It works without any problem on Spark, but when run on Nemo,

it hangs while running on REEF. Could you help with this?

I attach part of the error output.

Oct 30, 2018 1:00:11 AM org.apache.reef.runtime.common.client.RuntimeErrorProtoHandler onNext
WARNING: socket://172.30.1.57:59819 Runtime Error: Thread main threw an uncaught exception.
Oct 30, 2018 1:00:11 AM org.apache.reef.client.DriverLauncher$RuntimeErrorHandler onNext
SEVERE: Received a resource manager error
Oct 30, 2018 1:00:11 AM org.apache.reef.wake.remote.DefaultErrorHandler onNext
SEVERE: No error handler in RemoteManager
java.lang.RuntimeException: Trying to remove a RunningJob that is unknown: BD_HW_ONE
at org.apache.reef.runtime.common.client.RunningJobsImpl.remove(RunningJobsImpl.java:124)
at org.apache.reef.runtime.common.client.RunningJobsImpl.onRuntimeErrorMessage(RunningJobsImpl.java:95)
at org.apache.reef.runtime.common.client.RuntimeErrorProtoHandler.onNext(RuntimeErrorProtoHandler.java:49)
at org.apache.reef.runtime.common.client.RuntimeErrorProtoHandler.onNext(RuntimeErrorProtoHandler.java:33)
at org.apache.reef.wake.remote.impl.HandlerContainer.onNext(HandlerContainer.java:223)
at org.apache.reef.wake.remote.impl.HandlerContainer.onNext(HandlerContainer.java:39)
at org.apache.reef.wake.remote.impl.OrderedPullEventHandler.onNext(OrderedRemoteReceiverStage.java:160)
at org.apache.reef.wake.remote.impl.OrderedPullEventHandler.onNext(OrderedRemoteReceiverStage.java:141)
at org.apache.reef.wake.impl.ThreadPoolStage$1.run(ThreadPoolStage.java:182)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)

survey paper / final report format

The survey paper and final report may be written in either Korean or English.

The survey paper should be 2-column and around 5 pages.
The final report should be 2-column, with as many pages as you need.

Question about installing Parallax

Hello.

Since I cannot set up a system with GPU resources in my current environment,

I am setting up a CPU-only environment (using AWS).

I am following the installation guide, but I am stuck at the tensorflow installation for parallax.

If I am not using a GPU, I can just skip the NVIDIA driver, CUDA, and NCCL and install the rest, right?

I used the cpu_enable branch of parallax, and attempted the bazel build with

bazel build --config=opt //tensorflow/tools/pip_package:build_pip_package

(screenshot of the build error omitted)

The build fails with the error shown above.

Could you please help?

Question about the survey paper submission time and method

Hello.
I am writing the survey paper now and have a question about how to submit it.
As with the previous assignment, do we upload it to GitHub by 6 PM and add you as a collaborator?
Thank you.

Question about testing parallax on a single machine with a single GPU

Hello.

I am trying to test parallax in a single-machine, single-GPU environment.
I have confirmed that it runs with tf and horovod,
but with parallax I get the following warning:
'WARNING: One or more tensors were submitted to be reduced, gathered or broadcasted by subset of ranks and are waiting for remainder of ranks for more than 60 seconds. This may indicate that different ranks are trying to submit different tensors or that only subset of ranks is submitting tensors, which will cause deadlock. Stalled ops: HorovodBroadcast_w1_0 [ready ranks: 0]'
Is it not possible to test parallax on a single GPU?

Thank you.
