Grigori Fursin

| | |
|---|---|
| Known for | MILEPOST GCC, cTuning Foundation, Collective Knowledge framework, Collective Mind, Artifact Evaluation at ACM and IEEE conferences |
| Fields | Computer engineering, machine learning |
| Thesis | Iterative Compilation and Performance Prediction for Numerical Applications (2004) |
| Website | fursin |
Grigori Fursin is a British[2] computer scientist, president of the non-profit cTuning Foundation, a founding member of MLCommons,[3] and co-chair of the MLCommons Task Force on Automation and Reproducibility.[4] His research group created MILEPOST GCC, an open-source machine-learning-based self-optimizing compiler considered the first of its kind.[5] At the end of the MILEPOST project he established the cTuning Foundation to crowdsource program optimisation and machine learning across diverse devices provided by volunteers. The foundation also developed the Collective Knowledge framework and Collective Mind[6] to support open research. Since 2015 Fursin has led Artifact Evaluation at several ACM and IEEE computer systems conferences. He is also a founding member of the ACM Task Force on Data, Software, and Reproducibility in Publication.[7][8][9]
Education
Fursin completed his PhD in computer science at the University of Edinburgh in 2005. While in Edinburgh, he worked on the foundations of practical program autotuning and performance prediction.[10]
Notable projects
- Collective Mind – collection of portable, extensible and ready-to-use automation recipes with a human-friendly interface, helping the community compose, benchmark and optimize complex AI, ML and other applications and systems across diverse and continuously changing models, data sets, software and hardware.[11][6][12][13]
- Collective Knowledge – open-source framework to help researchers and practitioners organize their software projects as a database of reusable components and portable workflows with common APIs based on FAIR principles,[14] and quickly prototype, crowdsource and reproduce research experiments.
- MILEPOST GCC – open-source technology to build machine learning based compilers.
- Interactive Compilation Interface – plugin framework exposing internal features and optimisation decisions of compilers for external autotuning and learning.
- cTuning Foundation – non-profit research organisation developing open-source tools and a common methodology for collaborative and reproducible experimentation.
- Artifact Evaluation – validation of experimental results from published papers at computer systems and machine learning conferences.[15][16][17]
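The core idea behind MILEPOST-style machine-learning-based compilation is to characterize a program by static features and let a model trained on past autotuning runs predict which optimization flags are likely to perform well. The sketch below is purely illustrative (not MILEPOST's actual code): the feature vectors, feature choices and flag combinations are hypothetical, and a simple nearest-neighbour lookup stands in for the real trained models.

```python
# Illustrative sketch of ML-based compiler flag selection: match a new
# program's static features to the most similar previously tuned program
# and reuse the flag combination that worked best there.
import math

# Hypothetical training data from prior autotuning runs:
# (feature vector: [num_loops, avg_block_size, num_branches], best flags)
TRAINED = [
    ([12.0, 8.5, 40.0], ["-O3", "-funroll-loops"]),
    ([2.0, 30.0, 5.0],  ["-O2", "-fomit-frame-pointer"]),
    ([25.0, 4.0, 90.0], ["-O3", "-ftree-vectorize"]),
]

def predict_flags(features):
    """1-nearest-neighbour prediction of promising optimization flags."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    _, flags = min(TRAINED, key=lambda entry: dist(entry[0], features))
    return flags

print(predict_flags([11.0, 9.0, 38.0]))  # nearest to the first sample
```

In practice such systems replace the nearest-neighbour lookup with models trained on large crowdsourced datasets, which is exactly what the cTuning infrastructure was built to collect.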
References
- ^ HiPEAC info 50 (page 8), April 2017, archived from the original on 26 June 2024, retrieved 26 December 2024
- ^ Companies House profile, June 2015, archived from the original on 18 October 2018, retrieved 18 October 2018
- ^ MLCommons press release, December 2020, archived from the original on 24 May 2024, retrieved 26 December 2024
- ^ MLCommons Task Force on Automation and Reproducibility, June 2022, archived from the original on 31 January 2024, retrieved 26 December 2024
- ^ World's First Intelligent, Open Source Compiler Provides Automated Advice on Software Code Optimization, IBM press release, June 2009
- ^ a b Fursin, Grigori (June 2024). "Enabling more efficient and cost-effective AI/ML systems with Collective Mind, virtualized MLOps, MLPerf, Collective Knowledge Playground and reproducible optimization tournaments". arXiv:2406.16791 [cs.LG].
- ^ "The ACM Task Force on Data, Software, and Reproducibility in Publication". Archived from the original on 6 December 2017. Retrieved 5 December 2017.
- ^ Fursin, Grigori; Bruce Childers; Alex K. Jones; Daniel Mosse (June 2014). TRUST'14. Proceedings of the 1st ACM SIGPLAN Workshop on Reproducible Research Methodologies and New Publication Models in Computer Engineering at PLDI'14. doi:10.1145/2618137. Archived from the original on 25 December 2022. Retrieved 26 December 2024.
- ^ "ACM TechTalk "Reproducing 150 Research Papers and Testing Them in the Real World: Challenges and Solutions with Grigori Fursin"". Archived from the original on 30 March 2021. Retrieved 11 February 2021.
- ^ Grigori Fursin (July 2004). "PhD thesis". Archived from the original on 23 September 2020. Retrieved 21 May 2017.
- ^ Fursin, Grigori (June 2023). Toward a common language to facilitate reproducible research and technology transfer: challenges and solutions. keynote at the 1st ACM Conference on Reproducibility and Replicability. doi:10.5281/zenodo.8105339. Archived from the original on 10 February 2024. Retrieved 26 December 2024.
- ^ Online catalog of automation recipes developed by MLCommons, archived from the original on 10 February 2024, retrieved 26 December 2024
- ^ HPCWire: MLPerf Releases Latest Inference Results and New Storage Benchmark, September 2023, archived from the original on 21 December 2023, retrieved 26 December 2024
- ^ Fursin, Grigori (October 2020). Collective Knowledge: organizing research projects as a database of reusable components and portable workflows with common interfaces. Philosophical Transactions of the Royal Society. arXiv:2011.01149. doi:10.1098/rsta.2020.0211. Retrieved 22 October 2020.
- ^ Fursin, Grigori; Bruce Childers; Alex K. Jones; Daniel Mosse (June 2014). TRUST'14. Proceedings of the 1st ACM SIGPLAN Workshop on Reproducible Research Methodologies and New Publication Models in Computer Engineering at PLDI'14. doi:10.1145/2618137.
- ^ Fursin, Grigori; Christophe Dubach (June 2014). Community-driven reviewing and validation of publications. Proceedings of TRUST'14 at PLDI'14. arXiv:1406.4020. doi:10.1145/2618137.2618142.
- ^ Childers, Bruce R; Grigori Fursin; Shriram Krishnamurthi; Andreas Zeller (March 2016). Artifact evaluation for publications. Dagstuhl Perspectives Workshop 15452. doi:10.4230/DagRep.5.11.29.