
Common Machine Learning Papers

This page records some common machine learning papers.

Note: this list was compiled mainly from survey articles, the introductions of papers, and searches with search engines and citation-graph tools. It focuses on early, highly cited papers, ordered by year; papers from the same year appear in no particular order. The list is almost certainly incomplete and has omissions; it is for reference only and may be updated from time to time. For more papers, or the latest ones, please search on your own.

I. Papers

  1. 1943 - Warren S. McCulloch et al. - A logical calculus of the ideas immanent in nervous activity https://link.springer.com/article/10.1007/BF02478259
  2. 1958 - Rosenblatt - The perceptron: A probabilistic model for information storage and organization in the brain https://psycnet.apa.org/doiLanding?doi=10.1037%2Fh0042519
  3. 1982 - J. J. Hopfield - Neural networks and physical systems with emergent collective computational abilities https://www.pnas.org/doi/10.1073/pnas.79.8.2554
  4. 1986 - Rumelhart et al. - Learning representations by back-propagating errors https://www.nature.com/articles/323533a0
  5. 1989 - Hornik et al. - Multilayer feedforward networks are universal approximators https://www.sciencedirect.com/science/article/abs/pii/0893608089900208
  6. 1990 - Elman - Finding Structure in Time https://onlinelibrary.wiley.com/doi/10.1207/s15516709cog1402_1
  7. 1997 - Sepp Hochreiter et al. - Long Short-Term Memory https://ieeexplore.ieee.org/abstract/document/6795963
  8. 1998 - Lecun et al. - Gradient-based learning applied to document recognition https://ieeexplore.ieee.org/document/726791
  9. 2009 - Deng et al. - ImageNet: A large-scale hierarchical image database https://ieeexplore.ieee.org/document/5206848
  10. 2009 - Krizhevsky - Learning Multiple Layers of Features from Tiny Images https://www.cs.utoronto.ca/~kriz/learning-features-2009-TR.pdf
  11. 2010 - Nair and Hinton - Rectified linear units improve restricted Boltzmann machines https://dl.acm.org/doi/10.5555/3104322.3104425
  12. 2013 - Mikolov et al. - Efficient Estimation of Word Representations in Vector Space https://arxiv.org/abs/1301.3781
  13. 2013 - Mikolov et al. - Distributed Representations of Words and Phrases and their Compositionality https://arxiv.org/abs/1310.4546
  14. 2014 - Lin et al. - Microsoft COCO: Common Objects in Context https://link.springer.com/chapter/10.1007/978-3-319-10602-1_48
  15. 2014 - Girshick et al. - Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation https://ieeexplore.ieee.org/document/6909475
  16. 2014 - Sutskever et al. - Sequence to Sequence Learning with Neural Networks https://dl.acm.org/doi/10.5555/2969033.2969173
  17. 2014 - Srivastava et al. - Dropout: a simple way to prevent neural networks from overfitting https://dl.acm.org/doi/10.5555/2627435.2670313
  18. 2014 - Goodfellow et al. - Generative Adversarial Networks https://arxiv.org/abs/1406.2661
  19. 2014 - Cho et al. - Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation https://arxiv.org/abs/1406.1078v3
  20. 2014 - Pennington et al. - GloVe: Global Vectors for Word Representation https://aclanthology.org/D14-1162/
  21. 2015 - Simonyan and Zisserman - Very Deep Convolutional Networks for Large-Scale Image Recognition https://arxiv.org/abs/1409.1556
  22. 2015 - Ronneberger et al. - U-Net: Convolutional Networks for Biomedical Image Segmentation https://link.springer.com/chapter/10.1007/978-3-319-24574-4_28
  23. 2015 - Szegedy et al. - Going deeper with convolutions https://ieeexplore.ieee.org/document/7298594
  24. 2015 - Ioffe and Szegedy - Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift https://proceedings.mlr.press/v37/ioffe15.html
  25. 2015 - Girshick - Fast R-CNN https://ieeexplore.ieee.org/document/7410526
  26. 2015 - He et al. - Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification https://ieeexplore.ieee.org/document/7410480
  27. 2015 - Russakovsky et al. - ImageNet Large Scale Visual Recognition Challenge https://link.springer.com/article/10.1007/s11263-015-0816-y
  28. 2016 - He et al. - Deep Residual Learning for Image Recognition https://ieeexplore.ieee.org/document/7780459
  29. 2016 - Szegedy et al. - Rethinking the Inception Architecture for Computer Vision https://ieeexplore.ieee.org/document/7780677
  30. 2016 - Bahdanau et al. - Neural Machine Translation by Jointly Learning to Align and Translate https://arxiv.org/abs/1409.0473
  31. 2017 - Shelhamer et al. - Fully Convolutional Networks for Semantic Segmentation https://ieeexplore.ieee.org/document/7478072
  32. 2017 - Krizhevsky et al. - ImageNet classification with deep convolutional neural networks https://dl.acm.org/doi/10.1145/3065386
  33. 2017 - Ren et al. - Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks https://ieeexplore.ieee.org/abstract/document/7485869
  34. 2017 - Huang et al. - Densely Connected Convolutional Networks https://ieeexplore.ieee.org/document/8099726
  35. 2017 - He et al. - Mask R-CNN https://ieeexplore.ieee.org/document/8237584
  36. 2017 - Lin et al. - Feature Pyramid Networks for Object Detection https://ieeexplore.ieee.org/document/8099589
  37. 2017 - Kingma and Ba - Adam: A Method for Stochastic Optimization https://arxiv.org/abs/1412.6980
  38. 2017 - Vaswani et al. - Attention Is All You Need https://arxiv.org/abs/1706.03762
  39. 2018 - Radford et al. - Improving Language Understanding by Generative Pre-Training https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf 
  40. 2018 - Peters et al. - Deep Contextualized Word Representations https://aclanthology.org/N18-1202/
  41. 2019 - Devlin et al. - BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding https://aclanthology.org/N19-1423/
  42. 2019 - Liu et al. - RoBERTa: A Robustly Optimized BERT Pretraining Approach https://arxiv.org/abs/1907.11692
  43. 2019 - Radford et al. - Language Models are Unsupervised Multitask Learners https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf
  44. 2020 - Brown et al. - Language Models are Few-Shot Learners https://arxiv.org/abs/2005.14165
  45. 2021 - Dosovitskiy et al. - An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale https://arxiv.org/abs/2010.11929
  46. 2021 - Radford et al. - Learning Transferable Visual Models From Natural Language Supervision https://arxiv.org/abs/2103.00020
  47. 2023 - Raffel et al. - Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer https://arxiv.org/abs/1910.10683
  48. ……

II. Surveys

  1. 2015 - LeCun et al. - Deep learning https://www.nature.com/articles/nature14539
  2. 2015 - Schmidhuber - Deep learning in neural networks: An overview https://www.sciencedirect.com/science/article/pii/S0893608014002135
  3. 2021 - Alzubaidi et al. - Review of deep learning: concepts, CNN architectures, challenges, applications, future directions https://journalofbigdata.springeropen.com/articles/10.1186/s40537-021-00444-8
  4. 2023 - Zhao et al. - A Survey of Large Language Models https://arxiv.org/abs/2303.18223
  5. ……

[Note: this site mainly shares some personal notes and code, and the content may be revised from time to time. So that the latest version is always the one shown across the web, please do not repost the articles here without permission. When citing, please credit the source: https://www.guanjihuan.com]
