
📰 Must-read papers and blogs on LLM-based Long Context Modeling 🔥

License: MIT License

awsome-list large-language-models long-context-modeling papers survey length-extrapolation compress rag llm benchmark


Large Language Model Based Long Context Modeling Papers and Blogs


This repo includes papers and blogs about Efficient Transformers, Length Extrapolation, Long Term Memory, Retrieval-Augmented Generation (RAG), and Evaluation for Long Context Modeling.

🔥 Must-read papers for LLM-based Long Context Modeling.

Thanks to all the great contributors on GitHub! 🔥⚡🔥

Contents

📢 News


📜 Papers

Click on a paper title to jump directly to its PDF link.

1. Survey Papers

  1. Efficient Transformers: A Survey. Yi Tay, Mostafa Dehghani, Dara Bahri, Donald Metzler. Arxiv 2022.

  2. A Survey on Long Text Modeling with Transformers. Zican Dong, Tianyi Tang, Lunyi Li, Wayne Xin Zhao. Arxiv 2023.

  3. Neural Natural Language Processing for Long Texts: A Survey of the State-of-the-Art. Dimitrios Tsirmpas, Ioannis Gkionis, Ioannis Mademlis, Georgios Papadopoulos. Arxiv 2023.

  4. Advancing Transformer Architecture in Long-Context Large Language Models: A Comprehensive Survey. Yunpeng Huang, Jingwei Xu, Zixu Jiang, Junyu Lai, Zenan Li, Yuan Yao, Taolue Chen, Lijuan Yang, Zhou Xin, Xiaoxing Ma. Arxiv 2023.

        GitHub Repo stars

  1. Length Extrapolation of Transformers: A Survey from the Perspective of Position Encoding. Liang Zhao, Xiaocheng Feng, Xiachong Feng, Bing Qin, Ting Liu. Arxiv 2024.

  2. The What, Why, and How of Context Length Extension Techniques in Large Language Models -- A Detailed Survey. Saurav Pawar, S.M Towhidul Islam Tonmoy, S M Mehedi Zaman, Vinija Jain, Aman Chadha, Amitava Das. Arxiv 2024.

  3. State Space Model for New-Generation Network Alternative to Transformers: A Survey. Xiao Wang, Shiao Wang, Yuhe Ding, Yuehang Li, Wentao Wu, Yao Rong, Weizhe Kong, Ju Huang, Shihao Li, Haoxiang Yang, Ziwen Wang, Bo Jiang, Chenglong Li, Yaowei Wang, Yonghong Tian, Jin Tang. Arxiv 2024.

        GitHub Repo stars

  1. A Survey on Efficient Inference for Large Language Models. Zixuan Zhou, Xuefei Ning, Ke Hong, Tianyu Fu, Jiaming Xu, Shiyao Li, Yuming Lou, Luning Wang, Zhihang Yuan, Xiuhong Li, Shengen Yan, Guohao Dai, Xiao-Ping Zhang, Yuhan Dong, Yu Wang. Arxiv 2024.

2. Efficient Attention

2.1 Sparse Attention

  1. Generating Long Sequences with Sparse Transformers. Rewon Child, Scott Gray, Alec Radford, Ilya Sutskever. Arxiv 2019.

  2. Blockwise Self-Attention for Long Document Understanding. Jiezhong Qiu, Hao Ma, Omer Levy, Wen-tau Yih, Sinong Wang, Jie Tang. EMNLP 2020.

        GitHub Repo stars

  1. Longformer: The Long-Document Transformer. Iz Beltagy, Matthew E. Peters, Arman Cohan. Arxiv 2020.

        GitHub Repo stars

  1. ETC: Encoding Long and Structured Inputs in Transformers. Joshua Ainslie, Santiago Ontanon, Chris Alberti, Vaclav Cvicek, Zachary Fisher, Philip Pham, Anirudh Ravula, Sumit Sanghai, Qifan Wang, Li Yang. EMNLP 2020.

  2. Big Bird: Transformers for Longer Sequences. Manzil Zaheer, Guru Guruganesh, Kumar Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed. NeurIPS 2020.

        GitHub Repo stars

  1. Reformer: The efficient transformer. Nikita Kitaev, Łukasz Kaiser, Anselm Levskaya. ICLR 2020.

        GitHub Repo stars

  1. Sparse Sinkhorn Attention. Yi Tay, Dara Bahri, Liu Yang, Donald Metzler, Da-Cheng Juan. ICML 2020.

        GitHub Repo stars

  1. Sparse and Continuous Attention Mechanisms. André F. T. Martins, António Farinhas, Marcos Treviso, Vlad Niculae, Pedro M. Q. Aguiar, Mário A. T. Figueiredo. NeurIPS 2020.

  2. Efficient Content-Based Sparse Attention with Routing Transformers. Aurko Roy, Mohammad Saffar, Ashish Vaswani, David Grangier. TACL 2021.

        GitHub Repo stars

  1. LongT5: Efficient text-to-text transformer for long sequences. Mandy Guo, Joshua Ainslie, David Uthus, Santiago Ontanon, Jianmo Ni, Yun-Hsuan Sung, Yinfei Yang. NAACL 2022.

        GitHub Repo stars

  1. Efficient Long-Text Understanding with Short-Text Models. Maor Ivgi, Uri Shaham, Jonathan Berant. TACL 2023.

        GitHub Repo stars

  1. Parallel Context Windows for Large Language Models. Nir Ratner, Yoav Levine, Yonatan Belinkov, Ori Ram, Inbal Magar, Omri Abend, Ehud Karpas, Amnon Shashua, Kevin Leyton-Brown, Yoav Shoham. ACL 2023.

        GitHub Repo stars

  1. Unlimiformer: Long-Range Transformers with Unlimited Length Input. Amanda Bertsch, Uri Alon, Graham Neubig, Matthew R. Gormley. Arxiv 2023.

        GitHub Repo stars

  1. Landmark Attention: Random-Access Infinite Context Length for Transformers. Amirkeivan Mohtashami, Martin Jaggi. Arxiv 2023.

        GitHub Repo stars

  1. LONGNET: Scaling Transformers to 1,000,000,000 Tokens. Jiayu Ding, Shuming Ma, Li Dong, Xingxing Zhang, Shaohan Huang, Wenhui Wang, Nanning Zheng, Furu Wei. Arxiv 2023.

        GitHub Repo stars

  1. Adapting Language Models to Compress Contexts. Alexis Chevalier, Alexander Wettig, Anirudh Ajith, Danqi Chen. Arxiv 2023.

        GitHub Repo stars

  1. Blockwise Parallel Transformer for Long Context Large Models. Hao Liu, Pieter Abbeel. Arxiv 2023.

        GitHub Repo stars

  1. MEGABYTE: Predicting Million-byte Sequences with Multiscale Transformers. Lili Yu, Dániel Simig, Colin Flaherty, Armen Aghajanyan, Luke Zettlemoyer, Mike Lewis. Arxiv 2023.

        GitHub Repo stars

  1. Dynamic Context Pruning for Efficient and Interpretable Autoregressive Transformers. Sotiris Anagnostidis, Dario Pavllo, Luca Biggio, Lorenzo Noci, Aurelien Lucchi, Thomas Hofmann. Arxiv 2023.

  2. Long-range Language Modeling with Self-retrieval. Ohad Rubin, Jonathan Berant. Arxiv 2023.

  3. Max-Margin Token Selection in Attention Mechanism. Davoud Ataee Tarzanagh, Yingcong Li, Xuechen Zhang, Samet Oymak. Arxiv 2023.

  4. Chunk, Align, Select: A Simple Long-sequence Processing Method for Transformers. Jiawen Xie, Pengyu Cheng, Xiao Liang, Yong Dai, Nan Du. Arxiv 2023.

  5. Sparse Token Transformer with Attention Back Tracking. Heejun Lee, Minki Kang, Youngwan Lee, Sung Ju Hwang. ICLR 2023.

  6. Empower Your Model with Longer and Better Context Comprehension. YiFei Gao, Lei Wang, Jun Fang, Longhua Hu, Jun Cheng. Arxiv 2023.

        GitHub Repo stars

  1. Ring Attention with Blockwise Transformers for Near-Infinite Context. Hao Liu, Matei Zaharia, Pieter Abbeel. Arxiv 2023.

  2. Efficient Streaming Language Models with Attention Sinks. Guangxuan Xiao, Yuandong Tian, Beidi Chen, Song Han, Mike Lewis. Arxiv 2023.

        GitHub Repo stars

  1. HyperAttention: Long-context Attention in Near-Linear Time. Insu Han, Rajesh Jayaram, Amin Karbasi, Vahab Mirrokni, David P. Woodruff, Amir Zandieh. Arxiv 2023.

  2. Fovea Transformer: Efficient Long-Context Modeling with Structured Fine-to-Coarse Attention. Ziwei He, Jian Yuan, Le Zhou, Jingwen Leng, Bo Jiang. Arxiv 2023.

        GitHub Repo stars

  1. ChunkAttention: Efficient Self-Attention with Prefix-Aware KV Cache and Two-Phase Partition. Lu Ye, Ze Tao, Yong Huang, Yang Li. Arxiv 2024.

  2. Training-Free Long-Context Scaling of Large Language Models. Chenxin An, Fei Huang, Jun Zhang, Shansan Gong, Xipeng Qiu, Chang Zhou, Lingpeng Kong. Arxiv 2024.

        GitHub Repo stars

  1. LongHeads: Multi-Head Attention is Secretly a Long Context Processor. Yi Lu, Xin Zhou, Wei He, Jun Zhao, Tao Ji, Tao Gui, Qi Zhang, Xuanjing Huang. Arxiv 2024.

  2. Zebra: Extending Context Window with Layerwise Grouped Local-Global Attention. Kaiqiang Song, Xiaoyang Wang, Sangwoo Cho, Xiaoman Pan, Dong Yu. Arxiv 2023.

  3. SnapKV: LLM Knows What You are Looking for Before Generation. Yuhong Li, Yingbing Huang, Bowen Yang, Bharat Venkitesh, Acyr Locatelli, Hanchen Ye, Tianle Cai, Patrick Lewis, Deming Chen. Arxiv 2024.

        GitHub Repo stars

  1. Sequence can Secretly Tell You What to Discard. Jincheng Dai, Zhuowei Huang, Haiyun Jiang, Chen Chen, Deng Cai, Wei Bi, Shuming Shi. Arxiv 2024.
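
Many of the sparse-attention methods above (Longformer, BigBird, LongNet, StreamingLLM) boil down to restricting each query to a local window of recent keys, sometimes plus a handful of global or "sink" tokens. Below is a minimal PyTorch sketch of that banded causal masking pattern, written for illustration rather than taken from any single paper; it still materializes the full score matrix, so it shows the attention pattern, not the memory savings.

```python
import torch
import torch.nn.functional as F

def sliding_window_attention(q, k, v, window=256, n_sink=4):
    """Causal attention where each query sees only the last `window` keys
    plus the first `n_sink` tokens (attention-sink style). Shapes: (B, H, T, D)."""
    T = q.size(-2)
    i = torch.arange(T).unsqueeze(1)                       # query positions
    j = torch.arange(T).unsqueeze(0)                       # key positions
    keep = (j <= i) & (((i - j) < window) | (j < n_sink))  # causal AND (local OR sink)

    scores = q @ k.transpose(-2, -1) / q.size(-1) ** 0.5
    scores = scores.masked_fill(~keep, float("-inf"))
    return F.softmax(scores, dim=-1) @ v

q, k, v = (torch.randn(1, 2, 1024, 64) for _ in range(3))
out = sliding_window_attention(q, k, v, window=128)        # (1, 2, 1024, 64)
```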

2.2 Linear Attention

  1. Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention. Angelos Katharopoulos, Apoorv Vyas, Nikolaos Pappas, François Fleuret. ICML 2020.

        GitHub Repo stars

  1. Learning Fast Algorithms for Linear Transforms Using Butterfly Factorizations. Tri Dao, Albert Gu, Matthew Eichhorn, Atri Rudra, Christopher Ré. Arxiv 2019.

        GitHub Repo stars

  1. Masked language modeling for proteins via linearly scalable long-context transformers. Krzysztof Choromanski, Valerii Likhosherstov, David Dohan, Xingyou Song, Andreea Gane, Tamas Sarlos, Peter Hawkins, Jared Davis, David Belanger, Lucy Colwell, Adrian Weller. Arxiv 2020.

  2. Rethinking attention with performers. Krzysztof Choromanski, Valerii Likhosherstov, David Dohan, Xingyou Song, Andreea Gane, Tamas Sarlos, Peter Hawkins, Jared Davis, Afroz Mohiuddin, Lukasz Kaiser, David Belanger, Lucy Colwell, Adrian Weller. Arxiv 2020.

        GitHub Repo stars

  1. Linformer: Self-attention with linear complexity. Sinong Wang, Belinda Z. Li, Madian Khabsa, Han Fang, Hao Ma. Arxiv 2020.

        GitHub Repo stars

  1. Random Feature Attention. Hao Peng, Nikolaos Pappas, Dani Yogatama, Roy Schwartz, Noah A. Smith, Lingpeng Kong. Arxiv 2021.

        GitHub Repo stars

  1. Luna: Linear unified nested attention. Xuezhe Ma, Xiang Kong, Sinong Wang, Chunting Zhou, Jonathan May, Hao Ma, Luke Zettlemoyer. Arxiv 2021.

        GitHub Repo stars

  1. FNet: Mixing Tokens with Fourier Transforms. James Lee-Thorp, Joshua Ainslie, Ilya Eckstein, Santiago Ontanon. Arxiv 2021.

        GitHub Repo stars

  1. Gated Linear Attention Transformers with Hardware-Efficient Training. Songlin Yang, Bailin Wang, Yikang Shen, Rameswar Panda, Yoon Kim. Arxiv 2023.

        GitHub Repo stars

  1. Latent Attention for Linear Time Transformers. Rares Dolga, Marius Cobzarenco, David Barber. Arxiv 2024.

  2. Simple linear attention language models balance the recall-throughput tradeoff. Simran Arora, Sabri Eyuboglu, Michael Zhang, Aman Timalsina, Silas Alberti, Dylan Zinsley, James Zou, Atri Rudra, Christopher Ré. Arxiv 2024.

        GitHub Repo stars

  1. Linear Attention Sequence Parallelism. Weigao Sun, Zhen Qin, Dong Li, Xuyang Shen, Yu Qiao, Yiran Zhong. Arxiv 2024.

        GitHub Repo stars

  1. Softmax Attention with Constant Cost per Token. Franz A. Heinsen. Arxiv 2024.

        ![GitHub Repo stars](https://img.shields.io/github/stars/glassroom/heinsen_attention)

  1. Megalodon: Efficient LLM Pretraining and Inference with Unlimited Context Length. Xuezhe Ma, Xiaomeng Yang, Wenhan Xiong, Beidi Chen, Lili Yu, Hao Zhang, Jonathan May, Luke Zettlemoyer, Omer Levy, Chunting Zhou. Arxiv 2024.

        ![GitHub Repo stars](https://img.shields.io/github/stars/XuezheMax/megalodon)
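
The linear-attention papers above replace softmax(QKᵀ)V with a feature map φ so the computation can be reordered as φ(Q)(φ(K)ᵀV), which is linear in sequence length. Here is a causal sketch using the elu(x)+1 feature map from "Transformers are RNNs"; it trades memory for clarity by keeping the running sums for every step, whereas real implementations use a recurrent or chunked scan.

```python
import torch
import torch.nn.functional as F

def causal_linear_attention(q, k, v, eps=1e-6):
    """O(T) causal linear attention with phi(x) = elu(x) + 1. Shapes: (B, H, T, D)."""
    phi_q, phi_k = F.elu(q) + 1, F.elu(k) + 1
    # Prefix sums over time of phi(k_t) v_t^T and of phi(k_t).
    kv = torch.cumsum(torch.einsum("bhtd,bhte->bhtde", phi_k, v), dim=2)  # (B, H, T, D, E)
    z = torch.cumsum(phi_k, dim=2)                                        # (B, H, T, D)
    num = torch.einsum("bhtd,bhtde->bhte", phi_q, kv)
    den = torch.einsum("bhtd,bhtd->bht", phi_q, z).unsqueeze(-1) + eps
    return num / den                                                      # (B, H, T, E)

q, k, v = (torch.randn(1, 2, 512, 64) for _ in range(3))
out = causal_linear_attention(q, k, v)
```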

2.3 Hierarchical Attention

  1. Neural Legal Judgment Prediction in English. Ilias Chalkidis, Ion Androutsopoulos, Nikolaos Aletras. ACL 2019.

        GitHub Repo stars

  1. Hierarchical Neural Network Approaches for Long Document Classification. Snehal Khandve, Vedangi Wagh, Apurva Wani, Isha Joshi, Raviraj Joshi. ICML 2022.

  2. Hi-Transformer: Hierarchical Interactive Transformer for Efficient and Effective Long Document Modeling. Chuhan Wu, Fangzhao Wu, Tao Qi, Yongfeng Huang. ACL-IJCNLP 2021.

  3. ERNIE-Sparse: Learning Hierarchical Efficient Transformer Through Regularized Self-Attention. Yang Liu, Jiaxiang Liu, Li Chen, Yuxiang Lu, Shikun Feng, Zhida Feng, Yu Sun, Hao Tian, Hua Wu, Haifeng Wang. Arxiv 2022.

2.4 IO-Aware Attention

  1. Self-attention Does Not Need O(n^2) Memory. Markus N. Rabe, Charles Staats. Arxiv 2021.

  2. Faster Causal Attention Over Large Sequences Through Sparse Flash Attention. Matteo Pagliardini, Daniele Paliotta, Martin Jaggi, François Fleuret. Arxiv 2023.

  3. FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness. Tri Dao, Daniel Y. Fu, Stefano Ermon, Atri Rudra, Christopher Ré. Arxiv 2022.

        GitHub Repo stars

  1. FlashAttention-2: Faster Attention with Better Parallelism and Work Partitioning. Tri Dao. Arxiv 2023.

        GitHub Repo stars

  1. Efficient Memory Management for Large Language Model Serving with PagedAttention. Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph E. Gonzalez, Hao Zhang, Ion Stoica. Arxiv 2023.

        GitHub Repo stars

  1. TransNormerLLM: A Faster and Better Large Language Model with Improved TransNormer. Zhen Qin, Dong Li, Weigao Sun, Weixuan Sun, Xuyang Shen, Xiaodong Han, Yunshen Wei, Baohong Lv, Xiao Luo, Yu Qiao, Yiran Zhong. Arxiv 2023.

        GitHub Repo stars

  1. Lightning Attention-2: A Free Lunch for Handling Unlimited Sequence Lengths in Large Language Models. Zhen Qin, Weigao Sun, Dong Li, Xuyang Shen, Weixuan Sun, Yiran Zhong. Arxiv 2024.

        GitHub Repo stars

  1. ChunkAttention: Efficient Self-Attention with Prefix-Aware KV Cache and Two-Phase Partition. Lu Ye, Ze Tao, Yong Huang, Yang Li. Arxiv 2024.

  2. SnapKV: LLM Knows What You are Looking for Before Generation. Yuhong Li, Yingbing Huang, Bowen Yang, Bharat Venkitesh, Acyr Locatelli, Hanchen Ye, Tianle Cai, Patrick Lewis, Deming Chen. Arxiv 2024.

        GitHub Repo stars

  1. Model Tells You What to Discard: Adaptive KV Cache Compression for LLMs. Suyu Ge, Yunan Zhang, Liyuan Liu, Minjia Zhang, Jiawei Han, Jianfeng Gao. ICLR 2024 Oral.

  2. Keyformer: KV Cache Reduction through Key Tokens Selection for Efficient Generative Inference. Muhammad Adnan, Akhil Arunkumar, Gaurav Jain, Prashant J. Nair, Ilya Soloveychik, Purushotham Kamath. Arxiv 2024.

  3. Efficient LLM Inference with Kcache. Qiaozhi He, Zhihua Wu. Arxiv 2024.
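
FlashAttention and the other IO-aware kernels above avoid materializing the T×T score matrix by streaming over key/value blocks while maintaining a numerically stable running softmax. The sketch below shows just that online-softmax recurrence in plain PyTorch (non-causal, no query tiling, and none of the SRAM-level optimizations that make the real kernels fast).

```python
import torch

def blockwise_attention(q, k, v, block=128):
    """Attention computed one key/value block at a time with a running
    (max, denominator, accumulator) triple, i.e. the online-softmax trick."""
    B, H, T, D = q.shape
    scale = D ** -0.5
    m = torch.full((B, H, T, 1), float("-inf"))  # running max of scores
    l = torch.zeros(B, H, T, 1)                  # running softmax denominator
    acc = torch.zeros_like(q)                    # running weighted sum of values

    for start in range(0, T, block):
        kb, vb = k[..., start:start + block, :], v[..., start:start + block, :]
        s = q @ kb.transpose(-2, -1) * scale     # (B, H, T, block)
        m_new = torch.maximum(m, s.amax(dim=-1, keepdim=True))
        p = torch.exp(s - m_new)
        correction = torch.exp(m - m_new)        # rescale previous block statistics
        l = l * correction + p.sum(dim=-1, keepdim=True)
        acc = acc * correction + p @ vb
        m = m_new
    return acc / l

q, k, v = (torch.randn(1, 2, 1024, 64) for _ in range(3))
ref = torch.softmax(q @ k.transpose(-2, -1) * 64 ** -0.5, dim=-1) @ v
assert torch.allclose(blockwise_attention(q, k, v), ref, atol=1e-4)
```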

3. Recurrent Transformers

  1. Transformer-XL: Attentive language models beyond a fixed-length context. Zihang Dai, Zhilin Yang, Yiming Yang, Jaime Carbonell, Quoc V. Le, Ruslan Salakhutdinov. ACL 2019.

        GitHub Repo stars

  1. Compressive Transformers for Long-Range Sequence Modelling. Jack W. Rae, Anna Potapenko, Siddhant M. Jayakumar, Timothy P. Lillicrap. Arxiv 2019.

        GitHub Repo stars

  1. Memformer: The memory-augmented transformer. Qingyang Wu, Zhenzhong Lan, Kun Qian, Jing Gu, Alborz Geramifard, Zhou Yu. Arxiv 2020.

        GitHub Repo stars

  1. ERNIE-Doc: A Retrospective Long-Document Modeling Transformer. SiYu Ding, Junyuan Shang, Shuohuan Wang, Yu Sun, Hao Tian, Hua Wu, Haifeng Wang. ACL-IJCNLP 2021.

  2. Memorizing Transformers. Yuhuai Wu, Markus N. Rabe, DeLesley Hutchins, Christian Szegedy. Arxiv 2022.

        GitHub Repo stars

  1. Recurrent Attention Networks for Long-text Modeling. Xianming Li, Zongxi Li, Xiaotian Luo, Haoran Xie, Xing Lee, Yingbin Zhao, Fu Lee Wang, Qing Li. ACL 2023.

        GitHub Repo stars

  1. RWKV: Reinventing RNNs for the Transformer Era. Bo Peng, Eric Alcaide, Quentin Anthony, Alon Albalak, Samuel Arcadinho, Huanqi Cao, Xin Cheng, Michael Chung, Matteo Grella, Kranthi Kiran GV, Xuzheng He, Haowen Hou, Przemyslaw Kazienko, Jan Kocon, Jiaming Kong, Bartlomiej Koptyra, Hayden Lau, Krishna Sri Ipsit Mantri, Ferdinand Mom, Atsushi Saito, Xiangru Tang, Bolun Wang, Johan S. Wind, Stanislaw Wozniak, Ruichong Zhang, Zhenyuan Zhang, Qihang Zhao, Peng Zhou, Jian Zhu, Rui-Jie Zhu. Arxiv 2023.

        GitHub Repo stars         GitHub Repo stars

  1. Segmented Recurrent Transformer: An Efficient Sequence-to-Sequence Model. Yinghan Long, Sayeed Shafayet Chowdhury, Kaushik Roy. Arxiv 2023.

  2. Scaling Transformer to 1M tokens and beyond with RMT. Aydar Bulatov, Yuri Kuratov, Mikhail S. Burtsev. Arxiv 2023.

  3. Block-Recurrent Transformers. DeLesley Hutchins, Imanol Schlag, Yuhuai Wu, Ethan Dyer, Behnam Neyshabur. Arxiv 2023.

        GitHub Repo stars

  1. TRAMS: Training-free Memory Selection for Long-range Language Modeling. Haofei Yu, Cunxiang Wang, Yue Zhang, Wei Bi. Arxiv 2023.

        GitHub Repo stars

  1. Griffin: Mixing Gated Linear Recurrences with Local Attention for Efficient Language Models. Soham De, Samuel L. Smith, Anushan Fernando, Aleksandar Botev, George Cristian-Muraru, Albert Gu, Ruba Haroun, Leonard Berrada, Yutian Chen, Srivatsan Srinivasan, Guillaume Desjardins, Arnaud Doucet, David Budden, Yee Whye Teh, Razvan Pascanu, Nando De Freitas, Caglar Gulcehre. Arxiv 2024.

  2. Extensible Embedding: A Flexible Multiplier For LLM's Context Length. Ninglu Shao, Shitao Xiao, Zheng Liu, Peitian Zhang. Arxiv 2024.

        GitHub Repo stars

  1. Eagle and Finch: RWKV with Matrix-Valued States and Dynamic Recurrence. Bo Peng, Daniel Goldstein, Quentin Anthony, Alon Albalak, Eric Alcaide, Stella Biderman, Eugene Cheah, Teddy Ferdinan, Haowen Hou, Przemysław Kazienko, Kranthi Kiran GV, Jan Kocoń, Bartłomiej Koptyra, Satyapriya Krishna, Ronald McClelland Jr., Niklas Muennighoff, Fares Obeid, Atsushi Saito, Guangyu Song, Haoqin Tu, Stanisław Woźniak, Ruichong Zhang, Bingchen Zhao, Qihang Zhao, Peng Zhou, Jian Zhu, Rui-Jie Zhu. Arxiv 2024.

        GitHub Repo stars         GitHub Repo stars         GitHub Repo stars

  1. Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention. Tsendsuren Munkhdalai, Manaal Faruqui, Siddharth Gopal. Arxiv 2024.

  2. RecurrentGemma: Moving Past Transformers for Efficient Open Language Models. Aleksandar Botev, Soham De, Samuel L Smith, Anushan Fernando, George-Cristian Muraru, Ruba Haroun, Leonard Berrada, Razvan Pascanu, Pier Giuseppe Sessa, Robert Dadashi, Léonard Hussenot, Johan Ferret, Sertan Girgin, Olivier Bachem, Alek Andreev, Kathleen Kenealy, Thomas Mesnard, Cassidy Hardin, Surya Bhupatiraju, Shreya Pathak, Laurent Sifre, Morgane Rivière, Mihir Sanjay Kale, Juliette Love, Pouya Tafti, Armand Joulin, Noah Fiedel, Evan Senter, Yutian Chen, Srivatsan Srinivasan, Guillaume Desjardins, David Budden, Arnaud Doucet, Sharad Vikram, Adam Paszke, Trevor Gale, Sebastian Borgeaud, Charlie Chen, Andy Brock, Antonia Paterson, Jenny Brennan, Meg Risdal, Raj Gundluru, Nesh Devanathan, Paul Mooney, Nilay Chauhan, Phil Culliton, Luiz Gustavo Martins, Elisa Bandy, David Huntsperger, Glenn Cameron, Arthur Zucker, Tris Warkentin, Ludovic Peran, Minh Giang, Zoubin Ghahramani, Clément Farabet, Koray Kavukcuoglu, Demis Hassabis, Raia Hadsell, Yee Whye Teh, Nando de Freitas. Arxiv 2024.
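
The recurrent Transformers above process long streams segment by segment while carrying a bounded cache of past hidden states (Transformer-XL, Compressive Transformer, RMT). Here is a toy single-layer sketch of the cache-and-concatenate pattern using torch's scaled_dot_product_attention; it omits positional handling and per-layer memories, so treat it as a shape-level illustration rather than any paper's method.

```python
import torch
import torch.nn.functional as F

def segment_recurrent_pass(x_segments, w_qkv, mem_len=128):
    """Transformer-XL-style recurrence for one attention layer: hidden states from
    earlier segments are cached (with stop-gradient) and prepended as extra keys/values."""
    mem = None                                   # cached states, (B, M, D)
    outputs = []
    for x in x_segments:                         # each x: (B, S, D)
        h = x if mem is None else torch.cat([mem, x], dim=1)
        q = x @ w_qkv[0]                         # queries only for the new segment
        k, v = h @ w_qkv[1], h @ w_qkv[2]
        S, M = x.size(1), h.size(1) - x.size(1)
        i = torch.arange(S).unsqueeze(1)         # query positions within the segment
        j = torch.arange(M + S).unsqueeze(0)     # key positions including memory
        mask = j < (M + i + 1)                   # see all memory + causal in-segment
        outputs.append(F.scaled_dot_product_attention(q, k, v, attn_mask=mask))
        mem = h[:, -mem_len:].detach()           # bounded, gradient-free memory
    return torch.cat(outputs, dim=1)

d = 64
w = [torch.randn(d, d) / d ** 0.5 for _ in range(3)]
segments = [torch.randn(2, 256, d) for _ in range(4)]
y = segment_recurrent_pass(segments, w)          # (2, 1024, 64)
```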

4. State Space Models

  1. Mamba: Linear-Time Sequence Modeling with Selective State Spaces. Albert Gu, Tri Dao. Arxiv 2023.

        GitHub Repo stars

  1. MoE-Mamba: Efficient Selective State Space Models with Mixture of Experts. Maciej Pióro, Kamil Ciebiera, Krystian Król, Jan Ludziejewski, Sebastian Jaszczur. Arxiv 2024.

  2. MambaByte: Token-free Selective State Space Model. Junxiong Wang, Tushaar Gangavarapu, Jing Nathan Yan, Alexander M Rush. Arxiv 2024.

  3. LOCOST: State-Space Models for Long Document Abstractive Summarization. Florian Le Bronnec, Song Duong, Mathieu Ravaut, Alexandre Allauzen, Nancy F. Chen, Vincent Guigue, Alberto Lumbreras, Laure Soulier, Patrick Gallinari. Arxiv 2024.

  4. State Space Models as Foundation Models: A Control Theoretic Overview. Carmen Amo Alonso, Jerome Sieber, Melanie N. Zeilinger. Arxiv 2024.

  5. Jamba: A Hybrid Transformer-Mamba Language Model. Opher Lieber, Barak Lenz, Hofit Bata, Gal Cohen, Jhonathan Osin, Itay Dalmedigos, Erez Safahi, Shaked Meirom, Yonatan Belinkov, Shai Shalev-Shwartz, Omri Abend, Raz Alon, Tomer Asida, Amir Bergman, Roman Glozman, Michael Gokhman, Avashalom Manevich, Nir Ratner, Noam Rozen, Erez Shwartz, Mor Zusman, Yoav Shoham. Arxiv 2024.

  6. Robustifying State-space Models for Long Sequences via Approximate Diagonalization. Annan Yu, Arnur Nigmetov, Dmitriy Morozov, Michael W. Mahoney, N. Benjamin Erichson. ICLR 2024 Spotlight.
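
State-space models swap attention for a linear recurrence over a fixed-size hidden state, so each new token costs O(1) work and the whole sequence O(T). A minimal discrete SSM scan for intuition (single input channel, time-invariant A, B, C); selective models like Mamba additionally make these parameters input-dependent and replace the Python loop with a hardware-aware parallel scan.

```python
import numpy as np

def ssm_scan(x, A, B, C, D=0.0):
    """y_t = C h_t + D x_t with h_t = A h_{t-1} + B x_t (already discretized).
    x: (T,), A: (N, N), B: (N,), C: (N,)."""
    h = np.zeros(A.shape[0])
    ys = []
    for x_t in x:
        h = A @ h + B * x_t          # state update: linear recurrence
        ys.append(C @ h + D * x_t)   # readout
    return np.array(ys)

rng = np.random.default_rng(0)
T, N = 1000, 16
A = np.diag(np.exp(-rng.uniform(0.01, 0.5, N)))   # stable diagonal dynamics
y = ssm_scan(rng.standard_normal(T), A, rng.standard_normal(N), rng.standard_normal(N))
```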

5. Length Extrapolation

  1. RoFormer: Enhanced Transformer with Rotary Position Embedding. Jianlin Su, Yu Lu, Shengfeng Pan, Ahmed Murtadha, Bo Wen, Yunfeng Liu. Arxiv 2021.

        GitHub Repo stars

  1. Train Short, Test Long: Attention with Linear Biases Enables Input Length Extrapolation. Ofir Press, Noah A. Smith, Mike Lewis. ICLR 2022.

        GitHub Repo stars

  1. KERPLE: Kernelized Relative Positional Embedding for Length Extrapolation. Ta-Chung Chi, Ting-Han Fan, Peter J. Ramadge, Alexander I. Rudnicky. Arxiv 2022.

  2. Dissecting Transformer Length Extrapolation via the Lens of Receptive Field Analysis. Ta-Chung Chi, Ting-Han Fan, Alexander I. Rudnicky, Peter J. Ramadge. ACL 2023.

  3. A Length-Extrapolatable Transformer. Yutao Sun, Li Dong, Barun Patra, Shuming Ma, Shaohan Huang, Alon Benhaim, Vishrav Chaudhary, Xia Song, Furu Wei. ACL 2023.

        GitHub Repo stars

  1. Randomized Positional Encodings Boost Length Generalization of Transformers. Anian Ruoss, Grégoire Delétang, Tim Genewein, Jordi Grau-Moya, Róbert Csordás, Mehdi Bennani, Shane Legg, Joel Veness. ACL 2023.

        GitHub Repo stars

  1. The Impact of Positional Encoding on Length Generalization in Transformers. Amirhossein Kazemnejad, Inkit Padhi, Karthikeyan Natesan Ramamurthy, Payel Das, Siva Reddy. Arxiv 2023.

        GitHub Repo stars

  1. Focused Transformer: Contrastive Training for Context Scaling. Szymon Tworkowski, Konrad Staniszewski, Mikołaj Pacek, Yuhuai Wu, Henryk Michalewski, Piotr Miłoś. Arxiv 2023.

        GitHub Repo stars

  1. Extending Context Window of Large Language Models via Positional Interpolation. Shouyuan Chen, Sherman Wong, Liangjian Chen, Yuandong Tian. Arxiv 2023.

  2. Exploring Transformer Extrapolation. Zhen Qin, Yiran Zhong, Hui Deng. Arxiv 2023.

        GitHub Repo stars

  1. LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models. Chi Han, Qifan Wang, Wenhan Xiong, Yu Chen, Heng Ji, Sinong Wang. Arxiv 2023.

        GitHub Repo stars

  1. YaRN: Efficient Context Window Extension of Large Language Models. Bowen Peng, Jeffrey Quesnelle, Honglu Fan, Enrico Shippole. Arxiv 2023.

        GitHub Repo stars

  1. PoSE: Efficient Context Window Extension of LLMs via Positional Skip-wise Training. Dawei Zhu, Nan Yang, Liang Wang, Yifan Song, Wenhao Wu, Furu Wei, Sujian Li. Arxiv 2023.

        GitHub Repo stars

  1. LongLoRA: Efficient Fine-tuning of Long-Context Large Language Models. Yukang Chen, Shengju Qian, Haotian Tang, Xin Lai, Zhijian Liu, Song Han, Jiaya Jia. ICLR 2024 Oral.

        GitHub Repo stars

  1. Scaling Laws of RoPE-based Extrapolation. Xiaoran Liu, Hang Yan, Shuo Zhang, Chenxin An, Xipeng Qiu, Dahua Lin. Arxiv 2023.

  2. Attention Alignment and Flexible Positional Embeddings Improve Transformer Length Extrapolation. Ta-Chung Chi, Ting-Han Fan, Alexander I. Rudnicky. Arxiv 2023.

        GitHub Repo stars

  1. CoCA: Fusing position embedding with Collinear Constrained Attention for fine-tuning free context window extending. Shiyi Zhu, Jing Ye, Wei Jiang, Qi Zhang, Yifan Wu, Jianguo Li. Arxiv 2023.

        GitHub Repo stars

  1. Structured Packing in LLM Training Improves Long Context Utilization. Konrad Staniszewski, Szymon Tworkowski, Sebastian Jaszczur, Henryk Michalewski, Łukasz Kuciński, Piotr Miłoś. Arxiv 2024.

  2. LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning. Hongye Jin, Xiaotian Han, Jingfeng Yang, Zhimeng Jiang, Zirui Liu, Chia-Yuan Chang, Huiyuan Chen, Xia Hu. Arxiv 2024.

  3. Infinite-LLM: Efficient LLM Service for Long Context with DistAttention and Distributed KVCache. Bin Lin, Tao Peng, Chen Zhang, Minmin Sun, Lanbo Li, Hanyu Zhao, Wencong Xiao, Qi Xu, Xiafei Qiu, Shen Li, Zhigang Ji, Yong Li, Wei Lin. Arxiv 2024.

  4. Lightning Attention-2: A Free Lunch for Handling Unlimited Sequence Lengths in Large Language Models. Zhen Qin, Weigao Sun, Dong Li, Xuyang Shen, Weixuan Sun, Yiran Zhong. Arxiv 2024.

        GitHub Repo stars

  1. Extending LLMs' Context Window with 100 Samples. Yikai Zhang, Junlong Li, Pengfei Liu. Arxiv 2024.

        GitHub Repo stars

  1. E^2-LLM: Efficient and Extreme Length Extension of Large Language Models. Jiaheng Liu, Zhiqi Bai, Yuanxing Zhang, Chenchen Zhang, Yu Zhang, Ge Zhang, Jiakai Wang, Haoran Que, Yukang Chen, Wenbo Su, Tiezheng Ge, Jie Fu, Wenhu Chen, Bo Zheng. Arxiv 2024.

  2. With Greater Text Comes Greater Necessity: Inference-Time Training Helps Long Text Generation. Y. Wang, D. Ma, D. Cai. Arxiv 2024.

        GitHub Repo stars

  1. Two Stones Hit One Bird: Bilevel Positional Encoding for Better Length Extrapolation. Zhenyu He, Guhao Feng, Shengjie Luo, Kai Yang, Di He, Jingjing Xu, Zhi Zhang, Hongxia Yang, Liwei Wang. Arxiv 2024.

  2. Infini-gram: Scaling Unbounded n-gram Language Models to a Trillion Tokens. Jiacheng Liu, Sewon Min, Luke Zettlemoyer, Yejin Choi, Hannaneh Hajishirzi. Arxiv 2024.

        GitHub Repo stars

  1. LongRoPE: Extending LLM Context Window Beyond 2 Million Tokens. Yiran Ding, Li Lyna Zhang, Chengruidong Zhang, Yuanyuan Xu, Ning Shang, Jiahang Xu, Fan Yang, Mao Yang. Arxiv 2024.

  2. Data Engineering for Scaling Language Models to 128K Context. Yao Fu, Rameswar Panda, Xinyao Niu, Xiang Yue, Hannaneh Hajishirzi, Yoon Kim, Hao Peng. Arxiv 2024.

        GitHub Repo stars

  1. Transformers Can Achieve Length Generalization But Not Robustly. Yongchao Zhou, Uri Alon, Xinyun Chen, Xuezhi Wang, Rishabh Agarwal, Denny Zhou. Arxiv 2024.

  2. Long-Context Language Modeling with Parallel Context Encoding. Howard Yen, Tianyu Gao, Danqi Chen. Arxiv 2024.

        GitHub Repo stars

  1. CLEX: Continuous Length Extrapolation for Large Language Models. Guanzheng Chen, Xin Li, Zaiqiao Meng, Shangsong Liang, Lidong Bing. Arxiv 2023.

        GitHub Repo stars

  1. Resonance RoPE: Improving Context Length Generalization of Large Language Models. Suyuchen Wang, Ivan Kobyzev, Peng Lu, Mehdi Rezagholizadeh, Bang Liu. Arxiv 2024.

        GitHub Repo stars

  1. Can't Remember Details in Long Documents? You Need Some R&R. Devanshu Agrawal, Shang Gao, Martin Gajek. Arxiv 2024.

        GitHub Repo stars

  1. Found in the Middle: How Language Models Use Long Contexts Better via Plug-and-Play Positional Encoding. Zhenyu Zhang, Runjin Chen, Shiwei Liu, Zhewei Yao, Olatunji Ruwase, Beidi Chen, Xiaoxia Wu, Zhangyang Wang. Arxiv 2024.

        GitHub Repo stars

  1. InfLLM: Unveiling the Intrinsic Capacity of LLMs for Understanding Extremely Long Sequences with Training-Free Memory. Chaojun Xiao, Pengle Zhang, Xu Han, Guangxuan Xiao, Yankai Lin, Zhengyan Zhang, Zhiyuan Liu, Song Han, Maosong Sun. Arxiv 2024.

  2. Naive Bayes-based Context Extension for Large Language Models. Jianlin Su, Murtadha Ahmed, Wenbo, Luo Ao, Mingren Zhu, Yunfeng Liu. Arxiv 2024.

        GitHub Repo stars

  1. Keyformer: KV Cache Reduction through Key Tokens Selection for Efficient Generative Inference. Muhammad Adnan, Akhil Arunkumar, Gaurav Jain, Prashant J. Nair, Ilya Soloveychik, Purushotham Kamath. Arxiv 2024.

  2. In-Context Pretraining: Language Modeling Beyond Document Boundaries. Weijia Shi, Sewon Min, Maria Lomeli, Chunting Zhou, Margaret Li, Xi Victoria Lin, Noah A. Smith, Luke Zettlemoyer, Wen-tau Yih, Mike Lewis. ICLR 2024 Spotlight.

        GitHub Repo stars

  1. Effective Long-Context Scaling of Foundation Models. Wenhan Xiong, Jingyu Liu, Igor Molybog, Hejia Zhang, Prajjwal Bhargava, Rui Hou, Louis Martin, Rashi Rungta, Karthik Abinav Sankararaman, Barlas Oguz, Madian Khabsa, Han Fang, Yashar Mehdad, Sharan Narang, Kshitiz Malik, Angela Fan, Shruti Bhosale, Sergey Edunov, Mike Lewis, Sinong Wang, Hao Ma. Arxiv 2023.

  2. Fewer Truncations Improve Language Modeling. Hantian Ding, Zijian Wang, Giovanni Paolini, Varun Kumar, Anoop Deoras, Dan Roth, Stefano Soatto. Arxiv 2024.

  3. Length Generalization of Causal Transformers without Position Encoding. Jie Wang, Tao Ji, Yuanbin Wu, Hang Yan, Tao Gui, Qi Zhang, Xuanjing Huang, Xiaoling Wang. Arxiv 2024.

        GitHub Repo stars

  1. Extending Llama-3's Context Ten-Fold Overnight. Peitian Zhang, Ninglu Shao, Zheng Liu, Shitao Xiao, Hongjin Qian, Qiwei Ye, Zhicheng Dou. Arxiv 2024.

        GitHub Repo stars
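
Several entries above modify RoPE to reach longer contexts. The simplest recipe is positional interpolation (Chen et al.): rescale position indices by L_train / L_target so that unseen positions fold back into the trained range. Below is a compact sketch of RoPE with such a scale factor; the (even, odd) channel pairing is one common convention and may differ in detail from specific codebases.

```python
import torch

def rope(x, positions, base=10000.0, scale=1.0):
    """Apply rotary position embedding to x of shape (..., T, D), D even.
    `scale` < 1 implements positional interpolation (e.g. L_train / L_target)."""
    D = x.size(-1)
    inv_freq = base ** (-torch.arange(0, D, 2, dtype=torch.float32) / D)  # (D/2,)
    angles = (positions.float() * scale).unsqueeze(-1) * inv_freq         # (T, D/2)
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[..., 0::2], x[..., 1::2]
    out = torch.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin   # rotate each (even, odd) channel pair
    out[..., 1::2] = x1 * sin + x2 * cos
    return out

q = torch.randn(1, 8, 8192, 64)
pos = torch.arange(8192)
q_pi = rope(q, pos, scale=4096 / 8192)    # model trained on 4k, run at 8k
```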

6. Long Term Memory

  1. Unleashing Infinite-Length Input Capacity for Large-scale Language Models with Self-Controlled Memory System. Xinnian Liang, Bing Wang, Hui Huang, Shuangzhi Wu, Peihao Wu, Lu Lu, Zejun Ma, Zhoujun Li. Arxiv 2023.

        GitHub Repo stars

  1. MemoryBank: Enhancing Large Language Models with Long-Term Memory. Wanjun Zhong, Lianghong Guo, Qiqi Gao, He Ye, Yanlin Wang. Arxiv 2023.

        GitHub Repo stars

  1. Improve Long-term Memory Learning Through Rescaling the Error Temporally. Shida Wang, Zhanglu Yan. Arxiv 2023.

  2. Recursively Summarizing Enables Long-Term Dialogue Memory in Large Language Models. Qingyue Wang, Liang Ding, Yanan Cao, Zhiliang Tian, Shi Wang, Dacheng Tao, Li Guo. Arxiv 2023.

  3. Empowering Working Memory for Large Language Model Agents. Jing Guo, Nan Li, Jianchuan Qi, Hang Yang, Ruiqiao Li, Yuzhen Feng, Si Zhang, Ming Xu. Arxiv 2024.

  4. Evolving Large Language Model Assistant with Long-Term Conditional Memory. Ruifeng Yuan, Shichao Sun, Zili Wang, Ziqiang Cao, Wenjie Li. Arxiv 2024.

  5. Commonsense-augmented Memory Construction and Management in Long-term Conversations via Context-aware Persona Refinement. Hana Kim, Kai Tzu-iunn Ong, Seoyeon Kim, Dongha Lee, Jinyoung Yeo. Arxiv 2024.

  6. A Human-Inspired Reading Agent with Gist Memory of Very Long Contexts. Kuang-Huei Lee, Xinyun Chen, Hiroki Furuta, John Canny, Ian Fischer. Arxiv 2024.

  7. Steering Conversational Large Language Models for Long Emotional Support Conversations. Navid Madani, Sougata Saha, Rohini Srihari. Arxiv 2024.

  8. SPAR: Personalized Content-Based Recommendation via Long Engagement Attention. Chiyu Zhang, Yifei Sun, Jun Chen, Jie Lei, Muhammad Abdul-Mageed, Sinong Wang, Rong Jin, Sem Park, Ning Yao, Bo Long. Arxiv 2024.

  9. Compress to Impress: Unleashing the Potential of Compressive Memory in Real-World Long-Term Conversations. Nuo Chen, Hongguang Li, Juhua Huang, Baoyuan Wang, Jia Li. Arxiv 2024.

        GitHub Repo stars

  1. StreamingDialogue: Prolonged Dialogue Learning via Long Context Compression with Minimal Losses. Jia-Nan Li, Quan Tu, Cunli Mao, Zhengtao Yu, Ji-Rong Wen, Rui Yan. Arxiv 2024.

  2. Prompts As Programs: A Structure-Aware Approach to Efficient Compile-Time Prompt Optimization. Tobias Schnabel, Jennifer Neville. Arxiv 2024.

        GitHub Repo stars
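
Most long-term-memory systems above share one loop: write a compressed record of each interaction into a store, then read the most relevant records back into the prompt for the next turn. Here is a deliberately tiny, dependency-free sketch of that write/read interface; the truncation "summarizer" and word-overlap scoring are stand-ins for a real summarization model and embedding retriever, and all names are illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class MemoryBank:
    entries: list = field(default_factory=list)   # list of (summary, full_text)

    def write(self, text, max_summary_words=30):
        summary = " ".join(text.split()[:max_summary_words])   # stand-in summarizer
        self.entries.append((summary, text))

    def read(self, query, k=3):
        q_words = set(query.lower().split())
        def overlap(entry):
            return len(q_words & set(entry[0].lower().split()))
        return [text for _, text in sorted(self.entries, key=overlap, reverse=True)[:k]]

memory = MemoryBank()
memory.write("User said their daughter Mia starts school in September.")
memory.write("User prefers vegetarian recipes and dislikes cilantro.")
context = memory.read("What should I cook for Mia's first school day?")
prompt = "Relevant memories:\n" + "\n".join(context) + "\nUser: ...\nAssistant:"
```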

7. RAG

  1. Walking Down the Memory Maze: Beyond Context Limit through Interactive Reading. Howard Chen, Ramakanth Pasunuru, Jason Weston, Asli Celikyilmaz. Arxiv 2023.

  2. Attendre: Wait To Attend By Retrieval With Evicted Queries in Memory-Based Transformers for Long Context Processing. Zi Yang, Nan Hua. Arxiv 2024.

  3. BGE Landmark Embedding: A Chunking-Free Embedding Method For Retrieval Augmented Long-Context Large Language Models. Kun Luo, Zheng Liu, Shitao Xiao, Kang Liu. Arxiv 2024.

        GitHub Repo stars

  1. Adaptive-RAG: Learning to Adapt Retrieval-Augmented Large Language Models through Question Complexity. Soyeong Jeong, Jinheon Baek, Sukmin Cho, Sung Ju Hwang, Jong C. Park. Arxiv 2024.

        GitHub Repo stars

  1. RQ-RAG: Learning to Refine Queries for Retrieval Augmented Generation. Chi-Min Chan, Chunpu Xu, Ruibin Yuan, Hongyin Luo, Wei Xue, Yike Guo, Jie Fu. Arxiv 2024.

        GitHub Repo stars

  1. Improving Retrieval Augmented Open-Domain Question-Answering with Vectorized Contexts. Zhuo Chen, Xinyu Wang, Yong Jiang, Pengjun Xie, Fei Huang, Kewei Tu. Arxiv 2024.

  2. Superposition Prompting: Improving and Accelerating Retrieval-Augmented Generation. Thomas Merth, Qichen Fu, Mohammad Rastegari, Mahyar Najibi. Arxiv 2024.

  3. Multi-view Content-aware Indexing for Long Document Retrieval. Kuicai Dong, Derrick Goh Xin Deik, Yi Quan Lee, Hao Zhang, Xiangyang Li, Cong Zhang, Yong Liu. Arxiv 2024.

  4. Retrieval Head Mechanistically Explains Long-Context Factuality. Wenhao Wu, Yizhong Wang, Guangxuan Xiao, Hao Peng, Yao Fu. Arxiv 2024.

        GitHub Repo stars
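
To make the RAG vocabulary in this section concrete, here is a skeletal retrieve-then-generate pipeline: chunk the corpus, score chunks against the query, and prepend the top-k to the prompt. Bag-of-words cosine similarity stands in for a dense retriever so the sketch stays self-contained, and generate() is a hypothetical LLM call, not an API from any paper above.

```python
import math
from collections import Counter

def chunk(text, size=200):
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def cosine(a, b):
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[w] * cb[w] for w in ca)
    norm = math.sqrt(sum(v * v for v in ca.values())) * math.sqrt(sum(v * v for v in cb.values()))
    return dot / norm if norm else 0.0

def rag_prompt(corpus, query, k=3):
    chunks = [c for doc in corpus for c in chunk(doc)]
    top = sorted(chunks, key=lambda c: cosine(query, c), reverse=True)[:k]
    return ("Answer using the context below.\n\n" + "\n---\n".join(top)
            + f"\n\nQuestion: {query}\nAnswer:")

# answer = generate(rag_prompt(my_docs, "What year was the dam completed?"))  # hypothetical LLM call
```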

8. Agent

  1. LongAgent: Scaling Language Models to 128k Context through Multi-Agent Collaboration. Jun Zhao, Can Zu, Hao Xu, Yi Lu, Wei He, Yiwen Ding, Tao Gui, Qi Zhang, Xuanjing Huang. Arxiv 2024.

  2. A Real-World WebAgent with Planning, Long Context Understanding, and Program Synthesis. Izzeddin Gur, Hiroki Furuta, Austin V Huang, Mustafa Safdari, Yutaka Matsuo, Douglas Eck, Aleksandra Faust. ICLR 2024 Oral.

  3. PEARL: Prompting Large Language Models to Plan and Execute Actions Over Long Documents. Simeng Sun, Yang Liu, Shuohang Wang, Dan Iter, Chenguang Zhu, Mohit Iyyer. EACL 2024.

        GitHub Repo stars

  1. AMAGO: Scalable In-Context Reinforcement Learning for Adaptive Agents. Jake Grigsby, Linxi Fan, Yuke Zhu. ICLR 2024 Spotlight.

        GitHub Repo stars Static Badge
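
Agent-based approaches such as LongAgent sidestep the context limit by sharding the input across worker prompts and letting a leader reconcile their partial answers. A schematic sketch of that divide-and-aggregate loop; llm() is a hypothetical stub for whatever model or API you plug in, and the prompts are illustrative only.

```python
def llm(prompt: str) -> str:
    raise NotImplementedError("plug in your model or API client here")  # hypothetical stub

def multi_agent_answer(document: str, question: str, window_words: int = 3000) -> str:
    words = document.split()
    chunks = [" ".join(words[i:i + window_words]) for i in range(0, len(words), window_words)]
    # Each worker only ever sees one chunk that fits in the context window.
    partial = [llm(f"Context:\n{c}\n\nQuestion: {question}\n"
                   f"Answer from this context only, or say 'not found'.") for c in chunks]
    # The leader reconciles the (possibly conflicting) partial answers.
    notes = "\n".join(f"Worker {i}: {a}" for i, a in enumerate(partial))
    return llm(f"Question: {question}\nWorker answers:\n{notes}\n"
               f"Give the best-supported final answer.")
```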

9. Compress

  1. Adapting Language Models to Compress Contexts. Alexis Chevalier, Alexander Wettig, Anirudh Ajith, Danqi Chen. Arxiv 2023.

        GitHub Repo stars

  1. Compressing Context to Enhance Inference Efficiency of Large Language Models. Yucheng Li, Bo Dong, Chenghua Lin, Frank Guerin. Arxiv 2023.

        GitHub Repo stars

  1. LLMLingua: Compressing Prompts for Accelerated Inference of Large Language Models. Huiqiang Jiang, Qianhui Wu, Chin-Yew Lin, Yuqing Yang, Lili Qiu. Arxiv 2023.

        GitHub Repo stars

  1. LongLLMLingua: Accelerating and Enhancing LLMs in Long Context Scenarios via Prompt Compression. Huiqiang Jiang, Qianhui Wu, Xufang Luo, Dongsheng Li, Chin-Yew Lin, Yuqing Yang, Lili Qiu. Arxiv 2023.

        GitHub Repo stars

  1. System 2 Attention (is something you might need too). Jason Weston, Sainbayar Sukhbaatar. Arxiv 2023.

  2. DSFormer: Effective Compression of Text-Transformers by Dense-Sparse Weight Factorization. Rahul Chand, Yashoteja Prabhu, Pratyush Kumar. Arxiv 2023.

  3. Soaring from 4K to 400K: Extending LLM's Context with Activation Beacon. Peitian Zhang, Zheng Liu, Shitao Xiao, Ninglu Shao, Qiwei Ye, Zhicheng Dou. Arxiv 2024.

        GitHub Repo stars

  1. Flexibly Scaling Large Language Models Contexts Through Extensible Tokenization. Ninglu Shao, Shitao Xiao, Zheng Liu, Peitian Zhang. Arxiv 2024.

        GitHub Repo stars

  1. Say More with Less: Understanding Prompt Learning Behaviors through Gist Compression. Xinze Li, Zhenghao Liu, Chenyan Xiong, Shi Yu, Yukun Yan, Shuo Wang, Ge Yu. Arxiv 2024.

        GitHub Repo stars

  1. Learning to Compress Prompt in Natural Language Formats. Yu-Neng Chuang, Tianwei Xing, Chia-Yuan Chang, Zirui Liu, Xun Chen, Xia Hu. Arxiv 2024.

  2. Dynamic Memory Compression: Retrofitting LLMs for Accelerated Inference. Piotr Nawrot, Adrian Łańcucki, Marcin Chochowski, David Tarjan, Edoardo M. Ponti. Arxiv 2024.

  3. LLMLingua-2: Data Distillation for Efficient and Faithful Task-Agnostic Prompt Compression. Zhuoshi Pan, Qianhui Wu, Huiqiang Jiang, Menglin Xia, Xufang Luo, Jue Zhang, Qingwei Lin, Victor Rühle, Yuqing Yang, Chin-Yew Lin, H. Vicky Zhao, Lili Qiu, Dongmei Zhang. Arxiv 2024.

        GitHub Repo stars

  1. PCToolkit: A Unified Plug-and-Play Prompt Compression Toolkit of Large Language Models. Jinyi Li, Yihuai Lan, Lei Wang, Hao Wang. Arxiv 2024.

        GitHub Repo stars

  1. Compressed Context Memory for Online Language Model Interaction. Jang-Hyun Kim, Junyoung Yeom, Sangdoo Yun, Hyun Oh Song. ICLR 2024.

        GitHub Repo stars

  1. Compressing Large Language Models by Streamlining the Unimportant Layer. Xiaodong Chen, Yuxuan Hu, Jing Zhang. Arxiv 2024.

  2. PROMPT-SAW: Leveraging Relation-Aware Graphs for Textual Prompt Compression. Muhammad Asif Ali, Zhengping Li, Shu Yang, Keyuan Cheng, Yang Cao, Tianhao Huang, Lijie Hu, Lu Yu, Di Wang. Arxiv 2024.

  3. Training LLMs over Neurally Compressed Text. Brian Lester, Jaehoon Lee, Alex Alemi, Jeffrey Pennington, Adam Roberts, Jascha Sohl-Dickstein, Noah Constant. Arxiv 2024.

  4. Rethinking Kullback-Leibler Divergence in Knowledge Distillation for Large Language Models. Taiqiang Wu, Chaofan Tao, Jiahao Wang, Zhe Zhao, Ngai Wong. Arxiv 2024.

  5. Adapting LLMs for Efficient Context Processing through Soft Prompt Compression. Cangqing Wang, Yutian Yang, Ruisi Li, Dan Sun, Ruicong Cai, Yuzhu Zhang, Chengqian Fu, Lillian Floyd. Arxiv 2024.

  6. Model Tells You What to Discard: Adaptive KV Cache Compression for LLMs. Suyu Ge, Yunan Zhang, Liyuan Liu, Minjia Zhang, Jiawei Han, Jianfeng Gao. ICLR 2024 Oral.

  7. LLoCO: Learning Long Contexts Offline. Sijun Tan, Xiuyu Li, Shishir Patil, Ziyang Wu, Tianjun Zhang, Kurt Keutzer, Joseph E. Gonzalez, Raluca Ada Popa. Arxiv 2024.

        GitHub Repo stars

  1. In-Context Learning State Vector with Inner and Momentum Optimization. Dongfang Li, Zhenyu Liu, Xinshuo Hu, Zetian Sun, Baotian Hu, Min Zhang. Arxiv 2024.

        GitHub Repo stars
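
The prompt-compression methods above (Selective Context, LLMLingua, and related work) drop tokens whose removal loses little information, usually judged by a small language model's self-information or perplexity. The sketch below keeps the shape of that procedure but substitutes within-prompt word frequency for the scoring model, so treat it as an illustration of the interface rather than any paper's algorithm.

```python
import math
from collections import Counter

def compress_prompt(prompt: str, keep_ratio: float = 0.5) -> str:
    words = prompt.split()
    freq = Counter(w.lower() for w in words)
    total = sum(freq.values())
    # Crude "self-information" proxy: rarer words are assumed more informative.
    info = [-math.log(freq[w.lower()] / total) for w in words]
    k = max(1, int(len(words) * keep_ratio))
    keep = set(sorted(range(len(words)), key=lambda i: info[i], reverse=True)[:k])
    return " ".join(w for i, w in enumerate(words) if i in keep)   # preserve original order

print(compress_prompt("the the the meeting with Dr. Alvarez is moved to Tuesday the 14th", 0.5))
```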

10. Benchmark and Evaluation

10.1 LLM

  1. Long Range Arena : A Benchmark for Efficient Transformers. Yi Tay, Mostafa Dehghani, Samira Abnar, Yikang Shen, Dara Bahri, Philip Pham, Jinfeng Rao, Liu Yang, Sebastian Ruder, Donald Metzler. ICLR 2021.

        GitHub Repo stars

  1. LOT: A Story-Centric Benchmark for Evaluating Chinese Long Text Understanding and Generation. Jian Guan, Zhuoer Feng, Yamei Chen, Ruilin He, Xiaoxi Mao, Changjie Fan, Minlie Huang. TACL 2022.

        GitHub Repo stars

  1. SCROLLS: Standardized CompaRison Over Long Language Sequences. Uri Shaham, Elad Segal, Maor Ivgi, Avia Efrat, Ori Yoran, Adi Haviv, Ankit Gupta, Wenhan Xiong, Mor Geva, Jonathan Berant, Omer Levy. EMNLP 2022.

        GitHub Repo stars

  1. MuLD: The Multitask Long Document Benchmark. George Hudson, Noura Al Moubayed. LREC 2022.

        GitHub Repo stars

  1. Lost in the Middle: How Language Models Use Long Contexts. Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, Percy Liang. Arxiv 2023.

        GitHub Repo stars

  1. L-Eval: Instituting Standardized Evaluation for Long Context Language Models. Chenxin An, Shansan Gong, Ming Zhong, Mukai Li, Jun Zhang, Lingpeng Kong, Xipeng Qiu. Arxiv 2023.

        GitHub Repo stars

  1. LongBench: A Bilingual, Multitask Benchmark for Long Context Understanding. Yushi Bai, Xin Lv, Jiajie Zhang, Hongchang Lyu, Jiankai Tang, Zhidian Huang, Zhengxiao Du, Xiao Liu, Aohan Zeng, Lei Hou, Yuxiao Dong, Jie Tang, Juanzi Li. Arxiv 2023.

        GitHub Repo stars

  1. Content Reduction, Surprisal and Information Density Estimation for Long Documents. Shaoxiong Ji, Wei Sun, Pekka Marttinen. Arxiv 2023.

  2. BAMBOO: A Comprehensive Benchmark for Evaluating Long Text Modeling Capacities of Large Language Models. Zican Dong, Tianyi Tang, Junyi Li, Wayne Xin Zhao, Ji-Rong Wen. Arxiv 2023.

        GitHub Repo stars

  1. Retrieval meets Long Context Large Language Models. Peng Xu, Wei Ping, Xianchao Wu, Lawrence McAfee, Chen Zhu, Zihan Liu, Sandeep Subramanian, Evelina Bakhturina, Mohammad Shoeybi, Bryan Catanzaro. Arxiv 2023.

  2. LooGLE: Long Context Evaluation for Long-Context Language Models. Jiaqi Li, Mengmeng Wang, Zilong Zheng, Muhan Zhang. Arxiv 2023.

        GitHub Repo stars

  1. The Impact of Reasoning Step Length on Large Language Models. Mingyu Jin, Qinkai Yu, Dong shu, Haiyan Zhao, Wenyue Hua, Yanda Meng, Yongfeng Zhang, Mengnan Du. Arxiv 2024.

  2. DocFinQA: A Long-Context Financial Reasoning Dataset. Varshini Reddy, Rik Koncel-Kedziorski, Viet Dac Lai, Chris Tanner. Arxiv 2024.

  3. LongFin: A Multimodal Document Understanding Model for Long Financial Domain Documents. Ahmed Masry, Amir Hajian. Arxiv 2024.

  4. PROXYQA: An Alternative Framework for Evaluating Long-Form Text Generation with Large Language Models. Haochen Tan, Zhijiang Guo, Zhan Shi, Lu Xu, Zhili Liu, Xiaoguang Li, Yasheng Wang, Lifeng Shang, Qun Liu, Linqi Song. Arxiv 2024.

  5. LongHealth: A Question Answering Benchmark with Long Clinical Documents. Lisa Adams, Felix Busch, Tianyu Han, Jean-Baptiste Excoffier, Matthieu Ortala, Alexander Löser, Hugo JWL. Aerts, Jakob Nikolas Kather, Daniel Truhn, Keno Bressem. Arxiv 2024.

  6. Long-form evaluation of model editing. Domenic Rosati, Robie Gonzales, Jinkun Chen, Xuemin Yu, Melis Erkan, Yahya Kayani, Satya Deepika Chavatapalli, Frank Rudzicz, Hassan Sajjad. Arxiv 2024.

  7. In Search of Needles in a 10M Haystack: Recurrent Memory Finds What LLMs Miss. Yuri Kuratov, Aydar Bulatov, Petr Anokhin, Dmitry Sorokin, Artyom Sorokin, Mikhail Burtsev. Arxiv 2024.

        GitHub Repo stars

  1. ∞Bench: Extending Long Context Evaluation Beyond 100K Tokens. Xinrong Zhang, Yingfa Chen, Shengding Hu, Zihang Xu, Junhao Chen, Moo Khai Hao, Xu Han, Zhen Leng Thai, Shuo Wang, Zhiyuan Liu, Maosong Sun. Arxiv 2024.

  2. Same Task, More Tokens: the Impact of Input Length on the Reasoning Performance of Large Language Models. Mosh Levy, Alon Jacoby, Yoav Goldberg. Arxiv 2024.

        GitHub Repo stars

  1. Evaluating Very Long-Term Conversational Memory of LLM Agents. Adyasha Maharana, Dong-Ho Lee, Sergey Tulyakov, Mohit Bansal, Francesco Barbieri, Yuwei Fang. Arxiv 2024.

        GitHub Repo stars

  1. Language Models as Science Tutors. Alexis Chevalier, Jiayi Geng, Alexander Wettig, Howard Chen, Sebastian Mizera, Toni Annala, Max Jameson Aragon, Arturo Rodríguez Fanlo, Simon Frieder, Simon Machado, Akshara Prabhakar, Ellie Thieu, Jiachen T. Wang, Zirui Wang, Xindi Wu, Mengzhou Xia, Wenhan Jia, Jiatong Yu, Jun-Jie Zhu, Zhiyong Jason Ren, Sanjeev Arora, Danqi Chen. Arxiv 2024.

        GitHub Repo stars

  1. Needle In A Haystack - Pressure Testing LLMs. Greg Kamradt. GitHub 2024.

        GitHub Repo stars

  1. In Search of Needles in a 11M Haystack: Recurrent Memory Finds What LLMs Miss. Yuri Kuratov, Aydar Bulatov, Petr Anokhin, Dmitry Sorokin, Artyom Sorokin, Mikhail Burtsev. Arxiv 2024.

        GitHub Repo stars

  1. LV-Eval: A Balanced Long-Context Benchmark with 5 Length Levels Up to 256K. Tao Yuan, Xuefei Ning, Dong Zhou, Zhijie Yang, Shiyao Li, Minghui Zhuang, Zheyue Tan, Zhuyu Yao, Dahua Lin, Boxun Li, Guohao Dai, Shengen Yan, Yu Wang. Arxiv 2024.

        GitHub Repo stars

  1. Counting-Stars: A Simple, Efficient, and Reasonable Strategy for Evaluating Long-Context Large Language Models. Mingyang Song, Mao Zheng, Xuan Luo. Arxiv 2024.

        GitHub Repo stars

  1. NovelQA: A Benchmark for Long-Range Novel Question Answering. Cunxiang Wang, Ruoxi Ning, Boqi Pan, Tonghui Wu, Qipeng Guo, Cheng Deng, Guangsheng Bao, Qian Wang, Yue Zhang. Arxiv 2024.

        GitHub Repo stars

  1. Long-form factuality in large language models. Jerry Wei, Chengrun Yang, Xinying Song, Yifeng Lu, Nathan Hu, Dustin Tran, Daiyi Peng, Ruibo Liu, Da Huang, Cosmo Du, Quoc V. Le. Arxiv 2024.

        GitHub Repo stars

  1. LUQ: Long-text Uncertainty Quantification for LLMs. Caiqi Zhang, Fangyu Liu, Marco Basaldella, Nigel Collier. Arxiv 2024.

  2. CLongEval: A Chinese Benchmark for Evaluating Long-Context Large Language Models. Zexuan Qiu, Jingjing Li, Shijue Huang, Wanjun Zhong, Irwin King. Arxiv 2024.

        GitHub Repo stars

  1. Long-context LLMs Struggle with Long In-context Learning. Tianle Li, Ge Zhang, Quy Duc Do, Xiang Yue, Wenhu Chen. Arxiv 2024.

        GitHub Repo stars

  1. CLAPNQ: Cohesive Long-form Answers from Passages in Natural Questions for RAG systems. Sara Rosenthal, Avirup Sil, Radu Florian, Salim Roukos. Arxiv 2024.

        GitHub Repo stars

  1. XL2Bench: A Benchmark for Extremely Long Context Understanding with Long-range Dependencies. Xuanfan Ni, Hengyi Cai, Xiaochi Wei, Shuaiqiang Wang, Dawei Yin, Piji Li. Arxiv 2024.

        GitHub Repo stars

  1. Never Train from Scratch: Fair Comparison of Long-Sequence Models Requires Data-Driven Priors. Ido Amos, Jonathan Berant, Ankit Gupta. ICLR 2024 Oral.

  2. Ada-LEval: Evaluating long-context LLMs with length-adaptable benchmarks. Chonghua Wang, Haodong Duan, Songyang Zhang, Dahua Lin, Kai Chen. Arxiv 2024.

        GitHub Repo stars

  1. RULER: What's the Real Context Size of Your Long-Context Language Models? Cheng-Ping Hsieh, Simeng Sun, Samuel Kriman, Shantanu Acharya, Dima Rekesh, Fei Jia, Boris Ginsburg. Arxiv 2024.

        GitHub Repo stars

  1. LongEmbed: Extending Embedding Models for Long Context Retrieval. Dawei Zhu, Liang Wang, Nan Yang, Yifan Song, Wenhao Wu, Furu Wei, Sujian Li. Arxiv 2024.

        GitHub Repo stars

  1. Make Your LLM Fully Utilize the Context. Shengnan An, Zexiong Ma, Zeqi Lin, Nanning Zheng, Jian-Guang Lou. Arxiv 2024.

        GitHub Repo stars

  1. S3Eval: A Synthetic, Scalable, Systematic Evaluation Suite for Large Language Models. Fangyu Lei, Qian Liu, Yiming Huang, Shizhu He, Jun Zhao, Kang Liu. NAACL 2024.

        GitHub Repo stars

  1. In-Context Learning with Long-Context Models: An In-Depth Exploration. Amanda Bertsch, Maor Ivgi, Uri Alon, Jonathan Berant, Matthew R. Gormley, Graham Neubig. Arxiv 2024.

        GitHub Repo stars

  1. Many-shot Jailbreaking. Anthropic 2024.
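
Several benchmarks above build on the needle-in-a-haystack protocol popularized by Kamradt's repo: plant a known fact at a controlled depth inside filler text and check whether the model can retrieve it as the context grows. A minimal harness is sketched below; ask_model() is a hypothetical callable, and exact substring matching is the crudest possible grader, which real evaluations refine.

```python
import random

NEEDLE = "The secret passcode for the archive is 7421."
QUESTION = "What is the secret passcode for the archive?"

def build_haystack(filler_sentences, n_sentences, depth):
    """Insert the needle at a relative depth (0.0 = start, 1.0 = end) of the context."""
    body = [random.choice(filler_sentences) for _ in range(n_sentences)]
    body.insert(int(depth * n_sentences), NEEDLE)
    return " ".join(body)

def run_sweep(ask_model, filler, lengths=(100, 500, 2000), depths=(0.0, 0.25, 0.5, 0.75, 1.0)):
    results = {}
    for n in lengths:
        for d in depths:
            prompt = build_haystack(filler, n, d) + "\n\n" + QUESTION
            results[(n, d)] = "7421" in ask_model(prompt)   # hypothetical model call
    return results
```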

10.2 MLLM

  1. MileBench: Benchmarking MLLMs in Long Context. Dingjie Song, Shunian Chen, Guiming Hardy Chen, Fei Yu, Xiang Wan, Benyou Wang. Arxiv 2024.

        GitHub Repo stars

11. Blogs

  1. Extending Context is Hard…but not Impossible†. kaiokendev. 2023.

  2. NTK-Aware Scaled RoPE. u/bloc97. 2023.

  3. The Secret Sauce behind 100K context window in LLMs: all tricks in one place. Galina Alperovich. 2023.

  4. Transformer升级之路 (The Road to Transformer Upgrades): 7. Length Extrapolation and Local Attention. Jianlin Su (苏剑林). 2023.

  5. Transformer升级之路 (The Road to Transformer Upgrades): 9. A New Approach to Global Length Extrapolation. Jianlin Su (苏剑林). 2023.

  6. Transformer升级之路 (The Road to Transformer Upgrades): 12. ReRoPE for Unbounded Extrapolation. Jianlin Su (苏剑林). 2023.

  7. Transformer升级之路 (The Road to Transformer Upgrades): 14. When HWFA Meets ReRoPE. Jianlin Su (苏剑林). 2023.

  8. Transformer升级之路 (The Road to Transformer Upgrades): 15. Key Normalization Helps Length Extrapolation. Jianlin Su (苏剑林). 2023.

  9. Transformer升级之路 (The Road to Transformer Upgrades): 16. Taking Stock of Length Extrapolation Techniques. Jianlin Su (苏剑林). 2024.

  10. Introducing RAG 2.0. Contextual AI Team. 2024.

  11. How Do Language Models put Attention Weights over Long Context?. Yao Fu. 2024.

  12. An open-source and open-access RAG platform. Yunfan Gao. 2024.

  13. Many-shot Jailbreaking. Anthropic. 2024.
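
The "NTK-Aware Scaled RoPE" post above extends the context window without fine-tuning by enlarging the RoPE frequency base rather than shrinking positions, which leaves high-frequency dimensions nearly untouched while stretching the low-frequency ones. The commonly cited rescaling is base' = base * s^(d / (d - 2)) for a scale factor s and rotary dimension d; a one-line sketch, with illustrative default values:

```python
def ntk_scaled_base(base: float = 10000.0, scale: float = 4.0, head_dim: int = 128) -> float:
    """NTK-aware RoPE: grow the base so the lowest frequency stretches by roughly `scale`."""
    return base * scale ** (head_dim / (head_dim - 2))

print(ntk_scaled_base())   # about 4.09x the original base for a 4x context extension
```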

Acknowledgements

Please contact me if I missed your name in the list, and I will add you ASAP!
