Compressing Documents to Augment Language Models More Efficiently
Recent years have seen language models like GPT-3 grow to hundreds of billions of parameters.
One common technique to enhance their performance is retrieval augmentation: prepending relevant documents to the prompt to provide more context. However, every prepended document lengthens the prompt, adding significant compute and latency overhead at inference time.
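As a rough illustration of the pattern and its cost, here is a minimal sketch of retrieval augmentation. The corpus, the word-overlap scorer, and the prompt template are all illustrative assumptions, not details from the post:

```python
# A minimal sketch of retrieval augmentation: retrieve the documents
# most relevant to a query, then prepend them to the prompt.
# The corpus, scorer, and template below are illustrative assumptions.

def score(query: str, doc: str) -> int:
    """Count query words that also appear in the document."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k documents with the highest word overlap."""
    return sorted(corpus, key=lambda d: score(query, d), reverse=True)[:k]

def augment(query: str, docs: list[str]) -> str:
    """Prepend retrieved documents to the query as extra context."""
    context = "\n\n".join(docs)
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

corpus = [
    "GPT-3 is a language model with 175 billion parameters.",
    "Retrieval augmentation prepends relevant documents to a prompt.",
    "Seattle is home to several AI research institutes.",
]
query = "How many parameters does GPT-3 have?"
prompt = augment(query, retrieve(query, corpus))

# The overhead in question: the augmented prompt is several times
# longer than the bare query, and model cost grows with prompt length.
print(f"query length:  {len(query.split())} words")
print(f"prompt length: {len(prompt.split())} words")
```

Every retrieved document lengthens the prompt, and the model's cost grows with prompt length, which is exactly the overhead that document compression targets.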
Researchers from the University of Washington and the Allen Institute for AI have proposed compressing documents to augment language models more efficiently.
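The post is paywalled before the method is described, so the sketch below is not the authors' approach; it only illustrates the generic idea in the title, using naive extractive compression (keep the sentences most relevant to the query) before prepending:

```python
# NOT the paper's method (the post is truncated before describing it).
# A generic illustration of document compression for augmentation:
# keep only the sentences most relevant to the query.

def compress(query: str, doc: str, max_sentences: int = 1) -> str:
    """Extractive compression: keep the top query-overlapping sentences."""
    q = set(query.lower().split())
    sentences = [s.strip() for s in doc.split(".") if s.strip()]
    ranked = sorted(sentences,
                    key=lambda s: len(q & set(s.lower().split())),
                    reverse=True)
    return ". ".join(ranked[:max_sentences]) + "."

doc = ("GPT-3 is a language model with 175 billion parameters. "
       "It was trained on hundreds of billions of tokens. "
       "Its API was released in 2020.")
query = "How many parameters does GPT-3 have?"
print(compress(query, doc))
# -> "GPT-3 is a language model with 175 billion parameters."
```

The appeal of any such scheme is that the compressed document carries the relevant context at a fraction of the prompt-length cost shown above.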