Paper Insights uses the Retrieval Augmented Generation (RAG) capabilities of LlamaIndex to answer questions about research paper documents. This project is based on SEC Insights.
You can also check out the SEC Insights end-to-end tutorial guide on YouTube for a similar project! This video covers product features, system architecture, development environment setup, and how to use the application with your own custom documents (beyond just research papers!). The video has chapters so you can skip to the section most relevant to you.
I created the project "Paper Insights" and made it public on GitHub due to a deeply personal motivation: my son's chronic kidney condition, nephrotic syndrome. The root cause of this disease remains elusive, which led me into extensive research and study of medical papers. I quickly realized the immense challenge of navigating complex scientific literature. To address this, I developed "Paper Insights," a tool designed to help read and analyze scientific papers more effectively. It lets users ask questions, gain insights from multiple documents at once, and accurately trace any conclusion or line of reasoning back to its original sources. My goal in creating this project was to empower myself and others who seek to understand, and possibly find cures for, diseases that affect our loved ones.
- Chat-based Document Q&A against a pool of documents
- Citation of the source data that the LLM response was based on
- PDF Viewer with highlighting of citations
- Use of API-based tools (polygon.io) for answering quantitative questions
- Token-level streaming of LLM responses via Server-Sent Events
- Streaming of Reasoning Steps (Sub-Questions) within Chat
- Infrastructure-as-code for deploying directly to Vercel & Render
- Continuous deployments provided by Vercel & Render.com. Shipping changes is as easy as merging into your `main` branch.
- Production & Preview environments for both Frontend & Backend deployments! Easily try your changes before release.
- Robust local environment setup making use of LocalStack & Docker compose
- Monitoring & Profiling provided by Sentry
- Load Testing provided by Loader.io
- Variety of python scripts for REPL-based chat & data management
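Token-level streaming via Server-Sent Events means the client receives the LLM response as a stream of `data:` lines and reassembles the answer incrementally. A minimal sketch of a client-side SSE line parser follows; the `{"token": ...}` payload shape and the `[DONE]` sentinel are illustrative assumptions, not this project's exact wire format:

```python
import json

def parse_sse(raw_lines):
    """Yield the decoded JSON payload of each SSE `data:` line.

    Assumes each event carries a JSON object like {"token": "..."}
    and that the stream ends with a `[DONE]` sentinel -- both are
    guesses for illustration, not the project's actual protocol.
    """
    for line in raw_lines:
        line = line.strip()
        if line.startswith("data:"):
            payload = line[len("data:"):].strip()
            if payload and payload != "[DONE]":
                yield json.loads(payload)

# Example: reassemble a streamed answer token by token.
stream = [
    'data: {"token": "Nephrotic"}',
    'data: {"token": " syndrome"}',
    "",  # a blank line separates SSE events
    "data: [DONE]",
]
answer = "".join(event["token"] for event in parse_sse(stream))
print(answer)  # Nephrotic syndrome
```

In a real client you would iterate over the lines of a streaming HTTP response to the chat endpoint instead of a hard-coded list.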
- Frontend
- Backend
- FastAPI
- Docker
- SQLAlchemy
- OpenAI (gpt-3.5-turbo + text-embedding-ada-002)
- PGVector
- LlamaIndex 🦙
- Infrastructure
- Render.com
- Backend hosting
- Postgres 15
- Vercel
- Frontend Hosting
- AWS
See the `README.md` files in the `frontend/` & `backend/` folders for individual setup instructions for each. As mentioned above, we also have a YouTube tutorial here that covers how to set up this project's development environment.
We've also included a config for a GitHub Codespace in `.devcontainer/devcontainer.json`. If you choose to use GitHub Codespaces, your codespace will come pre-configured with many of the libraries and system dependencies needed to run this project. This is probably the fastest way to get the project up and running! That said, developers have also successfully set up this project in Linux, macOS, and Windows environments.
If you have any questions when trying to run this project, you may find your answer quickly by reviewing our FAQ or by searching through our GitHub issues! If you don't see a satisfactory answer to your question, feel free to open a GitHub issue so we may assist you!
- The frontend currently doesn't support mobile
We remain very open to contributions! We look forward to seeing your ideas and improvements.