GPT-3.5-ON-STEROIDS combines GPT with Python tools, enabling dynamic web scraping, language processing, and data retrieval. Contribute to advancing text generation with AI. 🚀
To manage the agent's long-term memory, integration of a vector database is required.
Currently we write memory to a text file that the agent can refer to by reading it, which only helps the agent temporarily during a specific task.
Use Streamlit's file_uploader and analyze file formats such as PDF, Word, and Excel.
You can use Python modules like PyPDF2, python-docx, and openpyxl, and show a visual summary of the data they contain.
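A minimal sketch of how that upload-and-summarize flow could look, assuming streamlit, PyPDF2, python-docx, and openpyxl are installed; the helper names and dispatch table are our own, not part of the existing codebase:

```python
# Each summarizer imports its dependency lazily so the module loads
# even when only some of the file-format libraries are installed.

def summarize_pdf(fileobj):
    from PyPDF2 import PdfReader
    reader = PdfReader(fileobj)
    return f"PDF with {len(reader.pages)} page(s)"

def summarize_docx(fileobj):
    from docx import Document
    doc = Document(fileobj)
    return f"Word document with {len(doc.paragraphs)} paragraph(s)"

def summarize_xlsx(fileobj):
    from openpyxl import load_workbook
    wb = load_workbook(fileobj, read_only=True)
    return f"Excel workbook with sheets: {', '.join(wb.sheetnames)}"

# Map file extensions to summarizers so the UI code stays a simple lookup.
SUMMARIZERS = {"pdf": summarize_pdf, "docx": summarize_docx, "xlsx": summarize_xlsx}

def run_app():
    import streamlit as st
    uploaded = st.file_uploader("Upload a file", type=list(SUMMARIZERS))
    if uploaded is not None:
        ext = uploaded.name.rsplit(".", 1)[-1].lower()
        st.write(SUMMARIZERS[ext](uploaded))
```

The dispatch table also makes it easy to add more formats later without touching the UI code.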
Hallucination in AI refers to the generation of outputs that may sound plausible but are either factually incorrect or unrelated to the given context. The agent may assume file paths and commands that do not exist, so we need a better error-handling mechanism for this.
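One way to guard against this is to validate every path and command before the agent acts on it, returning an error message the agent can react to instead of crashing. A sketch, with function names of our own choosing:

```python
import os
import shutil

def safe_read_file(path):
    """Read a file the model asked for, failing gracefully if the path was hallucinated."""
    if not os.path.isfile(path):
        return f"ERROR: file '{path}' does not exist; choose another action."
    with open(path, "r", encoding="utf-8") as f:
        return f.read()

def check_command(cmd):
    """Verify that a command's executable actually exists before running it."""
    if shutil.which(cmd.split()[0]) is None:
        return f"ERROR: command '{cmd}' not found; choose another action."
    return "OK"
```

Feeding the ERROR string back into the prompt gives the model a chance to self-correct rather than looping on a nonexistent path.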
Is your feature request related to a problem? Please describe.
The repository contains Python code, but it does not have a workflow for code scanning.
Describe the solution you'd like
I want to add the CodeQL workflow to automate security checks. CodeQL is the code analysis engine developed by GitHub to identify vulnerabilities in code. It will analyze your code and display the results as code scanning alerts, and it will run on every push and pull request using GitHub Actions.
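A minimal workflow along those lines might look as follows; the branch name and action versions are assumptions to adjust for the repository:

```yaml
name: "CodeQL"

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]

jobs:
  analyze:
    runs-on: ubuntu-latest
    permissions:
      security-events: write
    steps:
      - uses: actions/checkout@v3
      - uses: github/codeql-action/init@v2
        with:
          languages: python
      - uses: github/codeql-action/analyze@v2
```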
Right now, the app just displays the "thoughts" of GPT-3.5. When the user asks a question, it does not give them the final output after completion. I think it would be beneficial if the program let users know that these are just thoughts and not the actual answer to their question.
I've thought of two solutions to this problem.
1. We can go the "Bing" way by showing the user short status lines like "Searching on Google", "Searching Wikipedia", "Writing to file", "Reading from file" instead of the entire thought process of GPT. After it completes, we can show the final output to the user.
2. We can hide the thoughts entirely and just show a loading indicator. After it completes, we can show the final output to the user.
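The first option could be sketched with Streamlit's status container (assumes Streamlit >= 1.25 for `st.status`; `run_step` and `render_answer` are hypothetical hooks standing in for the real tool calls and output rendering):

```python
def show_progress(steps, run_step, render_answer):
    """Show short status lines ("Searching Google...") instead of raw GPT
    thoughts, then reveal only the final answer once all steps finish."""
    import streamlit as st  # imported lazily so the helper is testable headless
    with st.status("Working on your question...") as status:
        answer = None
        for step in steps:
            status.update(label=f"{step}...")  # e.g. "Searching Wikipedia..."
            answer = run_step(step)
        status.update(label="Done", state="complete")
    render_answer(answer)
```

The second option is the same loop with a plain `st.spinner` and no per-step labels.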
As in our Issue #11, we added the ability to read various data formats. Now we need tools that present the analysis to the user visually, e.g. with charts. The agent can decide the type of visualization; there should be a function to plot it.
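A minimal sketch of such a plotting tool, assuming pandas, matplotlib, and Streamlit are available; the function name and the set of allowed chart kinds are our own choices:

```python
# Chart kinds the agent is allowed to request (a subset of pandas' plot kinds).
ALLOWED_KINDS = {"line", "bar", "scatter", "hist"}

def plot_data(df, kind, x=None, y=None):
    """Render the chart type chosen by the agent and show it in Streamlit."""
    if kind not in ALLOWED_KINDS:
        # Return an error the agent can read and retry on, instead of raising.
        return f"ERROR: unsupported chart type '{kind}'"
    import streamlit as st  # lazy import keeps the helper testable headless
    ax = df.plot(kind=kind, x=x, y=y)  # pandas dispatches to matplotlib
    st.pyplot(ax.figure)
    return "OK"
```

Whitelisting the kinds guards against the agent hallucinating a chart type that pandas does not support.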
Our agent can read and write Python scripts, but it lacks the capability to execute them on the server. So create a function execute_python_file(filepath, required_data):
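A sketch of that function using a subprocess, so a crashing script cannot take down the agent. The issue doesn't say how `required_data` should reach the script; passing it as JSON on stdin is our assumption, as is the timeout:

```python
import json
import subprocess
import sys

def execute_python_file(filepath, required_data=None, timeout=60):
    """Run a Python script in a subprocess and return its output.
    required_data is sent to the script as a JSON string on stdin
    (an assumption; the issue leaves the mechanism unspecified)."""
    try:
        result = subprocess.run(
            [sys.executable, filepath],
            input=json.dumps(required_data or {}),
            capture_output=True,
            text=True,
            timeout=timeout,  # keep a runaway script from hanging the agent
        )
    except subprocess.TimeoutExpired:
        return f"ERROR: {filepath} timed out after {timeout}s"
    if result.returncode != 0:
        return f"ERROR: {result.stderr.strip()}"
    return result.stdout
```

Returning stderr as an ERROR string lets the agent see why its script failed and rewrite it.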
Python OCR is a technology that recognizes and pulls out text in images, such as scanned documents and photos, using Python. It can be done with the open-source OCR engine Tesseract. By integrating it, our agent will be able to analyze and understand images better, which will increase its overall intelligence.
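The integration could be as small as one tool function, assuming the pytesseract and Pillow packages plus a local Tesseract install; the function name is ours:

```python
def extract_image_text(image_path):
    """OCR an image file with Tesseract and return the recognized text."""
    from PIL import Image   # lazy imports keep the module importable
    import pytesseract      # even without the OCR dependencies present
    return pytesseract.image_to_string(Image.open(image_path))
```

The returned text can then be fed into the agent's prompt like any file read.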
In the file-analysis feature, we are using PandasAI for the analysis of Excel files, but plots are not appearing in the Streamlit frontend even after using PandasAI's StreamlitMiddleware.
Getting an OpenAI key is a pain because it expires every 3 months and getting a new one needs a new phone number for verification, so adding an option for an open-source LLM would be great. Allow the user to choose their LLM from a dropdown in Streamlit.
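The dropdown could be driven by a small registry; the backend names and constructor stubs below are placeholders for whichever open-source model is actually wired in:

```python
# Registry mapping display names to constructor callables. The entries are
# illustrative stubs -- replace the lambdas with real client constructors.
LLM_BACKENDS = {
    "OpenAI GPT-3.5": lambda: "openai-backend",        # needs an API key
    "Open-source LLM (local)": lambda: "local-backend", # no key required
}

def pick_llm():
    """Let the user choose an LLM backend from a Streamlit dropdown."""
    import streamlit as st
    choice = st.selectbox("Choose your LLM", list(LLM_BACKENDS))
    return LLM_BACKENDS[choice]()
```

Keeping construction behind a callable means no backend is initialized (or billed) until the user actually selects it.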
This feature will let users get a summary of most YouTube videos that have English subtitles. The program will first fetch the subtitles of the video mentioned by the user, and then GPT will summarise them and show the result to the user.
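The two steps could look like this, assuming the youtube-transcript-api package for the subtitle fetch; `llm` stands in for whatever GPT call the project uses:

```python
def fetch_subtitles(video_id):
    """Fetch a video's English subtitles as one string."""
    from youtube_transcript_api import YouTubeTranscriptApi  # lazy third-party import
    entries = YouTubeTranscriptApi.get_transcript(video_id, languages=["en"])
    return " ".join(e["text"] for e in entries)

def summarize_video(video_id, llm):
    """Summarise a video; llm is any callable mapping a prompt to text."""
    transcript = fetch_subtitles(video_id)
    return llm(f"Summarise this YouTube transcript:\n\n{transcript}")
```

Very long transcripts may exceed the model's context window, so chunked summarisation would be a natural follow-up.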
This issue addresses the need for a GitHub Actions workflow that automatically assigns reviewers to pull requests upon their opening. Automating the reviewer assignment process will streamline code review processes and ensure timely feedback from team members.
Proposed Feature:
The proposed feature involves the creation of a GitHub Actions workflow that triggers upon the opening of a pull request. This workflow will automatically assign designated reviewers based on a predefined list of reviewers in the workflow config.
Expected Behavior:
When a pull request is opened, the workflow should trigger automatically.
Based on defined rules or configurations, reviewers will be assigned to the pull request.
Reviewer assignments should be customizable and adaptable to project-specific requirements.
Benefits:
Streamlines the code review process by automating reviewer assignments.
Ensures timely feedback and reduces bottlenecks in the development workflow.
Improves collaboration and accountability among team members.
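One way to sketch the workflow described above is with GitHub's own github-script action; the reviewer usernames are placeholders for the predefined list:

```yaml
name: "Auto-assign reviewers"

on:
  pull_request:
    types: [opened]

jobs:
  assign:
    runs-on: ubuntu-latest
    permissions:
      pull-requests: write
    steps:
      - uses: actions/github-script@v6
        with:
          script: |
            // Replace these placeholder usernames with the project's reviewer list.
            await github.rest.pulls.requestReviewers({
              owner: context.repo.owner,
              repo: context.repo.repo,
              pull_number: context.payload.pull_request.number,
              reviewers: ["reviewer-one", "reviewer-two"],
            });
```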