
farzad-r / llm-zero-to-hundred

This repository contains various LLM chatbot projects (RAG, LLM agents, etc.) and well-known techniques for training and fine-tuning LLMs.

Jupyter Notebook 83.97% Python 15.98% Shell 0.05%

llm-zero-to-hundred's Introduction

Welcome to my GitHub!

I love designing and training deep learning models and developing systems with LLM agents to enhance task efficiency. I also work on open-source projects and explain them on my YouTube channel in my spare time.


llm-zero-to-hundred's People

Contributors

farzad-r


llm-zero-to-hundred's Issues

'latin-1' codec can't encode character

Hello,

When chatting with the bot, I often encounter this error:

File "C:\Repos\AI_project\Demo\demo_2024_05_02\RAG_GPT_OpenAI\src\utils\chatbot.py", line 60, in respond
retrieved_content = ChatBot.clean_references(docs)

File "C:\Repos\AI_project\Demo\demo_2024_05_02\RAG_GPT_OpenAI\src\utils\chatbot.py", line 117, in clean_references
content = content.encode('latin1').decode('utf-8', 'ignore')

UnicodeEncodeError: 'latin-1' codec can't encode character '\uf0b7' in position 383: ordinal not in range(256)

Do you happen to know how to solve this issue?
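One way to make the round trip robust (a sketch, not the maintainer's confirmed fix) is to ignore errors on the encode side as well, so PDF artifacts such as the `'\uf0b7'` bullet, which have no latin-1 representation, are dropped instead of raising:

```python
def clean_references_safe(content: str) -> str:
    # The latin-1 -> utf-8 round trip repairs mojibake, but latin-1 cannot
    # encode code points above U+00FF (e.g. the PDF bullet '\uf0b7'), so
    # ignore those on the encode side too instead of raising.
    return content.encode("latin-1", "ignore").decode("utf-8", "ignore")
```

Note that this silently drops the offending characters; if they carry meaning (e.g. list bullets), filter or replace them before the round trip instead.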

API Deployment resource does not exist

Hi Farzad. Hope you are doing good.
I'm encountering an issue while running the app.
I have created a .env file like this:
OPENAI_API_TYPE="azure"
OPENAI_API_BASE="https://hidden.openai.azure.com/"
OPENAI_API_VERSION="2024-02-15-preview"
OPENAI_API_KEY="hidden"
DEPLOYMENT_NAME_1="hidden-35turbo"
DEPLOYMENT_NAME_2="hidden-ada002embeddings"

I have made the following changes to load_config.py; I only altered the load_openai_cfg function:

    def load_openai_cfg(self):
        """
        Load OpenAI configuration settings.

        This function sets the OpenAI API configuration settings, including the API type, base URL,
        version, and API key. It is intended to be called at the beginning of the script or application
        to configure OpenAI settings.
        """
        openai.api_type = os.getenv("OPENAI_API_TYPE")
        openai.api_base = os.getenv("OPENAI_API_BASE")
        openai.api_version = os.getenv("OPENAI_API_VERSION")
        openai.api_key = os.getenv("OPENAI_API_KEY")
        deployment_name_1 = os.getenv("DEPLOYMENT_NAME_1")
        deployment_name_2 = os.getenv("DEPLOYMENT_NAME_2")

When I run the upload_data_manually.py file, it creates the vector database and chunks the documents, and then throws this error:

PS C:\Users\ranaa\Desktop\LLM-Zero-to-Hundred-master\RAG-GPT> python src/upload_data_manually.py

The directory 'C:\Users\ranaa\Desktop\LLM-Zero-to-Hundred-master\RAG-GPT\data\vectordb\uploaded\chroma' does not exist.
Loading documents manually...
Number of loaded documents: 4
Number of pages: 86

Chunking documents...
Number of chunks: 351

Preparing vectordb...
Traceback (most recent call last):
File "C:\Users\ranaa\Desktop\LLM-Zero-to-Hundred-master\RAG-GPT\src\upload_data_manually.py", line 40, in <module>
upload_data_manually()
File "C:\Users\ranaa\Desktop\LLM-Zero-to-Hundred-master\RAG-GPT\src\upload_data_manually.py", line 33, in upload_data_manually
prepare_vectordb_instance.prepare_and_save_vectordb()
File "C:\Users\ranaa\Desktop\LLM-Zero-to-Hundred-master\RAG-GPT\src\prepare_vectordb.py", line 110, in prepare_and_save_vectordb
vectordb = Chroma.from_documents(
^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\ranaa\Desktop\LLM-Zero-to-Hundred-master\RAG-GPT\venv\Lib\site-packages\langchain_community\vectorstores\chroma.py", line 778, in from_documents
return cls.from_texts(
^^^^^^^^^^^^^^^
File "C:\Users\ranaa\Desktop\LLM-Zero-to-Hundred-master\RAG-GPT\venv\Lib\site-packages\langchain_community\vectorstores\chroma.py", line 736, in from_texts
chroma_collection.add_texts(
File "C:\Users\ranaa\Desktop\LLM-Zero-to-Hundred-master\RAG-GPT\venv\Lib\site-packages\langchain_community\vectorstores\chroma.py", line 275, in add_texts
embeddings = self._embedding_function.embed_documents(texts)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\ranaa\Desktop\LLM-Zero-to-Hundred-master\RAG-GPT\venv\Lib\site-packages\langchain_community\embeddings\openai.py", line 662, in embed_documents
return self._get_len_safe_embeddings(texts, engine=engine)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\ranaa\Desktop\LLM-Zero-to-Hundred-master\RAG-GPT\venv\Lib\site-packages\langchain_community\embeddings\openai.py", line 488, in _get_len_safe_embeddings
response = embed_with_retry(
^^^^^^^^^^^^^^^^^
File "C:\Users\ranaa\Desktop\LLM-Zero-to-Hundred-master\RAG-GPT\venv\Lib\site-packages\langchain_community\embeddings\openai.py", line 123, in embed_with_retry
return _embed_with_retry(**kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\ranaa\Desktop\LLM-Zero-to-Hundred-master\RAG-GPT\venv\Lib\site-packages\tenacity\__init__.py", line 289, in wrapped_f
return self(f, *args, **kw)
^^^^^^^^^^^^^^^^^^^^
File "C:\Users\ranaa\Desktop\LLM-Zero-to-Hundred-master\RAG-GPT\venv\Lib\site-packages\tenacity\__init__.py", line 379, in __call__
do = self.iter(retry_state=retry_state)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\ranaa\Desktop\LLM-Zero-to-Hundred-master\RAG-GPT\venv\Lib\site-packages\tenacity\__init__.py", line 314, in iter
return fut.result()
^^^^^^^^^^^^
File "C:\Users\ranaa\AppData\Local\Programs\Python\Python311\Lib\concurrent\futures\_base.py", line 449, in result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^
File "C:\Users\ranaa\AppData\Local\Programs\Python\Python311\Lib\concurrent\futures\_base.py", line 401, in __get_result
raise self._exception
File "C:\Users\ranaa\Desktop\LLM-Zero-to-Hundred-master\RAG-GPT\venv\Lib\site-packages\tenacity\__init__.py", line 382, in __call__
result = fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "C:\Users\ranaa\Desktop\LLM-Zero-to-Hundred-master\RAG-GPT\venv\Lib\site-packages\langchain_community\embeddings\openai.py", line 120, in _embed_with_retry
response = embeddings.client.create(**kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\ranaa\Desktop\LLM-Zero-to-Hundred-master\RAG-GPT\venv\Lib\site-packages\openai\api_resources\embedding.py", line 33, in create
response = super().create(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\ranaa\Desktop\LLM-Zero-to-Hundred-master\RAG-GPT\venv\Lib\site-packages\openai\api_resources\abstract\engine_api_resource.py", line 153, in create
response, _, api_key = requestor.request(
^^^^^^^^^^^^^^^^^^
File "C:\Users\ranaa\Desktop\LLM-Zero-to-Hundred-master\RAG-GPT\venv\Lib\site-packages\openai\api_requestor.py", line 298, in request
resp, got_stream = self._interpret_response(result, stream)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\ranaa\Desktop\LLM-Zero-to-Hundred-master\RAG-GPT\venv\Lib\site-packages\openai\api_requestor.py", line 700, in _interpret_response
self._interpret_response_line(
File "C:\Users\ranaa\Desktop\LLM-Zero-to-Hundred-master\RAG-GPT\venv\Lib\site-packages\openai\api_requestor.py", line 765, in _interpret_response_line
raise self.handle_error_response(
openai.error.InvalidRequestError: The API deployment for this resource does not exist. If you created the deployment within the last 5 minutes, please wait a moment and try again.

Both the gpt-3.5-turbo and ada-embeddings-002 models are deployed in Azure OpenAI Studio, and I have tested them in the playground to make sure they work. I think this has something to do with the configuration in the .env file; maybe I need to pass the deployment names in some other manner. Can you please tell me the format in which you passed the .env values, and any other changes you made in load_config.py or app_config?
Please respond asap.
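On Azure, a "deployment does not exist" error usually means the request named the base model instead of the deployment. With the legacy openai==0.28 SDK used here, each call must carry the deployment name, typically via engine= (or deployment_id=). The helper below is a hypothetical stdlib-only sketch of that call shape, not the repo's confirmed code:

```python
import os

def azure_embedding_kwargs(texts):
    # On Azure, `engine` must be the *deployment* name created in Azure
    # OpenAI Studio (DEPLOYMENT_NAME_2 in the .env above), not the base
    # model name "text-embedding-ada-002".
    return {
        "input": texts,
        "engine": os.getenv("DEPLOYMENT_NAME_2", "hidden-ada002embeddings"),
    }

# usage sketch: openai.Embedding.create(**azure_embedding_kwargs(["hello world"]))
```

If the embeddings go through LangChain's OpenAIEmbeddings instead, the equivalent is passing the deployment name to that wrapper rather than relying on the env variables alone.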

Conflicting requirements

Hello! I was unable to install the requirements. I tried both via requirements.txt and the manual installation process as instructed in WebRAGQuery. Using the latter method, I'm getting the following error:

ERROR: Cannot install chainlit==0.7.700 and duckduckgo-search==3.9.6 because these package versions have conflicting dependencies.

The conflict is caused by:
chainlit 0.7.700 depends on httpx<0.25.0 and >=0.23.0
duckduckgo-search 3.9.6 depends on httpx>=0.25.1

To fix this you could try to:

  1. loosen the range of package versions you've specified
  2. remove package versions to allow pip attempt to solve the dependency conflict

ERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/topics/dependency-resolution/#dealing-with-dependency-conflicts

Are there updated versions of these packages in your environment?

Thanks
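One possible workaround (a sketch, not the maintainer's fix) is to drop the exact pins for the two conflicting packages so pip's resolver can choose mutually compatible versions itself; this loosens the tested pins, so treat the result as best-effort. Demonstrated here on a sample file with hypothetical names:

```shell
# Remove the "==version" pin from the two conflicting packages only,
# leaving every other pin intact:
printf 'chainlit==0.7.700\nduckduckgo-search==3.9.6\nhttpx==0.24.1\n' > req_conflict.txt
sed -E 's/^(chainlit|duckduckgo-search)==.*/\1/' req_conflict.txt > req_loose.txt
# then: pip install -r req_loose.txt
```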

ERROR: Could not find a version that satisfies the requirement bzip2==1.0.8 (from versions: none) ERROR: No matching distribution found for bzip2==1.0.8

Ubuntu wsl
Windows 11

(projectenv) user@Nathan:~/LLM-Zero-to-Hundred$ pip install -r requirements.txt
Collecting build==1.0.3 (from -r requirements.txt (line 19))
Downloading build-1.0.3-py3-none-any.whl.metadata (4.2 kB)
ERROR: Could not find a version that satisfies the requirement bzip2==1.0.8=he774522_0 (from versions: none)
ERROR: No matching distribution found for bzip2==1.0.8=he774522_0

(projectenv) user@Nathan:~/LLM-Zero-to-Hundred$ conda install anaconda::bzip2
Channels:
 - defaults
 - anaconda
Platform: linux-64
Collecting package metadata (repodata.json): done
Solving environment: done

Package Plan

environment location: /home/user/miniconda3/envs/projectenv

added / updated specs:
- anaconda::bzip2

The following packages will be downloaded:

package                    |            build
---------------------------|-----------------
bzip2-1.0.8                |       h7b6447c_0         105 KB  anaconda
------------------------------------------------------------
                                       Total:         105 KB

The following packages will be SUPERSEDED by a higher-priority channel:

bzip2 pkgs/main::bzip2-1.0.8-h5eee18b_5 --> anaconda::bzip2-1.0.8-h7b6447c_0

Proceed ([y]/n)? y

Downloading and Extracting Packages:

Preparing transaction: done
Verifying transaction: done
Executing transaction: done
(projectenv) user@Nathan:~/LLM-Zero-to-Hundred$ pip install -r requirements.txt
Collecting aiofiles==23.2.1 (from -r requirements.txt (line 4))
Using cached aiofiles-23.2.1-py3-none-any.whl.metadata (9.7 kB)
Collecting aiohttp==3.9.1 (from -r requirements.txt (line 5))
Using cached aiohttp-3.9.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (7.4 kB)
Collecting aiosignal==1.3.1 (from -r requirements.txt (line 6))
Using cached aiosignal-1.3.1-py3-none-any.whl.metadata (4.0 kB)
Collecting altair==5.2.0 (from -r requirements.txt (line 7))
Using cached altair-5.2.0-py3-none-any.whl.metadata (8.7 kB)
Collecting annotated-types==0.6.0 (from -r requirements.txt (line 8))
Using cached annotated_types-0.6.0-py3-none-any.whl.metadata (12 kB)
Collecting anyio==3.7.1 (from -r requirements.txt (line 9))
Using cached anyio-3.7.1-py3-none-any.whl.metadata (4.7 kB)
Collecting asgiref==3.7.2 (from -r requirements.txt (line 10))
Using cached asgiref-3.7.2-py3-none-any.whl.metadata (9.2 kB)
Collecting asyncer==0.0.2 (from -r requirements.txt (line 11))
Using cached asyncer-0.0.2-py3-none-any.whl.metadata (6.8 kB)
Collecting attrs==23.2.0 (from -r requirements.txt (line 12))
Using cached attrs-23.2.0-py3-none-any.whl.metadata (9.5 kB)
Collecting backoff==2.2.1 (from -r requirements.txt (line 13))
Using cached backoff-2.2.1-py3-none-any.whl.metadata (14 kB)
Collecting bcrypt==4.1.2 (from -r requirements.txt (line 14))
Using cached bcrypt-4.1.2-cp39-abi3-manylinux_2_28_x86_64.whl.metadata (9.5 kB)
Collecting beautifulsoup4==4.12.2 (from -r requirements.txt (line 15))
Using cached beautifulsoup4-4.12.2-py3-none-any.whl.metadata (3.6 kB)
Collecting bidict==0.22.1 (from -r requirements.txt (line 16))
Using cached bidict-0.22.1-py3-none-any.whl.metadata (10 kB)
Collecting blinker==1.7.0 (from -r requirements.txt (line 17))
Using cached blinker-1.7.0-py3-none-any.whl.metadata (1.9 kB)
Collecting bs4==0.0.1 (from -r requirements.txt (line 18))
Using cached bs4-0.0.1.tar.gz (1.1 kB)
Preparing metadata (setup.py) ... done
Collecting build==1.0.3 (from -r requirements.txt (line 19))
Using cached build-1.0.3-py3-none-any.whl.metadata (4.2 kB)
ERROR: Could not find a version that satisfies the requirement bzip2==1.0.8 (from versions: none)
ERROR: No matching distribution found for bzip2==1.0.8

Resolution
(projectenv) user@Nathan:~/LLM-Zero-to-Hundred$ sudo apt-get install libbz2-dev
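The underlying cause is that bzip2 is a conda/system package, not a pip package, so pip can never satisfy that pin. Besides installing the system library via apt as above, the line should be filtered out of the pip requirements. A sketch on a sample file (file names hypothetical):

```shell
# sudo apt-get install -y libbz2-dev   # provides the bz2 library system-wide
printf 'build==1.0.3\nbzip2==1.0.8\n' > req_demo.txt
# Drop the conda-only bzip2 line so pip only sees installable packages:
grep -v '^bzip2' req_demo.txt > req_nobz.txt
# then: pip install -r req_nobz.txt
```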

Any Idea how to create Docker file for RAG-GPT Application

Hi Farzad,

I just wanted to know how to create a Docker image for the RAG-GPT application. I tried building an image, but it gives an error. Any idea how to deploy this app on Azure?

'''
# Use Python runtime as a parent image
FROM python:3.11.9-slim

# Set the working directory inside the container
WORKDIR /app

# Copy the dependencies file to the working directory
COPY requirements.txt .

# Install any dependencies
RUN pip install --no-cache-dir -r requirements.txt --default-timeout=100 future

# Copy the current directory contents into the container at /app
COPY . .

# Change the working directory to /app/src
WORKDIR /app/src

# Make ports available to the world outside this container
EXPOSE 7860
EXPOSE 8000

# Install Supervisor and create directory structure
RUN apt-get update && \
    apt-get install -y supervisor && \
    mkdir -p /app/src/logs

# Copy the supervisord.conf file
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf

# Start the Supervisor service
CMD ["/usr/bin/supervisord", "-c", "/etc/supervisor/conf.d/supervisord.conf"]
'''
# supervisord.conf
'''
[supervisord]
nodaemon=true

[program:serve]
command=python serve.py
autostart=true
autorestart=true
directory=/app/src

user=root ; Change this to the appropriate user if needed

[program:gradio]
command=python raggpt_app.py
autostart=true
autorestart=true
directory=/app/src

user=root ; Change this to the appropriate user if needed
'''

=haa95532_0 and =pypi_0 preventing pip install -r requirements.txt from installing

Ubuntu wsl
Windows 11

If you remove the =haa95532_0 and =pypi_0 build strings that prevent pip install -r requirements.txt from installing, you will get a successful install.
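The stripping described above can be scripted. This sketch (file names hypothetical) converts the pypi_0 lines to pip's == syntax and drops the conda-built lines, whose build strings pip cannot parse; demonstrated on a two-line sample:

```shell
# One pip-installed and one conda-built package, in conda's export format:
printf 'aiofiles=23.2.1=pypi_0\nbzip2=1.0.8=he774522_0\n' > req_conda.txt

# Convert "pkg=1.2.3=pypi_0" to "pkg==1.2.3" and drop conda-built lines,
# whose build strings (he774522_0 etc.) have no pip equivalent:
sed -E 's/^([A-Za-z0-9._-]+)=([^=]+)=pypi_0$/\1==\2/' req_conda.txt \
  | grep -vE '_[0-9]+$' > req_pip.txt
```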

# This file may be used to create an environment using:
# $ conda create --name <env> --file <this file>
# platform: win-64

aiofiles=23.2.1=pypi_0
aiohttp=3.9.1=pypi_0
aiosignal=1.3.1=pypi_0
altair=5.2.0=pypi_0
annotated-types=0.6.0=pypi_0
anyio=3.7.1=pypi_0
asgiref=3.7.2=pypi_0
asyncer=0.0.2=pypi_0
attrs=23.2.0=pypi_0
backoff=2.2.1=pypi_0
bcrypt=4.1.2=pypi_0
beautifulsoup4=4.12.2=pypi_0
bidict=0.22.1=pypi_0
bs4=0.0.1=pypi_0
build=1.0.3=pypi_0
bzip2=1.0.8=he774522_0
ca-certificates=2023.12.12=haa95532_0
cachetools=5.3.2=pypi_0
certifi=2023.11.17=pypi_0
cffi=1.16.0=pypi_0
chainlit=0.7.700=pypi_0
charset-normalizer=3.3.2=pypi_0
chroma-hnswlib=0.7.3=pypi_0
chromadb=0.4.22=pypi_0
click=8.1.7=pypi_0
colorama=0.4.6=pypi_0
coloredlogs=15.0.1=pypi_0
contourpy=1.2.0=pypi_0
curl-cffi=0.5.10=pypi_0
cycler=0.12.1=pypi_0
dataclasses-json=0.5.14=pypi_0
deprecated=1.2.14=pypi_0
duckduckgo-search=4.1.1=pypi_0
fastapi=0.100.1=pypi_0
fastapi-socketio=0.0.10=pypi_0
ffmpy=0.3.1=pypi_0
filelock=3.13.1=pypi_0
filetype=1.2.0=pypi_0
flatbuffers=23.5.26=pypi_0
fonttools=4.47.0=pypi_0
frozenlist=1.4.1=pypi_0
fsspec=2023.12.2=pypi_0
google-auth=2.26.1=pypi_0
googleapis-common-protos=1.62.0=pypi_0
gradio=4.13.0=pypi_0
gradio-client=0.8.0=pypi_0
greenlet=3.0.3=pypi_0
grpcio=1.60.0=pypi_0
h11=0.14.0=pypi_0
httpcore=0.17.3=pypi_0
httptools=0.6.1=pypi_0
httpx=0.24.1=pypi_0
huggingface-hub=0.20.2=pypi_0
humanfriendly=10.0=pypi_0
idna=3.6=pypi_0
importlib-metadata=6.11.0=pypi_0
importlib-resources=6.1.1=pypi_0
jinja2=3.1.2=pypi_0
jsonpatch=1.33=pypi_0
jsonpointer=2.4=pypi_0
jsonschema=4.20.0=pypi_0
jsonschema-specifications=2023.12.1=pypi_0
kiwisolver=1.4.5=pypi_0
kubernetes=28.1.0=pypi_0
langchain=0.0.354=pypi_0
langchain-community=0.0.8=pypi_0
langchain-core=0.1.6=pypi_0
langsmith=0.0.77=pypi_0
lazify=0.4.0=pypi_0
libffi=3.4.4=hd77b12b_0
lxml=5.0.0=pypi_0
markdown-it-py=3.0.0=pypi_0
markupsafe=2.1.3=pypi_0
marshmallow=3.20.1=pypi_0
matplotlib=3.8.2=pypi_0
mdurl=0.1.2=pypi_0
mmh3=4.0.1=pypi_0
monotonic=1.6=pypi_0
mpmath=1.3.0=pypi_0
multidict=6.0.4=pypi_0
mypy-extensions=1.0.0=pypi_0
nest-asyncio=1.5.8=pypi_0
numpy=1.26.3=pypi_0
oauthlib=3.2.2=pypi_0
onnxruntime=1.16.3=pypi_0
openai=0.28.0=pypi_0
openssl=3.0.12=h2bbff1b_0
opentelemetry-api=1.22.0=pypi_0
opentelemetry-exporter-otlp=1.22.0=pypi_0
opentelemetry-exporter-otlp-proto-common=1.22.0=pypi_0
opentelemetry-exporter-otlp-proto-grpc=1.22.0=pypi_0
opentelemetry-exporter-otlp-proto-http=1.22.0=pypi_0
opentelemetry-instrumentation=0.43b0=pypi_0
opentelemetry-instrumentation-asgi=0.43b0=pypi_0
opentelemetry-instrumentation-fastapi=0.43b0=pypi_0
opentelemetry-proto=1.22.0=pypi_0
opentelemetry-sdk=1.22.0=pypi_0
opentelemetry-semantic-conventions=0.43b0=pypi_0
opentelemetry-util-http=0.43b0=pypi_0
orjson=3.9.10=pypi_0
overrides=7.4.0=pypi_0
packaging=23.2=pypi_0
pandas=2.1.4=pypi_0
pillow=10.2.0=pypi_0
pip=23.3.1=py311haa95532_0
posthog=3.1.0=pypi_0
protobuf=4.25.1=pypi_0
pulsar-client=3.4.0=pypi_0
pyasn1=0.5.1=pypi_0
pyasn1-modules=0.3.0=pypi_0
pycparser=2.21=pypi_0
pydantic=2.5.1=pypi_0
pydantic-core=2.14.3=pypi_0
pydub=0.25.1=pypi_0
pygments=2.17.2=pypi_0
pyjwt=2.8.0=pypi_0
pyparsing=3.1.1=pypi_0
pypdf=3.17.4=pypi_0
pypika=0.48.9=pypi_0
pyproject-hooks=1.0.0=pypi_0
pyprojroot=0.3.0=pypi_0
pyreadline3=3.4.1=pypi_0
python=3.11.5=he1021f5_0
python-dateutil=2.8.2=pypi_0
python-dotenv=1.0.0=pypi_0
python-engineio=4.8.1=pypi_0
python-graphql-client=0.4.3=pypi_0
python-multipart=0.0.6=pypi_0
python-socketio=5.10.0=pypi_0
pytz=2023.3.post1=pypi_0
pyyaml=6.0.1=pypi_0
referencing=0.32.0=pypi_0
regex=2023.12.25=pypi_0
requests=2.31.0=pypi_0
requests-oauthlib=1.3.1=pypi_0
rich=13.7.0=pypi_0
rpds-py=0.16.2=pypi_0
rsa=4.9=pypi_0
semantic-version=2.10.0=pypi_0
setuptools=68.2.2=py311haa95532_0
shellingham=1.5.4=pypi_0
simple-websocket=1.0.0=pypi_0
six=1.16.0=pypi_0
sniffio=1.3.0=pypi_0
soupsieve=2.5=pypi_0
sqlalchemy=2.0.25=pypi_0
sqlite=3.41.2=h2bbff1b_0
starlette=0.27.0=pypi_0
sympy=1.12=pypi_0
syncer=2.0.3=pypi_0
tenacity=8.2.3=pypi_0
tiktoken=0.5.2=pypi_0
tk=8.6.12=h2bbff1b_0
tokenizers=0.15.0=pypi_0
tomli=2.0.1=pypi_0
tomlkit=0.12.0=pypi_0
toolz=0.12.0=pypi_0
tqdm=4.66.1=pypi_0
typer=0.9.0=pypi_0
typing-extensions=4.9.0=pypi_0
typing-inspect=0.9.0=pypi_0
tzdata=2023.4=pypi_0
uptrace=1.22.0=pypi_0
urllib3=1.26.18=pypi_0
uvicorn=0.23.2=pypi_0
vc=14.2=h21ff451_1
vs2015_runtime=14.27.29016=h5e58377_2
watchfiles=0.20.0=pypi_0
websocket-client=1.7.0=pypi_0
websockets=11.0.3=pypi_0
wheel=0.41.2=py311haa95532_0
wrapt=1.16.0=pypi_0
wsproto=1.2.0=pypi_0
xz=5.4.5=h8cc25b3_0
yarl=1.9.4=pypi_0
zipp=3.17.0=pypi_0
zlib=1.2.13=h8cc25b3_0

ERROR: Invalid requirement: 'aiofiles=23.2.1' (from line 4 of requirements.txt) Hint: = is not a valid operator. Did you mean == ?

Ubuntu wsl
Windows 11

Need to update = to == to get a clean install.
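The single = below is conda's export syntax; pip's equality operator is ==. The conversion is a one-line sed (file names hypothetical), demonstrated on a sample:

```shell
printf 'aiofiles=23.2.1\naiohttp=3.9.1\n' > req_eq.txt
# Replace the first "=" on each line with "==", pip's pinning operator:
sed 's/=/==/' req_eq.txt > req_fixed.txt
```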

# This file may be used to create an environment using:
# $ conda create --name <env> --file <this file>
# platform: win-64

aiofiles=23.2.1
aiohttp=3.9.1
aiosignal=1.3.1
altair=5.2.0
annotated-types=0.6.0
anyio=3.7.1
asgiref=3.7.2
asyncer=0.0.2
attrs=23.2.0
backoff=2.2.1
bcrypt=4.1.2
beautifulsoup4=4.12.2
bidict=0.22.1
bs4=0.0.1
build=1.0.3
bzip2=1.0.8
ca-certificates=2023.12.12
cachetools=5.3.2
certifi=2023.11.17
cffi=1.16.0
chainlit=0.7.700
charset-normalizer=3.3.2
chroma-hnswlib=0.7.3
chromadb=0.4.22
click=8.1.7
colorama=0.4.6
coloredlogs=15.0.1
contourpy=1.2.0
curl-cffi=0.5.10
cycler=0.12.1
dataclasses-json=0.5.14
deprecated=1.2.14
duckduckgo-search=4.1.1
fastapi=0.100.1
fastapi-socketio=0.0.10
ffmpy=0.3.1
filelock=3.13.1
filetype=1.2.0
flatbuffers=23.5.26
fonttools=4.47.0
frozenlist=1.4.1
fsspec=2023.12.2
google-auth=2.26.1
googleapis-common-protos=1.62.0
gradio=4.13.0
gradio-client=0.8.0
greenlet=3.0.3
grpcio=1.60.0
h11=0.14.0
httpcore=0.17.3
httptools=0.6.1
httpx=0.24.1
huggingface-hub=0.20.2
humanfriendly=10.0
idna=3.6
importlib-metadata=6.11.0
importlib-resources=6.1.1
jinja2=3.1.2
jsonpatch=1.33
jsonpointer=2.4
jsonschema=4.20.0
jsonschema-specifications=2023.12.1
kiwisolver=1.4.5
kubernetes=28.1.0
langchain=0.0.354
langchain-community=0.0.8
langchain-core=0.1.6
langsmith=0.0.77
lazify=0.4.0
libffi=3.4.4
lxml=5.0.0
markdown-it-py=3.0.0
markupsafe=2.1.3
marshmallow=3.20.1
matplotlib=3.8.2
mdurl=0.1.2
mmh3=4.0.1
monotonic=1.6
mpmath=1.3.0
multidict=6.0.4
mypy-extensions=1.0.0
nest-asyncio=1.5.8
numpy=1.26.3
oauthlib=3.2.2
onnxruntime=1.16.3
openai=0.28.0
openssl=3.0.12
opentelemetry-api=1.22.0
opentelemetry-exporter-otlp=1.22.0
opentelemetry-exporter-otlp-proto-common=1.22.0
opentelemetry-exporter-otlp-proto-grpc=1.22.0
opentelemetry-exporter-otlp-proto-http=1.22.0
opentelemetry-instrumentation=0.43b0
opentelemetry-instrumentation-asgi=0.43b0
opentelemetry-instrumentation-fastapi=0.43b0
opentelemetry-proto=1.22.0
opentelemetry-sdk=1.22.0
opentelemetry-semantic-conventions=0.43b0
opentelemetry-util-http=0.43b0
orjson=3.9.10
overrides=7.4.0
packaging=23.2
pandas=2.1.4
pillow=10.2.0
pip=23.3.1
posthog=3.1.0
protobuf=4.25.1
pulsar-client=3.4.0
pyasn1=0.5.1
pyasn1-modules=0.3.0
pycparser=2.21
pydantic=2.5.1
pydantic-core=2.14.3
pydub=0.25.1
pygments=2.17.2
pyjwt=2.8.0
pyparsing=3.1.1
pypdf=3.17.4
pypika=0.48.9
pyproject-hooks=1.0.0
pyprojroot=0.3.0
pyreadline3=3.4.1
python=3.11.5
python-dateutil=2.8.2
python-dotenv=1.0.0
python-engineio=4.8.1
python-graphql-client=0.4.3
python-multipart=0.0.6
python-socketio=5.10.0
pytz=2023.3.post1
pyyaml=6.0.1
referencing=0.32.0
regex=2023.12.25
requests=2.31.0
requests-oauthlib=1.3.1
rich=13.7.0
rpds-py=0.16.2
rsa=4.9
semantic-version=2.10.0
setuptools=68.2.2
shellingham=1.5.4
simple-websocket=1.0.0
six=1.16.0
sniffio=1.3.0
soupsieve=2.5
sqlalchemy=2.0.25
sqlite=3.41.2
starlette=0.27.0
sympy=1.12
syncer=2.0.3
tenacity=8.2.3
tiktoken=0.5.2
tk=8.6.12
tokenizers=0.15.0
tomli=2.0.1
tomlkit=0.12.0
toolz=0.12.0
tqdm=4.66.1
typer=0.9.0
typing-extensions=4.9.0
typing-inspect=0.9.0
tzdata=2023.4
uptrace=1.22.0
urllib3=1.26.18
uvicorn=0.23.2
vc=14.2=h21ff451_1
vs2015_runtime=14.27.29016
watchfiles=0.20.0
websocket-client=1.7.0
websockets=11.0.3
wheel=0.41.2
wrapt=1.16.0
wsproto=1.2.0
xz=5.4.5
yarl=1.9.4
zipp=3.17.0
zlib=1.2.13

Input should be a dictionary or an instance of MultiModalFeature [type=dataclass_type, input_value=True, input_type=bool]

Hi, I am getting an error while running WebRAGQuery.
Can you help me with it?
This is the error I am getting:

Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "C:\Users\NBK COMPUTER\Downloads\LLM-Zero-to-Hundred-master\LLM-Zero-to-Hundred-master\WebRAGQuery\venv\Scripts\chainlit.exe\__main__.py", line 4, in <module>
File "C:\Users\NBK COMPUTER\Downloads\LLM-Zero-to-Hundred-master\LLM-Zero-to-Hundred-master\WebRAGQuery\venv\Lib\site-packages\chainlit\__init__.py", line 24, in <module>
from chainlit.action import Action
File "C:\Users\NBK COMPUTER\Downloads\LLM-Zero-to-Hundred-master\LLM-Zero-to-Hundred-master\WebRAGQuery\venv\Lib\site-packages\chainlit\action.py", line 5, in <module>
from chainlit.telemetry import trace_event
File "C:\Users\NBK COMPUTER\Downloads\LLM-Zero-to-Hundred-master\LLM-Zero-to-Hundred-master\WebRAGQuery\venv\Lib\site-packages\chainlit\telemetry.py", line 12, in <module>
from chainlit.config import config
File "C:\Users\NBK COMPUTER\Downloads\LLM-Zero-to-Hundred-master\LLM-Zero-to-Hundred-master\WebRAGQuery\venv\Lib\site-packages\chainlit\config.py", line 469, in <module>
config = load_config()
^^^^^^^^^^^^^
File "C:\Users\NBK COMPUTER\Downloads\LLM-Zero-to-Hundred-master\LLM-Zero-to-Hundred-master\WebRAGQuery\venv\Lib\site-packages\chainlit\config.py", line 438, in load_config
settings = load_settings()
^^^^^^^^^^^^^^^
File "C:\Users\NBK COMPUTER\Downloads\LLM-Zero-to-Hundred-master\LLM-Zero-to-Hundred-master\WebRAGQuery\venv\Lib\site-packages\chainlit\config.py", line 408, in load_settings
features_settings = FeaturesSettings(multi_modal=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\NBK COMPUTER\Downloads\LLM-Zero-to-Hundred-master\LLM-Zero-to-Hundred-master\WebRAGQuery\venv\Lib\site-packages\pydantic\_internal\_dataclasses.py", line 140, in __init__
s.__pydantic_validator__.validate_python(ArgsKwargs(args, kwargs), self_instance=s)
pydantic_core._pydantic_core.ValidationError: 1 validation error for FeaturesSettings
multi_modal
Input should be a dictionary or an instance of MultiModalFeature [type=dataclass_type, input_value=True, input_type=bool]
For further information visit https://errors.pydantic.dev/2.7/v/dataclass_type
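The traceback shows chainlit's own config loader passing multi_modal=True where the installed chainlit expects a MultiModalFeature dataclass, i.e. the installed chainlit is newer than the .chainlit/config.toml it is reading. Pinning chainlit==0.7.700 per the requirements is the safer route; one guess at a config-side fix (the table name and field are assumptions, not a verified chainlit schema) is to express the feature as a table in .chainlit/config.toml:

```toml
[features]
# old form that newer chainlit rejects:
# multi_modal = true

# assumed newer form: a sub-table rather than a bare boolean
[features.multi_modal]
enabled = true
```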

RAG-GPT: Number of vectors in vectordb: 0

Hello,
Running "upload_data_manually.py" does not create vectors in the vectordb. I have successfully connected the model to Azure, and the chatbot works fine based on the pretrained model; however, I cannot upload my own data.

Two Sessions not working independently

Hi All,

I tried opening RAG-GPT in two browser windows, one in normal Chrome and the other in incognito mode, and uploaded two different files, one per window:
The 1st document was about PyMC MMM.
The 2nd was regarding conjoint analysis.

Then I queried "summarize this document" and both windows got the same output instead of independent outputs.

Please find the response below:

Based on the retrieved content, the document "Lab 93 - Bayesian MMM in Python Special Guests_ PyMC Labs.pdf" seems to be a tutorial or presentation on Bayesian Marketing Mix Modelling (MMM) in Python. It includes sections on making an exploratory report and creating line plots, which are likely part of the data analysis process in MMM.

The document "Conjoint tutorial I.pdf" appears to be a guide on conjoint analysis, a popular market research technique. It provides tips for planning ahead, including entering profiles, reproducing examples, reading appendices on deriving insights, computing WTPs (Willingness To Pay), and using the profit simulation spreadsheet. It also advises reading about the good and bad practices of conjoint analysis before optimization.

Please specify which document you want to know more about.

Why is it showing me both documents even though they are open in two separate tabs?

FYI, I only ran the raggpt_app.py file.
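A likely cause (an assumption, not confirmed from the repo's code) is that the vectordb and retrieved chunks live in module-level state shared by every Gradio connection, so the second upload merges with or overwrites the first. The sketch below illustrates the per-session alternative in plain Python; in Gradio itself the equivalent is keeping the store in a gr.State value, which is kept separate per browser session, rather than in a global:

```python
# Hypothetical sketch: one document store per session id instead of a single
# module-level store that every browser tab writes into.
session_stores = {}

def upload_for_session(session_id, doc_name):
    # Each session gets its own "vectordb" (a placeholder string here).
    session_stores[session_id] = f"vectordb({doc_name})"

def answer_for_session(session_id):
    return session_stores.get(session_id, "no document uploaded")

upload_for_session("tab-1", "PyMC MMM")
upload_for_session("tab-2", "conjoint")
```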
