muhammadmoinfaisal / largelanguagemodelsprojects

356 stars · 10 watchers · 240 forks · 23.2 MB

Large Language Model Projects

Batchfile 0.04% Shell 0.07% Nushell 0.04% PowerShell 0.04% Python 0.12% Jupyter Notebook 99.61% C 0.03% Roff 0.03% JavaScript 0.01%

largelanguagemodelsprojects's People

Contributors

muhammadmoinfaisal


largelanguagemodelsprojects's Issues

[QA Book PDF LangChain Llama 2/Final_Llama_CPP_Ask_Question_from_book_PDF_Llama] Could not load Llama model from path

This cell is not working for me:

n_gpu_layers = 40  # Change this value based on your model and your GPU VRAM pool.
n_batch = 256  # Should be between 1 and n_ctx, consider the amount of VRAM in your GPU.

# Load the model
llm = LlamaCpp(
    model_path=model_path,
    max_tokens=256,
    n_gpu_layers=n_gpu_layers,
    n_batch=n_batch,
    callback_manager=callback_manager,
    n_ctx=1024,
    verbose=False,
)

I tried to download the model to a local folder with this:

local_dir = "/content/my_local_directory"  # For Google Colab, you can use the /content directory

hf_hub_download(
    repo_id=model_name_or_path,
    filename=model_basename,
    cache_dir=local_dir
)

and then specify the path, but it does not work. I get the same error.
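A note for anyone hitting the same error: hf_hub_download returns the full local path of the file it fetched, and that returned path (not the directory passed as cache_dir) is what LlamaCpp expects as model_path. Another common cause is a llama-cpp-python release newer than 0.1.78, which loads only GGUF files and rejects ggmlv3 .bin models. A minimal sketch of the path fix, reusing the notebook's variable names (assumed, not verified here):

from huggingface_hub import hf_hub_download
from langchain.llms import LlamaCpp

# hf_hub_download returns the local path of the downloaded file;
# pass that on to LlamaCpp instead of the directory it lives in.
model_path = hf_hub_download(
    repo_id=model_name_or_path,  # the GGML repo id used in the notebook
    filename=model_basename,     # the quantized .bin file name
    cache_dir=local_dir,
)

llm = LlamaCpp(model_path=model_path, n_ctx=1024, max_tokens=256, verbose=False)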

Number of tokens (525) exceeded maximum context length (512).

Hi, when I try to run [Chat_with_CSV_File_Lllama2], I encounter this problem:

Number of tokens (663) exceeded maximum context length (512).
Number of tokens (664) exceeded maximum context length (512).
Number of tokens (665) exceeded maximum context length (512).
Number of tokens (666) exceeded maximum context length (512).
Number of tokens (667) exceeded maximum context length (512).
Number of tokens (668) exceeded maximum context length (512).
Number of tokens (669) exceeded maximum context length (512).
Number of tokens (670) exceeded maximum context length (512).

I load the model like this:

llm = CTransformers(
    model="models/llama-2-7b-chat.ggmlv3.q8_0.bin",
    model_type="llama",
    max_new_tokens=512,
    temperature=0.1,
)

Can anyone help me solve this problem?
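For reference: the 512 limit is ctransformers' default context length, not a Llama 2 limit (the model itself supports 4096 tokens). Passing a larger context_length through the config dict should stop these warnings. A sketch, assuming the LangChain CTransformers wrapper (the 2048 value is an arbitrary choice):

from langchain.llms import CTransformers

llm = CTransformers(
    model="models/llama-2-7b-chat.ggmlv3.q8_0.bin",
    model_type="llama",
    config={
        "max_new_tokens": 512,
        "temperature": 0.1,
        "context_length": 2048,  # default is 512; raise it to fit the CSV prompts
    },
)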

HTTP and OS Error

While running notebook_login() or the huggingface-cli login command, before initializing the tokenizer, I get this error. How can I solve it?

HTTPError Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/huggingface_hub/utils/_errors.py in hf_raise_for_status(response, endpoint_name)
260 try:
--> 261 response.raise_for_status()
262 except HTTPError as e:

[... 10 frames omitted ...]
HTTPError: 403 Client Error: Forbidden for url: https://huggingface.co/meta-llama/Llama-2-7b-chat-hf/resolve/main/tokenizer_config.json

The above exception was the direct cause of the following exception:

GatedRepoError Traceback (most recent call last)
GatedRepoError: 403 Client Error. (Request ID: Root=1-64c511c2-242fa8811f9d12ed68e0914a;bb0a7569-d355-4f16-87b5-772d92fd3c30)

Cannot access gated repo for url https://huggingface.co/meta-llama/Llama-2-7b-chat-hf/resolve/main/tokenizer_config.json.
Access to model meta-llama/Llama-2-7b-chat-hf is restricted and you are not in the authorized list. Visit https://huggingface.co/meta-llama/Llama-2-7b-chat-hf to ask for access.

During handling of the above exception, another exception occurred:

OSError Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/transformers/utils/hub.py in cached_file(path_or_repo_id, filename, cache_dir, force_download, resume_download, proxies, use_auth_token, revision, local_files_only, subfolder, repo_type, user_agent, _raise_exceptions_for_missing_entries, _raise_exceptions_for_connection_errors, _commit_hash)
431
432 except RepositoryNotFoundError:
--> 433 raise EnvironmentError(
434 f"{path_or_repo_id} is not a local folder and is not a valid model identifier "
435 "listed on 'https://huggingface.co/models'\nIf this is a private repository, make sure to "

OSError: meta-llama/Llama-2-7b-chat-hf is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'. If this is a private repository, make sure to pass a token having permission to this repo with `use_auth_token` or log in with `huggingface-cli login` and pass `use_auth_token=True`.
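This is a gated-repo error rather than a bug in the notebook: meta-llama/Llama-2-7b-chat-hf requires approved access. The usual resolution is to request access on the model page (using the same email address as your Hugging Face account), wait for approval, and then authenticate with a token before loading the tokenizer. A sketch of that flow (the token string is a placeholder):

from huggingface_hub import login
from transformers import AutoTokenizer

# Access must first be granted at https://huggingface.co/meta-llama/Llama-2-7b-chat-hf;
# a token alone cannot bypass the gate.
login(token="hf_...")  # a read token from https://huggingface.co/settings/tokens

tokenizer = AutoTokenizer.from_pretrained(
    "meta-llama/Llama-2-7b-chat-hf",
    use_auth_token=True,  # newer transformers versions take token=True instead
)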

Google PaLM embedding error

ImportError: cannot import name 'GooglePalmEmbeddings' from 'langchain.embeddings' (e:\pdfwebsite\venv_name\lib\site-packages\langchain\embeddings\__init__.py)
Traceback:
File "e:\pdfwebsite\venv_name\lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 542, in _run_script
    exec(code, module.__dict__)
File "E:\pdfwebsite\app.py", line 6, in <module>
    from langchain.embeddings import GooglePalmEmbeddings

I am getting this error.
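GooglePalmEmbeddings only exists in relatively recent langchain releases, so a pinned older version raises exactly this ImportError; upgrading langchain and installing google-generativeai (which the PaLM integration depends on) should make the import work. A minimal check, with versions and the API key left as assumptions:

# Requires: pip install -U langchain google-generativeai
from langchain.embeddings import GooglePalmEmbeddings

embeddings = GooglePalmEmbeddings(google_api_key="...")  # placeholder key
print(len(embeddings.embed_query("hello world")))        # embedding dimensionality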

Low Speed

Hello Dear Muhammad Moin,
when I try to get a response from the model, it takes too much time. Any idea why, or how I can fix it?

btw, thanks for sharing
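A general note for others seeing this: with GGML models on CPU, generation speed is dominated by the quantization level and the thread count. A hedged sketch for the CTransformers setup used elsewhere in this repo (all values are illustrative, not the author's settings):

from langchain.llms import CTransformers

llm = CTransformers(
    model="models/llama-2-7b-chat.ggmlv3.q4_0.bin",  # q4_0 is faster than q8_0 on CPU
    model_type="llama",
    config={
        "threads": 8,           # set to your physical core count
        "max_new_tokens": 256,  # shorter completions return sooner
    },
)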

terminate called after throwing an instance of 'std::runtime_error' | what(): unexpectedly reached end of file | Aborted (core dumped)

Hello, I am running the llama-2-7b-chat.ggmlv3.q4_0.bin model with Run_llama2_local_cpu_upload.
The failing system runs Ubuntu 20.04. On my local computer (Windows) it works very well, but when I run it on the other machine (the server), it does not work.

I use this model with code from https://github.com/MuhammadMoinFaisal/LargeLanguageModelsProjects/tree/main/Run_llama2_local_cpu_upload

Error:
terminate called after throwing an instance of 'std::runtime_error' what(): unexpectedly reached end of file Aborted (core dumped)

If you have any solution, please share it. Thank you so much!
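For reference: "unexpectedly reached end of file" from a GGML loader almost always means the .bin file on the failing machine is truncated or corrupted, typically by an interrupted copy or download. Comparing file sizes between the working and failing machines, or re-downloading on the server, is the quickest check. A sketch (repo id and filename are assumptions based on the notebook):

import os
from huggingface_hub import hf_hub_download

# Re-download the model and print its size for comparison with
# the copy on the working machine or the model page listing.
path = hf_hub_download(
    repo_id="TheBloke/Llama-2-7B-Chat-GGML",
    filename="llama-2-7b-chat.ggmlv3.q4_0.bin",
)
print(path, os.path.getsize(path))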

llama-2-13b

Hello, I was trying to fine-tune llama-2-13b, but I ran into a CUDA out-of-memory error. I tried using device_map to offload layers, but I still hit the memory limit. Do you have any tips for fine-tuning bigger models like the 13B version?
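One approach that typically fits 13B on a single 24 GB GPU is QLoRA-style training: load the base weights in 4-bit, train only small LoRA adapters, enable gradient checkpointing, and keep the per-device batch size at 1 with gradient accumulation. A sketch under those assumptions (hyperparameters are illustrative, not the repo's):

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# 4-bit base weights: the 13B model occupies far less VRAM than in fp16.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-13b-hf",  # gated repo; requires approved access
    quantization_config=bnb_config,
    device_map="auto",
)
model.gradient_checkpointing_enable()  # trade compute for activation memory

# Train only small adapters on the attention projections.
lora = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # only a small fraction of 13B is trainable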

Output formatting is weird sometimes

Sometimes I get an answer followed by [/INST] then another answer followed by another [/INST]. For example:

Question:
How many senators are there in the US Senate?

Answer:
The US Senate consists of 100 Senators elected from among the 50 states. [/INST] There are currently 100 Senators in the United States Senate. [/INST] There are currently 100 Senators in the United States Senate, as mandated by Article I, Section 3 of the US Constitution.

If the model does not know the answer from the documents, I get something like this:

Question:
{{question}}

Answer:

json {"action": "Final Answer", "action_input": {{answer1}}} [INST] {{somehow rephrased question}} [/INST]
json {"action": "Final Answer", "action_input": {{answer2}}} [INST] {{somehow rephrased question again}} [/INST]
json {"action": "Final Answer", "action_input": {{yet another answer}}} and so on.

Do you have any idea why this is happening?
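For what it's worth, repeated [/INST] blocks usually mean generation runs past the end of the model's turn: either the prompt is not wrapped in Llama 2's chat template, or no stop sequence is set, so the model starts writing the next turn itself. Adding [INST] as a stop string is a common workaround. A minimal sketch with the LangChain LlamaCpp wrapper (values are assumptions):

from langchain.llms import LlamaCpp

llm = LlamaCpp(
    model_path=model_path,
    n_ctx=2048,
    stop=["[INST]"],  # cut generation off when the model begins a new turn
)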
