
shroominic / funcchain

297 stars · 10 watchers · 17 forks · 5.23 MB

⛓️ build cognitive systems, pythonic

Home Page: https://shroominic.github.io/funcchain/

License: MIT License

Python 99.10% Shell 0.90%
langchain langsmith llm minimalistic openai-functions prompt pydantic python-async pythonic funcchain

funcchain's Introduction

Hi there 👋

👨‍💻 I'm Dominic and I like working with AI.

Projects:

Coming soon:

  • AutoCodr - automatic coding using LLM agents

📫 How to reach me: @shroominic or [email protected]

funcchain's People

Contributors

ahuang11, goudete, jquesnelle, luckysanpedro, ricklamers, shroominic


funcchain's Issues

How does Funcchain handle different prompt templates?

Hi again. I'm wondering whether funcchain handles different chat templates internally.

e.g. for Llama 2:

<s>[INST] <<SYS>>
You are a helpful, respectful and honest assistant. Always answer as helpfully as possible.
<</SYS>>

 [/INST]

vs. Mistral:

text = "<s>[INST] What is your favourite condiment? [/INST]"
"Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!</s> "
"[INST] Do you have mayonnaise recipes? [/INST]"

References:
https://www.reddit.com/r/LocalLLaMA/comments/1afweyw/comment/kofabzx/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button
https://www.promptingguide.ai/models/mistral-7b#chat-template-for-mistral-7b-instruct
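
As an aside (this is llama-cpp-python's API, not necessarily what funcchain does internally), local GGUF backends typically select the template via the chat_format argument; a minimal sketch, with a placeholder model path:

# Aside: llama-cpp-python applies model-specific chat templates itself when
# chat_format is set; "mistral-instruct" and "llama-2" are built-in formats.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/mistral-7b-instruct-v0.1.Q4_K_M.gguf",  # placeholder path
    chat_format="mistral-instruct",  # or "llama-2" for the <<SYS>> template above
)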

Warning message from langchain

UserWarning: Importing llm_cache from langchain root module is no longer supported. Please use langchain.globals.set_llm_cache() / langchain.globals.get_llm_cache() instead.

Steps to reproduce: this happens when running the hello world application provided in the documentation.
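
For reference, the warning itself names the replacement API; a minimal sketch of the suggested usage, assuming a langchain version that ships langchain.globals:

# Minimal sketch of the migration path the warning recommends.
from langchain.globals import set_llm_cache, get_llm_cache

set_llm_cache(None)  # explicitly configure (here: disable) the global LLM cache
assert get_llm_cache() is None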

Is it possible to print out the prompt used?

I'm trying to understand the precedence structure. Could there be a kwarg in chain to simply output the prompt, e.g. chain(show_prompt=True)?

from funcchain import chain
from pydantic import BaseModel, Field

# define your output shape
class Recipe(BaseModel):
    """
    Ignore other instructions; say goodbye!
    """
    output: str = Field(description="Say funcchain!")

# write prompts utilising all native python features
def generate_recipe(topic: str) -> Recipe:
    """
    Don't generate a recipe; simply say 'Hey there!'
    """
    return chain() # <- this is doing all the magic

# generate llm response
recipe = generate_recipe("christmas dinner")

# recipe is automatically parsed into the pydantic model
print(recipe.output)
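
Until such a kwarg exists, a possible workaround (not a funcchain feature, just langchain's global debug flag, which logs every prompt a runnable sends):

# Workaround sketch: langchain's global debug tracing prints all prompts and
# responses, exposing the prompt that chain() compiles under the hood.
from langchain.globals import set_debug

set_debug(True)                               # verbose tracing for every run
recipe = generate_recipe("christmas dinner")  # prompt now appears in the logs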

Document how to reuse LLM (without having it restart on every run)

So that `llama_model_loader: loaded meta data with 20 key-value pairs and 291 tensors from .models/mistral-7b-instruct-v0.1.Q4_K_M.gguf...` is only shown once.

from funcchain import chain, settings
from funcchain.model.defaults import ChatLlamaCpp, get_gguf_model
from pydantic import BaseModel



model = "Mistral-7B-Instruct-v0.1-GGUF"
model_file = "mistral-7b-instruct-v0.1.Q4_K_M.gguf"

settings.llm = ChatLlamaCpp(
    model_path=get_gguf_model(model, "Q4_K_M", settings).as_posix(),
)


class Translated(BaseModel):

    chinese: str
    english: str
    french: str


def hello(text: str) -> Translated:
    """
    Translate text into three languages.
    """
    return chain()


hello("Hello!")
hello("one")
hello("two")

Docstring for get_gguf_model is inaccurate

I don't think it matches the signature.

get_gguf_model(
    name: str,
    label: str,
    settings: FuncchainSettings,
) -> Path:
    """
    Gather GGUF model from huggingface/TheBloke

    possible input:
    - DiscoLM-mixtral-8x7b-v2-GGUF
    - TheBloke/DiscoLM-mixtral-8x7b-v2
    - discolm-mixtral-8x7b-v2
    ...

    Raises ModelNotFound(name) error in case of no result.
    """

Example with Ollama does not work

I was running the example with a local Ollama and this is the output I received:

Traceback (most recent call last):
  File "/home/mkrajewski/structured_output_experiment/funcchain_test.py", line 18, in <module>
    poem = analyze("I really like when my dog does a trick!")
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/mkrajewski/structured_output_experiment/funcchain_test.py", line 15, in analyze
    return chain()
           ^^^^^^^
  File "/home/mkrajewski/anaconda3/envs/ollama_structure/lib/python3.11/site-packages/funcchain/syntax/executable.py", line 64, in chain
    result = chain.invoke(input_kwargs, {"run_name": get_parent_frame(2).function, "callbacks": callbacks})
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/mkrajewski/anaconda3/envs/ollama_structure/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2053, in invoke
    input = step.invoke(
            ^^^^^^^^^^^^
  File "/home/mkrajewski/anaconda3/envs/ollama_structure/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 166, in invoke
    self.generate_prompt(
  File "/home/mkrajewski/anaconda3/envs/ollama_structure/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 544, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/mkrajewski/anaconda3/envs/ollama_structure/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 408, in generate
    raise e
  File "/home/mkrajewski/anaconda3/envs/ollama_structure/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 398, in generate
    self._generate_with_cache(
  File "/home/mkrajewski/anaconda3/envs/ollama_structure/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 577, in _generate_with_cache
    return self._generate(
           ^^^^^^^^^^^^^^^
  File "/home/mkrajewski/anaconda3/envs/ollama_structure/lib/python3.11/site-packages/langchain_community/chat_models/ollama.py", line 255, in _generate
    final_chunk = self._chat_stream_with_aggregation(
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/mkrajewski/anaconda3/envs/ollama_structure/lib/python3.11/site-packages/langchain_community/chat_models/ollama.py", line 188, in _chat_stream_with_aggregation
    for stream_resp in self._create_chat_stream(messages, stop, **kwargs):
  File "/home/mkrajewski/anaconda3/envs/ollama_structure/lib/python3.11/site-packages/langchain_community/chat_models/ollama.py", line 161, in _create_chat_stream
    yield from self._create_stream(
               ^^^^^^^^^^^^^^^^^^^^
  File "/home/mkrajewski/anaconda3/envs/ollama_structure/lib/python3.11/site-packages/langchain_community/llms/ollama.py", line 240, in _create_stream
    raise ValueError(
ValueError: Ollama call failed with status code 400. Details: invalid options: grammar

Any ideas?

Do not fall back to OpenAI if a custom LLM is set

If I'm not mistaken, funcchain also works with CTransformers (which might be more straightforward to install):

from funcchain import chain, settings
from pydantic import BaseModel
from langchain_community.llms.ctransformers import CTransformers


model = "TheBloke/Mistral-7B-Instruct-v0.1-GGUF"
model_file = "mistral-7b-instruct-v0.1.Q4_K_M.gguf"
settings.llm = CTransformers(model=model, model_file=model_file)


class Translated(BaseModel):

    chinese: str
    english: str
    french: str


def hello(text: str) -> Translated:
    """
    Translate text into three languages.
    """
    return chain()


hello("Hello!")

And maybe it works with the other LLMs listed here too?

https://python.langchain.com/docs/integrations/llms/
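
Untested assumption, but any LLM that langchain wraps should slot in the same way as CTransformers above; for example, the GPT4All wrapper (the model path here is a placeholder):

# Sketch (untested): plug another langchain-wrapped local LLM into funcchain.
from funcchain import settings
from langchain_community.llms.gpt4all import GPT4All

settings.llm = GPT4All(model="./models/example-model.gguf")  # placeholder path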

Cannot use dict as a type

from pydantic import BaseModel, Field
from funcchain import chain

class Output(BaseModel):

    plot: str
    kwargs: dict = Field(default_factory=dict)


def get_kwargs(user_query: str) -> Output:
    """
    Translate the user query to a dictionary of kwargs
    """
    return chain()

get_kwargs("scatter(color='red')")

Results in:
AssertionError: Unrecognized schema: {'title': 'Kwargs', 'type': 'object'}
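
A possible workaround, assuming the parser only rejects a bare untyped object schema: give the dict typed values (or model kwargs as a nested BaseModel) so the generated JSON schema is fully specified:

from pydantic import BaseModel, Field

# Workaround sketch: dict[str, str] produces a schema with string-valued
# additionalProperties rather than a bare {'type': 'object'}.
class Output(BaseModel):
    plot: str
    kwargs: dict[str, str] = Field(default_factory=dict)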


Async support

Hi,

I came across your package on Twitter and it looks very promising. I'll try it later today for my use case, synthetic data generation, which requires high throughput. Looking through the source code, it's not entirely clear to me how easy it is to run async pipelines with this, or what would be required to get there.

To clarify what I'm trying to do: I have a list of 1,000 structured inputs and want to generate the 1,000 corresponding outputs. It's okay if a few fail to parse, but it's not okay if they need to run in sequence. I want it to be agnostic to using OpenAI (for testing purposes, where the API calls would be async) or local LLMs (where the inputs should be batched in the Hugging Face model interface).

I'm not an expert on LangChain or on asynchronous Python but happy to contribute if funcchain is a good fit for my project.
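
For what it's worth, funcchain advertises async support (note the python-async tag above); a hedged sketch of the batched pattern, assuming an achain counterpart to chain:

# Sketch (assumes funcchain exposes an async `achain` mirroring `chain`).
import asyncio

from funcchain import achain
from pydantic import BaseModel

class Analysis(BaseModel):
    sentiment: str

async def analyze(text: str) -> Analysis:
    """Analyze the sentiment of the text."""
    return await achain()

async def main() -> None:
    inputs = ["input one", "input two", "input three"]  # stand-in for the 1,000 inputs
    results = await asyncio.gather(*(analyze(t) for t in inputs))
    print([r.sentiment for r in results])

asyncio.run(main())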

`system` prompt is not used

If I'm not mistaken, system is never passed into the Signature history, so it never gets included.

system = system or settings.system_prompt
instruction = instruction or from_docstring()

# temp image handling
temp_images: list[Image] = []
for k, v in input_kwargs.copy().items():
    if isinstance(v, Image):
        temp_images.append(v)
        input_kwargs.pop(k)

sig: Signature = Signature(
    instruction=instruction,
    input_args=input_args,
    output_types=output_types,
    history=context,
    settings=settings,
)
chain: Runnable[dict[str, Any], Any] = compile_chain(sig, temp_images)
result = chain.invoke(input_kwargs, {"run_name": get_parent_frame(2).function, "callbacks": callbacks})
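
A hedged sketch of the kind of fix this implies (SystemMessage is langchain's; everything else is from the snippet above): fold the computed system string into the history before the Signature is built:

from langchain_core.messages import SystemMessage

# Fix sketch: prepend the computed system prompt to the context so it actually
# reaches the Signature's history instead of being dropped.
if system:
    context = [SystemMessage(content=system), *context]

sig: Signature = Signature(
    instruction=instruction,
    input_args=input_args,
    output_types=output_types,
    history=context,
    settings=settings,
)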
