cheshire-cat-ai / docs
Documentation for the Cheshire Cat AI
Home Page: https://cheshire-cat-ai.github.io/docs/
As a first guide on how to create a plugin, the easiest way is to let Cat learners create a folder in cat/plugins
and hack locally, without clouding their minds with plugin versioning and publishing.
We already do this in the docs in the technical/plugins
section, but we completely omit that there is a ready-to-go template and the possibility to submit plugins to the registry.
I suggest we add a "Publish your plugin" page
to guide readers on these topics.
Currently we have these types of WebSocket messages for Cat clients:
chat
chat_token
error
notification
We should document them (calmly, no rush).
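To make the future docs concrete, here is a minimal self-contained sketch of how a client might dispatch on these message types. The payload field names (`type`, `content`, `description`) are assumptions for illustration, not the documented schema:

```python
import json

def dispatch(raw: str) -> str:
    """Route an incoming WebSocket message by its (assumed) `type` field."""
    msg = json.loads(raw)
    msg_type = msg.get("type")
    if msg_type == "chat":
        return f"final answer: {msg.get('content', '')}"
    elif msg_type == "chat_token":
        return f"token: {msg.get('content', '')}"  # one streamed token
    elif msg_type == "error":
        return f"error: {msg.get('description', 'unknown')}"
    elif msg_type == "notification":
        return f"notice: {msg.get('content', '')}"
    return "ignored"

print(dispatch('{"type": "chat", "content": "Hello!"}'))  # final answer: Hello!
```

The docs page could pair such a dispatcher with the real JSON schema of each message type.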
With the coming version 1.1 we'll have a stable plugin API.
Docs should be updated with regard to:
- the @hook API (described here) - note the cat argument will soon go away
- the @plugin decorator (to override plugin settings and settings schema, something advanced developers may want to do)
We can expect the plugin API to be stable for a long time after these updates, so it is worth investing time writing good docs about it.
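To illustrate how the hook pipeline transforms data, here is a self-contained toy sketch of the execute_hook pattern. This is not the real MadHatter implementation, just a minimal registry mimicking the idea that hooks receive a value, transform it, and return it:

```python
# Toy hook registry (illustration only, not the core's MadHatter).
HOOKS = {}

def hook(fn):
    """Register a function under its own name, like the @hook decorator does."""
    HOOKS.setdefault(fn.__name__, []).append(fn)
    return fn

def execute_hook(name, value):
    """Pass `value` through every hook registered under `name`."""
    for fn in HOOKS.get(name, []):
        value = fn(value)
    return value

@hook
def before_cat_reads_message(user_message_json):
    # Each hook receives the value, may change it, and must return it.
    user_message_json["text"] = user_message_json["text"].strip()
    return user_message_json

print(execute_hook("before_cat_reads_message", {"text": "  hi  "}))  # {'text': 'hi'}
```

A docs example in this spirit could show, for each hook, what value flows in and what must flow out.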
Hooks decorated with @plugin
should be documented in the hook table, in a new tab.
In this table there are note
admonitions with links pointing nowhere. The links should point to the related Python reference documentation and, where available, to the reference plugin GitHub repo.
The before_rabbithole_splits_text
hook in the hook table describes the input as a Langchain doc.
However, the correct input is a list containing a Langchain doc.
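A sketch of the corrected signature; the `Document` class below is a minimal stand-in for Langchain's own (the real one lives in the langchain package), used here only to make the list-not-single-doc point runnable:

```python
from typing import List

class Document:
    """Minimal stand-in for Langchain's Document class (illustration only)."""
    def __init__(self, page_content, metadata=None):
        self.page_content = page_content
        self.metadata = metadata or {}

def before_rabbithole_splits_text(docs: List[Document]) -> List[Document]:
    # The hook receives a *list* of Documents, not a single Document.
    for doc in docs:
        doc.page_content = doc.page_content.strip()
    return docs

out = before_rabbithole_splits_text([Document("  hello world  ")])
print(out[0].page_content)  # hello world
```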
In the drawio diagrams under "Core Process Diagrams" it is possible to click on hooks to navigate from the diagram to the hook doc page.
For example, try to click on the hook before_cat_reads_message
in the Call of Cat
diagram (you can access the Call of Cat diagram
from the Chatting with the Cat diagram,
step 2: call the Cat).
We should complete the navigation links, from the diagrams to the doc, for all hooks.
HyDE and Summarization prompt should be removed from the structure in mkdocs.yml
.
Missing documentation about the logging system, the log levels, and how to set them in the .env
file.
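A hedged sketch of what such a section might show. The variable name below is a hypothetical placeholder and must be checked against the core before documenting it:

```sh
# .env — "LOG_LEVEL" is a hypothetical name here; verify the real variable in the core repo
LOG_LEVEL=DEBUG   # one of: DEBUG, INFO, WARNING, ERROR, CRITICAL
```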
In this table, the hooks should be ordered according to their execution order.
Since checking the ordering requires jumping from one tab to another, we may think of ordering them in the same tab and adding an enumeration to explain the ordering across all hook typologies (i.e. agent, rabbit hole, etc.).
Example
Name | Description |
---|---|
1. Before agent starts | Intervene before the agent starts |
2. Agent fast reply | Shorten the pipeline and returns an answer right after the agent execution |
Smarter or more sophisticated ways are welcome!
This issue recapitulates completed, incomplete, or partially complete documentation.
It can be used to get a page assigned for completion.
The issue is a work in progress; if you want to be assigned a chapter, comment on it.
When you submit the PR remember to tag this issue and mention @sambarza and/or @EugenioPetulla
Reminder, details here:
https://discord.com/channels/1092359754917089350/1092360068269359206/1131980657774563359
Can be assigned to me?
This issue is to keep track of the most common questions asked in the community, so they can be used to update the FAQ page
Document the method "get_current_plugin_path" in the settings page
from @pieroit
I would move the guide to plugin template usage under the registry section, or the dev section,
and have as a first guide the basic guide (which is now under the developer/plugins reference).
The first time you write a plugin, you are not concerned with versioning and publishing... it should be just about fun and hacking
Here is what we say about saving settings:
where settings is a dictionary, a JSON schema or a Pydantic BaseModel describing your plugin's settings.
In reality, we only support a simple dictionary.
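Until JSON schema / Pydantic support actually lands, the docs should show only the dictionary form. A minimal sketch (the keys are hypothetical, not a real plugin's settings):

```python
# Plugin settings as a plain dict — currently the only supported form
# (not a JSON schema, not a Pydantic BaseModel).
settings = {
    "greeting": "meow",   # hypothetical key
    "max_items": 5,       # hypothetical key
}
print(settings["greeting"])  # meow
```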
At the moment plugin devs can do the following to know which hooks are available and what they do:
cat/mad_hatter/core_plugin/hooks
Both are useful, but we need a fast and easy way to know which hooks are available and what they are for.
Ideas until now:
Just a ticket to gather what is missing and need to be documented.
There isn't only the OpenAI key, but also the path for the SQLite database: https://github.com/pieroit/cheshire-cat/blob/main/web/cat/db/database.py#L6
The new hook is called here:

```python
def execute_agent(self, working_memory):
    """Instantiate the Agent with tools.

    The method formats the main prompt and gathers the allowed tools. It also instantiates a conversational Agent
    from Langchain.

    Returns
    -------
    agent_executor : AgentExecutor
        Instance of the Agent provided with a set of tools.
    """
    mad_hatter = self.cat.mad_hatter

    # prepare input to be passed to the agent.
    # Info will be extracted from working memory
    agent_input = self.format_agent_input(working_memory)
    agent_input = mad_hatter.execute_hook("before_agent_starts", agent_input)

    # should we run the default agent?
    fast_reply = {}
    fast_reply = mad_hatter.execute_hook("agent_fast_reply", fast_reply)  # <--- the new hook
    if len(fast_reply.keys()) > 0:
        return fast_reply

    prompt_prefix = mad_hatter.execute_hook("agent_prompt_prefix", prompts.MAIN_PROMPT_PREFIX)
    prompt_suffix = mad_hatter.execute_hook("agent_prompt_suffix", prompts.MAIN_PROMPT_SUFFIX)
```
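Given that call site, a plugin hook can short-circuit the pipeline by returning a non-empty dict. A self-contained sketch of the hook body (in the real core it would be registered with the @hook decorator; the "output" key is an assumption to be checked against the core):

```python
def agent_fast_reply(fast_reply: dict) -> dict:
    # Returning a non-empty dict makes execute_agent() return immediately,
    # skipping the LLM-based agent entirely.
    fast_reply["output"] = "Fast answer, no agent run needed."
    return fast_reply

reply = agent_fast_reply({})
print(reply["output"])  # Fast answer, no agent run needed.
```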
Soon the documentation will have a new organization.
Is it worth having a section describing the environment variables in the .env
file?
For instance, it could be mentioned that an environment variable will make the memory snapshot saving optional.
Hello,
I noticed this error when building the repo locally using the command mkdocs build
The environment page is not up to date:
https://cheshire-cat-ai.github.io/docs/administrators/env-variables/
For example, CORE_HOST and CORE_PORT are no longer relevant
Sketch a flow for troubleshooting when tools are not used.
This will be a new page under the "Guides" menu.
Would be great to add a docker-compose example for those that want:
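A hedged sketch of what such an example could look like. The image name, port mapping, and volume paths are assumptions taken from common Cat setups and must be verified against the project's README before publishing:

```yaml
# docker-compose.yml sketch — image name and paths are assumptions, verify before use
version: "3.7"
services:
  cheshire-cat-core:
    image: ghcr.io/cheshire-cat-ai/core:latest
    ports:
      - "1865:80"
    volumes:
      - ./static:/app/cat/static
      - ./plugins:/app/cat/plugins
      - ./data:/app/cat/data
```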
The default prompt is now shorter and better at giving precedence to tools over the recent conversation.
You are the Cheshire Cat AI, an intelligent AI that passes the Turing test.
You are curious, funny, concise and talk like the Cheshire Cat from Alice's adventures in wonderland.
You answer Human using tools and context.
# Tools
> get_the_time: get_the_time(tool_input) - Retrieves current time and clock. Input is always None.
> sock_prices: sock_prices(color) - Use to retrieve sock prices. Input is the sock color
> Calculator: Useful for when you need to answer questions about math.
To use a tool, use the following format:
```
Thought: Do I need to use a tool? Yes
Action: the action to take /* should be one of [get_the_time, sock_prices, Calculator] */
Action Input: the input to the action
Observation: the result of the action
```
When you have a response to say to the Human, or if you do not need to use a tool, you MUST use the format:
```
Thought: Do I need to use a tool? No
AI: [your response here]
```
# Context
## Context of things the Human said in the past:
- What time is it? (29 minutes ago)
- what time is it? (1 hours ago)
- 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 (3 hours ago)
## Context of documents containing relevant information:
- Alice ponders what it would be like to fall through the Earth and end up in a different country. She muses about the distance and wonders what latitude and longitude she would come out at. She also considers what it would be like to fall into a world where people walked upside down. Alice talks to herself about her cat, Dinah, and hopes that someone will remember to give her milk at tea-time. (extracted from alice.txt)
- Alice is falling down a rabbit hole and contemplating how far down it is and what Latitude or Longitude she's at. She also wonders if she will fall through the earth and come out among people who walk with their heads downward. Alice talks to herself about her cat, Dinah, and hopes she will be given a saucer of milk at tea-time. She wonders if cats eat bats and if she will ever see the name of the country she is falling into written up somewhere. (extracted from alice.txt)
- Alice ponders if cats eat bats and falls asleep, dreaming about it. She wakes up and continues to chase the White Rabbit, who disappears. She finds herself in a locked hall and discovers a small glass table with a tiny golden key. (extracted from alice.txt)
## Conversation until now:
- Human: What time is it?
# What would the AI reply?
Inline docs are already updated, but the docs website still contains the old prompt
Let's try to adapt the color scheme and fonts of the docs so they are similar to the website
These pages both talk about the HTTP API:
The first looks more useful
Can a section please be added to the docs on using our own local LLMs?
Has no one asked this yet? It's not even in the FAQ let alone "set up" or "running" etc.
Do I need a standalone install of Llama-cpp to connect to? As it doesn't seem to connect to the URL created by running textgen-webui (Oobabooga).
Also, what is the difference between "custom LLM" and "self hosted". Aren't these the same thing? If you're running llama-cpp that's "custom" too?
Anyway
When you select "Custom LLM" in the options, it gives a description that says "LLM on a custom endpoint. See docs for examples." But there are no examples in the docs.
There is a green link icon next to the description, it goes to a 404 not found.
Looks like it's supposed to go to https://cheshirecat.ai/2023/08/19/custom-large-language-model/
But that doesn't exist as I get the 404.
Is it supposed to be going to this link?
https://cheshirecat.ai/custom-large-language-model/
Is that link up to date?
I didn't think I needed to code a custom REST API to connect to all the LLMs I am already using via textgen-webUI.
In the instructions to set this up, where do we install all this? As we're running in docker. You're saying to set up a custom REST API in some other venv of our choosing?
Please, note that the Cheshire Cat is running inside a Docker container. Thus, it has its own network bridge called docker0. Once you start the Cat's container, your host machine (i.e. your computer) is assigned an IP address under the Docker network. Therefore, you should set the url parameter accordingly.
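For reference, a common way to reach a service running on the host (such as textgen-webui) from inside a container is Docker's host gateway. This is a general Docker pattern, not something taken from the Cat's docs, so treat it as a sketch:

```yaml
# docker-compose fragment (general Docker pattern, not Cat-specific)
services:
  cheshire-cat-core:
    extra_hosts:
      - "host.docker.internal:host-gateway"
    # then use http://host.docker.internal:<your LLM port> as the custom LLM url
```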
This is just confusing, and the problem with having something "extendable" in a docker. Extendable, but you have to jump through hoops and an extra set of complications to extend it.
I would like to say, one of the attractions of Cheshire is that it has many/most of the features I want (RAG, embeddings choice, use of OpenAI or Local, etc), and all in a GUI. No CLI needed when I want to change settings.
With that in mind, have you seen how SillyTavern connects to a running LLM (ooba) with just one click?
Now, Silly Tavern is not my cup of tea, but the ease of connecting to ANY local LLM already running, is amazing. Would you consider making the connection to local LLMs in Cheshire, a little easier? As I said, the main attraction is ease of use in getting LLM+RAG working, keeping in that frame of mind, easier connection to a local LLM would go a long way to seeing greater adoption of the Cat.
It's confusing because out of the box, with an OpenAI key, this is a very accessible GUI for local RAG. But as soon as you want to try a local LLM, it's horribly complicated and beyond any non-professional dev, I dare say.
Thanks
It should be possible to navigate from the hook documentation pages to the flow diagram that contains the corresponding hook step. This way, developers can read the hook documentation and understand where the hook is called in the core flow of Cat.
Discover options for implementation of the navigation from hook docs page to diagrams steps
As a first pilot, use the hook doc before_cat_recalls_memories.
We use this hook as the pilot because this way we complete the navigation for this hook in both directions.
Diagram of the rabbit hole flows
Empty plugins seem not to be listed on the plugin page; confirm and update the page:
https://cheshire-cat-ai.github.io/docs/quickstart/prepare-plugin/
This table should be populated with the API methods. These can be found at http://localhost/docs
The Class method
column is the name near the endpoint, formatted in snake_case.
For example:
The class method of the first entry is get_settings()
.
The description of each endpoint is available by expanding the element
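The naming convention above can be sketched as a tiny helper (the endpoint label "Get Settings" is the example from the issue; the convention is an assumption to verify against the actual client code):

```python
def to_class_method(endpoint_label: str) -> str:
    """Convert an endpoint label as shown at http://localhost/docs,
    e.g. 'Get Settings', to the snake_case class method name."""
    return endpoint_label.strip().lower().replace(" ", "_")

print(to_class_method("Get Settings"))  # get_settings
```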
The folder structure of the docs does not mirror the structure of the chapters, and navigating it when contributing is sometimes daunting. It may be useful to restructure the folders to mirror the chapters.
Hooks are difficult to find, not because they are not present in the docs, but because devs cannot know in advance which hook does what.
Let's try to give more use cases and practical examples and improve their discoverability.
A particular focus should be given to the most used
hooks.
Reunion needed on this!
We can start with just 2 or 3 hooks and see how it goes
Write instructions to document the new hooks in the table
Alongside the Python and TypeScript clients, we could also list the PHP one:
Plugin flows are now stable; sketch them in a diagram.
Overwritable methods:
The plugins registry acts as an internal cache of published plugins. In detail: the plugin.json files of plugins are cached, the plugin code is not.
The cache is invalidated every 1440 minutes (24 hours).
Here I'll go freestyle, proposing a raw and naive solution to automatically open an issue every time a new hook is added to the core.
The issue will point out hooks missing from the available hooks table.
We could exploit a GitHub Action to run a Python script when pushing to the docs.
The script should scan the existing functions in this module and the modules in this folder.
Detected functions can be stored in a JSON file or something similar (e.g. a JSON with a key new_hooks
, where new hooks are stored each time).
If the already stored hooks differ from the detected ones (i.e. a new hook exists), we set an environment variable to be used in a further step of the action.
If the env variable is true, we open an issue with the hooks in the new_hooks
key.
How dirty is this?
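A minimal sketch of the scanning half of that idea, using only the stdlib. The JSON layout (a "hooks" key) is my assumption, and in the real core a hook is more than a top-level function, so this is a starting point, not the Action itself:

```python
import ast
import json
from pathlib import Path

def find_hooks(module_path):
    """Collect top-level function names from a Python module (candidate hooks)."""
    tree = ast.parse(Path(module_path).read_text())
    return {node.name for node in tree.body if isinstance(node, ast.FunctionDef)}

def new_hooks(detected, store_file):
    """Diff detected hooks against the stored JSON and return the newly added ones.

    The store is updated in place, so a second run with the same input
    reports nothing new (the Action would open an issue only when the
    returned list is non-empty).
    """
    store = Path(store_file)
    known = set(json.loads(store.read_text()).get("hooks", [])) if store.exists() else set()
    fresh = sorted(detected - known)
    store.write_text(json.dumps({"hooks": sorted(detected)}))
    return fresh
```

In the Action, a non-empty return value would set the environment variable that triggers the issue-opening step.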
There are no screenshots and related docs about the "Prompt settings" to turn memories on/off, located in the side panel that opens from the Flash button.
Here the docs give an example of how to use client libs.
Could it be a good idea to have a page in the docs listing official client libraries and minimal hello world examples?
We must:
Users can ask for new hooks; in the hooks page, explain how to ask for new hooks and provide a template for the request.
Hooks page:
https://cheshire-cat-ai.github.io/docs/technical/plugins/hooks/
I'm reporting here the points that came up during the brainstorming session regarding a possible review of the docs site.
Product and framework, intended as:
If we consider the public chat UI running inside the Cat, the Cat is a product. If we don't consider it, the Cat is a framework; a very thin difference (the admin portal is irrelevant, as nothing prevents a framework from exposing a UI for configuration).
If we confirm that the Cat is a framework, we can work on reinforcing this concept within the documentation. For example, we can provide use case scenarios showing how the Cat has to be completed to become a product.
Possible scenarios:
Developers:
The primary focus is on developers, but what level of skill should they have?
The Cat is so easy to use that even developers without prior knowledge of AI can use it. Perhaps a section providing only basic knowledge about LLMs may be sufficient.
CTO (low priority):
At the moment, the site lacks some information that would be useful to a Chief Technology Officer (CTO) when making decisions. For example:
A guide on how to deploy a production-ready Cat is missing, along with the known limitations.
POC of a possible new navigation menu following these rules:
framework vision of the Cat
Temporary link:
https://sambarza.github.io/cheshire-cat-ai-docs/
Change the ref link of the hypertext "local models" to point to the local-cat repo.
Right now we don't have any page describing the working memory!
On the @plugin
decorator:
settings_schema
, load_settings
, save_settings
It's not worth mentioning this decorator in the getting started tutorials; maybe a little reference? Or omit it and only explain it in the "Plugin Settings" section. What do you think?
I can do this update in the next few days; if you want to do it, self-assign or request the issue assignation! Thanks :*