
loom's Introduction

This is an experimental tree-based writing interface for GPT-3. The code is under active development and is thus unstable and poorly documented.

Features

  • Read mode

    • Linear story view
    • Tree nav bar
    • Edit mode
  • Tree view

    • Explore tree visually with mouse
    • Expand and collapse nodes
    • Change tree topology
    • Edit nodes in place
  • Navigation

    • Hotkeys
    • Bookmarks
    • Chapters
    • 'Visited' state
  • Generation

    • Generate N children with GPT-3
    • Modify generation settings
    • Change hidden memory on a node-by-node basis
  • File I/O

    • Open/save trees as JSON files
    • Work with trees in multiple tabs
    • Combine trees

Demo

ooo what features! wow so cool

Block multiverse mode

Read this for a conceptual explanation of the block multiverse interface and a demo video

How to use in loom

  1. Click Wavefunction button on bottom bar. This will open the block multiverse interface in the right sidebar (drag to resize).
  2. Write initial prompt in the main textbox.
  3. [Optional] Write ground truth continuation in the gray entry box at the bottom of the block multiverse interface. Blocks in ground truth trajectory will be colored black.
  4. Set model and params in top bar.
  5. Click Propagate to plot the block multiverse.
  6. Click on any of the blocks to zoom ("renormalize") to that block.
  7. Click Propagate again to plot the future block multiverse starting from the renormalized frame.
  8. Click Reset zoom to reset the zoom level to its initial position.
  9. Click Clear to clear the block multiverse plot. Do this before generating a new block multiverse.
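Conceptually, each Propagate step expands the most probable next tokens recursively, producing a tree of blocks weighted by probability. A minimal sketch of that recursion, with a hypothetical `top_tokens` callback standing in for a real model API call (this is an illustration of the idea, not loom's actual implementation):

```python
import math

def greedy_multiverse(prompt, top_tokens, max_depth, min_prob=0.01):
    """Recursively expand the most probable tokens into a tree.

    top_tokens(prompt) -> {token: logprob} for the next position.
    Returns {token: {'prob': p, 'children': {...}}}.
    """
    if max_depth == 0:
        return {}
    tree = {}
    for token, logprob in top_tokens(prompt).items():
        prob = math.exp(logprob)
        if prob < min_prob:
            continue  # prune branches below the probability cutoff
        tree[token] = {
            'prob': prob,
            'children': greedy_multiverse(prompt + token, top_tokens,
                                          max_depth - 1, min_prob),
        }
    return tree

# Stub model: always proposes the same two continuations.
def stub_top_tokens(prompt):
    return {' a': math.log(0.6), ' b': math.log(0.3)}

tree = greedy_multiverse('Once', stub_top_tokens, max_depth=2)
```

Zooming ("renormalizing") to a block then corresponds to restarting this recursion with that block's extended prompt as the new root.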

Hotkeys

Alt hotkeys correspond to Command on Mac

File

Open: o, Control-o

Import JSON as subtree: Control-Shift-O

Save: s, Control-s

Dialogs

Change chapter: Control-y

Preferences: Control-p

Generation Settings: Control-Shift-P

Visualization Settings: Control-u

Multimedia dialog: u

Tree Info: Control-i

Node Metadata: Control-Shift-N

Run Code: Control-Shift-B

Mode / display

Toggle edit / save edits: e, Control-e

Toggle story textbox editable: Control-Shift-e

Toggle visualize: j, Control-j

Toggle bottom pane: Tab

Toggle side pane: Alt-p

Toggle show children: Alt-c

Hoist: Alt-h

Unhoist: Alt-Shift-h

Navigate

Click to go to node: Control-Shift-click

Next: period, Return, Control-period

Prev: comma, Control-comma

Go to child: Right, Control-Right

Go to next sibling: Down, Control-Down

Go to parent: Left, Control-Left

Go to previous sibling: Up, Control-Up

Return to root: r, Control-r

Walk: w, Control-w

Go to checkpoint: t

Save checkpoint: Control-t

Go to next bookmark: d, Control-d

Go to prev bookmark: a, Control-a

Search ancestry: Control-f

Search tree: Control-Shift-f

Click to split node: Control-Alt-click

Go to node by id: Control-Shift-g

Organization

Toggle bookmark: b, Control-b

Toggle archive node: !

Generation and memory

Generate: g, Control-g

Inline generate: Alt-i

Add memory: Control-m

View current AI memory: Control-Shift-m

View node memory: Alt-m

Edit topology

Delete: BackSpace, Control-BackSpace

Merge with Parent: Shift-Left

Merge with children: Shift-Right

Move node up: Shift-Up

Move node down: Shift-Down

Change parent: Shift-P

New root child: Control-Shift-h

New Child: h, Control-h, Alt-Right

New Parent: Alt-Left

New Sibling: Alt-Down

Edit text

Toggle edit / save edits: Control-e

Save edits as new sibling: Alt-e

Click to edit history: Control-click

Click to select token: Alt-click

Next counterfactual token: Alt-period

Previous counterfactual token: Alt-comma

Apply counterfactual changes: Alt-return

Enter text: Control-bar

Escape textbox: Escape

Prepend newline: n, Control-n

Prepend space: Control-Space

Collapse / expand

Collapse all except subtree: Control-colon

Collapse node: Control-question

Collapse subtree: Control-minus

Expand children: Control-quotedbl

Expand subtree: Control-plus

View

Center view: l, Control-l

Reset zoom: Control-0

Instructions

Linux

  1. Make sure you have tkinter installed

    sudo apt-get install python3-tk

  2. Set up your Python env (should be >= 3.9.13)

    python3 -m venv env
    source env/bin/activate

  3. Install requirements

    pip install -r requirements.txt

  4. [Optional] Set the OPENAI_API_KEY, GOOSEAI_API_KEY, and AI21_API_KEY environment variables (you can also use the settings options)

    export OPENAI_API_KEY={your api key}

  5. Run main.py

  6. Load a json tree

  7. Read :)

Mac

  1. conda create -n pyloom python=3.10
  2. conda activate pyloom
  3. pip install -r requirements-mac.txt
  4. set the OPENAI_API_KEY env variable
  5. python main.py

Docker

(Only tested on Linux.)

  1. [Optional] Edit the Makefile with your API keys (you can also use the settings options)

  2. Run the make targets

    make build
    make run

  3. Load a json tree

  4. Read :)

Local Inference with llama-cpp-python

llama.cpp lets you run models locally and is especially useful for running models on Mac. llama-cpp-python (https://github.com/abetlen/llama-cpp-python) provides easy installation and a convenient API.

Setup

  1. conda create -n llama-cpp-local python=3.10; conda activate llama-cpp-local
  2. Set your preferred backend before installing llama-cpp-python, as per the llama-cpp-python installation instructions. For instance, to run inference on MPS: CMAKE_ARGS="-DLLAMA_METAL=on"
  3. pip install 'llama-cpp-python[server]'
  4. pip install huggingface-hub
  5. Now you can run the server with whatever .gguf model you want from Huggingface, e.g.: python3 -m llama_cpp.server --hf_model_repo_id NousResearch/Meta-Llama-3-8B-GGUF --model 'Meta-Llama-3-8B-Q4_5_M.gguf' --port 8009

Inference

  1. conda activate llama-cpp-local and start your llama-cpp-python server.
  2. In a new terminal window, activate your pyloom environment and run main.py
  3. Enter configurations for your local model in Settings > Model config > Add model. By default, the llama-cpp-port-8009 model uses the following settings:

    {
        'model': 'Meta-Llama-3-8B-Q4_5_M',
        'type': 'llama-cpp',
        'api_base': 'http://localhost:8009/v1',
    },

loom's People

Contributors

fergusfettes, gcamilo, ksadov, metasemi, njbbaer, nonlinearmoon, socketteer, tel-0s


loom's Issues

Error handling for "response" in multiverse

Adding a try-except on this line, logging that the error condition was reached, might be useful.

I can't repro it; I think I switched models and hit a race condition or something, but here's the traceback in case someone else has this issue:

Traceback (most recent call last):
  File "/usr/lib/python3.9/tkinter/__init__.py", line 1892, in __call__
    return self.func(*args)
  File "/home/uk000/gh/loom/components/modules.py", line 1999, in propagate
    multiverse, ground_truth, prompt = self.state.generate_greedy_multiverse(max_depth=self.max_depth.get(),
  File "/home/uk000/gh/loom/model.py", line 2471, in generate_greedy_multiverse
    multiverse, ground_truth = greedy_word_multiverse(prompt=prompt, ground_truth=ground_truth, max_depth=max_depth,
  File "/home/uk000/gh/loom/util/multiverse_util.py", line 41, in greedy_word_multiverse
    token[1]['children'], _ = greedy_word_multiverse(prompt + token[0], ground_truth='', max_depth=max_depth-1,
  File "/home/uk000/gh/loom/util/multiverse_util.py", line 41, in greedy_word_multiverse
    token[1]['children'], _ = greedy_word_multiverse(prompt + token[0], ground_truth='', max_depth=max_depth-1,
  File "/home/uk000/gh/loom/util/multiverse_util.py", line 34, in greedy_word_multiverse
    logprobs = response.choices[0]["logprobs"]["top_logprobs"][0]
IndexError: list index out of range
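A sketch of the suggested guard, following the names in the traceback (the actual code in `util/multiverse_util.py` may differ; `FakeResponse` is a stand-in for the API response object):

```python
import logging

def safe_top_logprobs(response):
    """Return the top-logprobs dict for the first position, or None when
    the response has no choices or malformed logprobs (e.g. after a
    model switch mid-request)."""
    try:
        return response.choices[0]["logprobs"]["top_logprobs"][0]
    except (IndexError, KeyError, TypeError):
        logging.warning("empty or malformed logprobs in response; "
                        "stopping propagation for this branch")
        return None

class FakeResponse:
    """Minimal stand-in for an API response object with a .choices list."""
    def __init__(self, choices):
        self.choices = choices
```

The caller would then treat `None` as "stop expanding this branch" instead of crashing the Tkinter callback.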

Api key input?

Hi, I know this is dumb...but

I don't know where to input my API key..

I tried EXPORT=

I can't figure it out...otherwise, looks amazing! Great work.

Also, do I need tensor installed? I noticed the terminal had a warning ⚠️ about it. While the GUI launched, everything looked fine to me. Good work

Node Metadata visualization wrong

I'm trying to regenerate the result shown in the first image in the README, but my visualization result looks wrong.

After generating a child, I went to Info -> Node Metadata on the toolbar. For the two outputs I generated, the first several tokens are totally white, while the rest of the coloring was apparently wrong since it didn't mark whole tokens. I had the same problem across different devices, using different language models. Could this error be fixed?


Deleting nodes breaks Loom

Attempting to delete any node fails and breaks the Loom UI. I can still click around, but the text window does not update.

Note: this is my first time using Loom.

This is the console output when attempting to delete a node:

loom$ python main.py
None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
opening /home/nate/loom/data/GPT_chat.json
saving tree
Exception in Tkinter callback
Traceback (most recent call last):
  File "/usr/lib/python3.8/tkinter/__init__.py", line 1892, in __call__
    return self.func(*args)
  File "/home/nate/loom/controller.py", line 52, in <lambda>
    return lambda event=None, *args, _f=f, **kwargs: _f(*args, **kwargs)
  File "/home/nate/loom/util/util.py", line 355, in f
    return func(*args, **kwargs)
  File "/home/nate/loom/controller.py", line 684, in delete_node
    self.select_node(next_sibling)
  File "/home/nate/loom/util/util.py", line 355, in f
    return func(*args, **kwargs)
  File "/home/nate/loom/controller.py", line 430, in select_node
    self.write_textbox_changes()
  File "/home/nate/loom/util/util.py", line 355, in f
    return func(*args, **kwargs)
  File "/home/nate/loom/controller.py", line 910, in write_textbox_changes
    if self.state.preferences['editable']:
  File "/home/nate/loom/model.py", line 257, in preferences
    return self.state['preferences']
  File "/home/nate/loom/model.py", line 313, in state
    frames = self.accumulate_frames(self.selected_node)
  File "/home/nate/loom/model.py", line 334, in accumulate_frames
    for ancestor in self.ancestry(node):
  File "/home/nate/loom/model.py", line 540, in ancestry
    return node_ancestry(node, self.tree_node_dict)
  File "/home/nate/loom/util/util_tree.py", line 152, in node_ancestry
    while "parent_id" in node:
TypeError: argument of type 'NoneType' is not iterable

Integration with `transformers`

It would be interesting to be able to use loom with open source LLMs such as GPT-Neo-X, FLAN-UL2, and LLaMA. The transformers library by Huggingface has support for almost every open source LLM through a standardized interface.

One approach to accomplish this could be direct integration. Another approach, to keep the loom client thin, could be to develop (maybe this already exists?) a shim that adapts the OpenAI API shape to a transformers backend.
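For the shim approach, most of the translation layer is mapping OpenAI completion parameters onto `generate()` keyword arguments. A sketch under the assumption of transformers' standard generation parameter names (`max_new_tokens`, `do_sample`, `num_return_sequences`); the function itself is hypothetical:

```python
def openai_to_generate_kwargs(request: dict) -> dict:
    """Map an OpenAI-style completion request body onto keyword
    arguments for a transformers model.generate(...) call."""
    mapping = {
        'max_tokens': 'max_new_tokens',
        'temperature': 'temperature',
        'top_p': 'top_p',
        'n': 'num_return_sequences',
    }
    kwargs = {new: request[old] for old, new in mapping.items() if old in request}
    # OpenAI treats temperature == 0 as greedy decoding.
    if kwargs.get('temperature') == 0:
        kwargs['do_sample'] = False
        kwargs.pop('temperature')
    else:
        kwargs['do_sample'] = True
    return kwargs
```

A small HTTP server wrapping this translation plus tokenize/generate/detokenize would let loom talk to any transformers model without client changes.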

3.5 and gpt4 fail

Thank you for the practical UI, I wonder why there aren't many like this around already (paid and free, local and remote models). Probably we're way ahead of the crowd or something. It's a mystery for me lol

The issue: it seems that I cannot use gpt4 and gpt-3.5 (+turbo) with the standard settings. What works is text-davinci-003. The error I get if I choose gpt4 or 3.5 is the following:

WARNING:root:Failed with exception: Invalid URL (POST /v1/engines/gpt-3.5-turbo/chat/completions), Retrying in 1 seconds...
WARNING:root:Failed with exception: Invalid URL (POST /v1/engines/gpt-3.5-turbo/chat/completions), Retrying in 2 seconds...
WARNING:root:Failed with exception: Invalid URL (POST /v1/engines/gpt-3.5-turbo/chat/completions), Retrying in 4 seconds...
cannot unpack non-iterable NoneType object
ERROR cannot unpack non-iterable NoneType object. Deleting failures

I tried finding the issue in the code, but could not see it (I'm no expert in their API). If you look here: https://platform.openai.com/docs/api-reference/chat/create it seems that the URL is not the same anymore. They use https://api.openai.com/v1/chat/completions

Thanks for looking into this.
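The fix would amount to routing chat models to the newer endpoint instead of the legacy per-engine URL. A sketch of such routing (the helper and its model-prefix check are illustrative, not loom's actual code):

```python
def completion_url(model: str, api_base: str = 'https://api.openai.com') -> str:
    """Pick the endpoint for a given model: chat models use
    /v1/chat/completions; older completion models use /v1/completions."""
    if model.startswith(('gpt-3.5', 'gpt-4')):
        return f'{api_base}/v1/chat/completions'
    return f'{api_base}/v1/completions'
```

Note that the chat endpoint also expects a `messages` list rather than a `prompt` string, so the request body needs translating as well, not just the URL.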

Edit mode: certain shortcuts remain active, which disrupts editing

In edit mode, at least the p shortcut remains active, which toggles the minimap side-pane and deletes whatever you were editing. I have noticed other characters also being treated as shortcuts while editing: b, spacebar... Some only seem to be treated as a shortcut sometimes (within edit mode).

This makes it near impossible to write anything. Perhaps I am misinterpreting how modes work.

Duplication of a node does not copy text

Right-click on a node and select Duplicate.
Observed: an empty node is created as a sibling.
Expected: a node with the exact same text is created as a sibling.

Weird Resolution Problem

Hi, just trying out now your software as it looks very promising!
But as soon as I open the Jar file, the application seems to be having some issues related to the resolution/scale of the UI. After some time I figured out that the issue is related to the Windows 11 scale setting: when I put it to 125% or higher, it messes up the UI of the app.
Like so (the mouse is where the red square is, pointing to the Close button on the Settings window):

visualize mode missing / broken

Hi, thanks for sharing this!

I've run into a little issue where visualize mode is missing from my bottom bar


I've tried the shortcut, as well as View -> Toggle Visualize Mode, but neither of these worked

TypeError: 'str' object does not support item assignment

This is epic, can't wait to enter the LoomSpace!

I'm experiencing an error where any keyboard input throws a str item assignment error - I suspect this is related to the string being immutable and needing to be converted into a list.

Can you help me do a workaround for this?

Current Environment:

  • M1 MacOS AArch64
  • Python 3.8.11 (pyenv)
  • Had to remove Tokenizers and install manually, as that version doesn't compile with Rust on Arm64 (I'll do a PR for this separately for other Mac users)

Full error:

Traceback (most recent call last):
  File "/opt/homebrew/anaconda3/lib/python3.9/tkinter/__init__.py", line 1892, in __call__
    return self.func(*args)
  File "/Users/chiron/workspace/10Weaver/loom/components/modules.py", line 1206, in submit
    self.callbacks["Submit"]["callback"](text=modified_text, auto_response=self.settings().get("auto_response", True))
  File "/Users/chiron/workspace/10Weaver/loom/controller.py", line 49, in <lambda>
    return lambda event=None, *args, _f=f, **kwargs: _f(*args, **kwargs)
  File "/Users/chiron/workspace/10Weaver/loom/util/util.py", line 355, in f
    return func(*args, **kwargs)
  File "/Users/chiron/workspace/10Weaver/loom/controller.py", line 2004, in submit
    new_child = self.create_child(toggle_edit=False)
  File "/Users/chiron/workspace/10Weaver/loom/util/util.py", line 355, in f
    return func(*args, **kwargs)
  File "/Users/chiron/workspace/10Weaver/loom/controller.py", line 563, in create_child
    new_child = self.state.create_child(parent=node)
  File "/Users/chiron/workspace/10Weaver/loom/model.py", line 815, in create_child
    self.rebuild_tree()
  File "/Users/chiron/workspace/10Weaver/loom/model.py", line 36, in wrapper
    output = func(self, *args, **kwargs)
  File "/Users/chiron/workspace/10Weaver/loom/model.py", line 477, in rebuild_tree
    self.tree_node_dict = {d["id"]: d for d in flatten_tree(self.tree_raw_data["root"])}
  File "/Users/chiron/workspace/10Weaver/loom/util/util_tree.py", line 314, in flatten_tree
    child["parent_id"] = d["id"]
TypeError: 'str' object does not support item assignment
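The traceback points at `flatten_tree` assigning `parent_id` into a child that is a plain string rather than a node dict. A defensive sketch of the idea (the node shape and id scheme here are assumptions for illustration, not loom's actual implementation):

```python
def flatten_tree(d):
    """Flatten a tree of node dicts into a list, tolerating children
    that were saved as bare strings instead of dicts."""
    nodes = [d]
    for i, child in enumerate(d.get('children', [])):
        if isinstance(child, str):
            # Promote a bare-string child to a minimal node dict so the
            # parent_id assignment below cannot fail on a str.
            child = {'id': f'{d["id"]}-{i}', 'text': child, 'children': []}
            d['children'][i] = child
        child['parent_id'] = d['id']
        nodes.extend(flatten_tree(child))
    return nodes
```

Alternatively, the bug may be upstream, wherever a string gets appended to a node's `children` list; the coercion above just keeps tree rebuilding from crashing the UI.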
