Important
This project is still under heavy development and some functions may not work well yet. Please don't hesitate to open new issues.
FlowGen is a tool built for AutoGen, a great agent framework from Microsoft Research.
AutoGen streamlines the process of creating multi-agent applications with its clear and user-friendly approach. FlowGen takes this accessibility a step further by offering visual tools that simplify the building and management of agent workflows with AutoGen.
To quickly explore what FlowGen has to offer, simply visit https://flowgen.app.
Each new commit to the main branch triggers an automatic deployment on Railway.app, ensuring you experience the latest version of the service.
Warning
Changes to the Pocketbase project will trigger a rebuild and redeployment of all instances, which will wipe all the data.
Please do not use it for production purposes, and make sure you export your flows in time.
The project contains a frontend (built with Next.js) and a backend service (built in Python), both fully dockerized.
The easiest way to run locally is with docker-compose:

```shell
docker-compose up -d
```
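For reference, a compose file along these lines would wire the three services together. This is a hypothetical sketch based on the Dockerfile locations and ports documented below, not the project's actual `docker-compose.yml`:

```yaml
# Hypothetical sketch; see the repository's docker-compose.yml for the real definition.
version: "3"
services:
  frontend:
    build: ./frontend
    ports:
      - "2855:2855"
  backend:
    build: ./backend
    ports:
      - "5004:5004"
  pocketbase:
    build: ./pocketbase
    ports:
      - "7676:7676"
```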
You can also build and run the frontend and backend services separately with docker:
```shell
docker build -t flowgen-svc ./backend
docker run -d -p 5004:5004 flowgen-svc

docker build -t flowgen-ui ./frontend
docker run -d -p 2855:2855 flowgen-ui

docker build -t flowgen-db ./pocketbase
docker run -d -p 7676:7676 flowgen-db
```
(The default port number 2855 is the address of our first office.)
Railway.app supports deploying applications in Docker containers. Clicking the "Deploy on Railway" button streamlines the setup and deployment of your application on the Railway platform:
- Click the "Deploy on Railway" button to start the process on Railway.app.
- Log in to Railway and set the `PORT` environment variable for each service: `2855` (frontend), `5004` (backend), and `7676` (PocketBase) respectively.
- Confirm the settings and deploy.
- After deployment, visit the provided URL to access your deployed application.
If you're interested in contributing to the development of this project or wish to run it from the source code, you have the option to run the frontend and backend services independently. Here's how you can do that:
- Frontend Service:
  - Navigate to the frontend service directory.
  - Rename `.env.sample` to `.env.local` and set the values of the variables correctly.
  - Install the necessary dependencies using the appropriate package manager command (e.g., `pnpm install` or `yarn`).
  - Run the frontend service using the start-up script provided (e.g., `pnpm dev` or `yarn dev`).
- Backend Service:
  - Switch to the backend service directory: `cd backend`.
  - Create a virtual environment: `python3 -m venv venv`.
  - Activate the virtual environment: `source venv/bin/activate`.
  - Install all required dependencies: `pip install -r requirements.txt`.
  - Launch the backend service: `uvicorn app.main:app --reload --port 5004`.
`REPLICATE_API_TOKEN` is needed for the LLaVA agent. If you need to use this agent, make sure to include this token in your environment variables, such as the Environment Variables on Railway.app.
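As a sketch, any script that uses the LLaVA agent can fail fast with a clear message when the token is missing. `require_replicate_token` is a hypothetical helper for illustration, not part of the FlowGen codebase:

```python
import os


def require_replicate_token() -> str:
    """Return REPLICATE_API_TOKEN from the environment, or raise a clear error."""
    token = os.environ.get("REPLICATE_API_TOKEN")
    if not token:
        raise RuntimeError(
            "REPLICATE_API_TOKEN is not set; the LLaVA agent needs it. "
            "Set it locally or in Railway's Environment Variables."
        )
    return token
```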
- PocketBase:
  - Switch to the PocketBase directory: `cd pocketbase`.
  - Build the container: `docker build -t flowgen-db .`
  - Run the container: `docker run -it --rm -p 7676:7676 flowgen-db`
Once you've started both the frontend and backend services by following the steps previously outlined, you can access the application by opening your web browser and navigating to:
- Frontend: http://localhost:2855
- Backend: http://localhost:5004 (A Swagger API doc should be provided here; maybe later.)
- PocketBase: http://localhost:7676
If your services started successfully and are running on the expected ports, you should see the user interface or receive responses from the backend at these URLs.
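To verify that all three services are up, a quick probe like the one below can help. The port numbers come from the documentation above; the helper itself is just an illustrative sketch:

```python
import urllib.error
import urllib.request

# Ports documented above: frontend 2855, backend 5004, PocketBase 7676.
SERVICES = {
    "frontend": "http://localhost:2855",
    "backend": "http://localhost:5004",
    "pocketbase": "http://localhost:7676",
}


def check_service(url: str, timeout: float = 3.0) -> bool:
    """Return True if the URL answers any HTTP response (even an error status means it's up)."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except urllib.error.HTTPError:
        return True  # server responded, just with an error status
    except (urllib.error.URLError, OSError):
        return False  # connection refused or timed out: service is down


if __name__ == "__main__":
    for name, url in SERVICES.items():
        status = "up" if check_service(url) else "down"
        print(f"{name}: {status} ({url})")
```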
Please refer to the original notebooks with the same names in AutoGen.
🔲 Planned ✅ Completed 🐞 With Issues ⭐ Out of Scope
Example | Status | Comments |
---|---|---|
auto_feedback_from_code_execution | ✅ | Feedback from Code Execution |
auto_build | 🔲 | |
chess | 🔲 | |
compression | 🔲 | |
dalle_and_gpt4v | 🔲 | This requires importing a custom Agent class |
function_call_async | ✅ | |
function_call | ✅ | |
graph_modelling_language | ⭐ | This is out of project scope. Open an issue if necessary |
group_chat_RAG | ❌ | This notebook does not work |
groupchat_research | ✅ | |
groupchat_vis | ✅ | |
groupchat | ✅ | |
hierarchy_flow_using_select_speaker | 🔲 | |
human_feedback | 🔲 | |
inception_function | 🔲 | |
langchain | ⭐ | No plan to support |
lmm_gpt-4v | ✅ | |
lmm_llava | ✅ | Depends on Replicate |
MathChat | 🔲 | |
oai_assistant_function_call | ✅ | |
oai_assistant_groupchat | 🐞 | Very slow and does not work well; sometimes it does not return. |
oai_assistant_retrieval | 🔲 | |
oai_assistant_twoagents_basic | ✅ | |
oai_code_interpreter | ✅ | |
planning | 🔲 | |
qdrant_RetrieveChat | 🔲 | |
RetrieveChat | 🔲 | |
stream | 🔲 | |
teachability | 🔲 | |
teaching | 🔲 | |
two_users | ✅ | The response can be very long, so set a large max_tokens. |
video_transcript_translate_with_whisper | ✅ | Requires `brew install ffmpeg` and `export IMAGEIO_FFMPEG_EXE` |
web_info | ✅ | |
cq_math | ⭐ | This example is largely irrelevant to AutoGen; why not just use the OpenAI API? |
Async_human_input | 🔲 | |
oai_chatgpt_gpt4 | ⭐ | Fine-tuning, out of project scope |
oai_client_cost | ⭐ | This is a utility tool, not related to flows. |
oai_completion | ⭐ | Fine-tuning, out of project scope |
oai_openai_utils | 🔲 | |
This project welcomes contributions and suggestions. Please read our Contributing Guide first.
If you are new to GitHub, here is a detailed help source on getting involved with development on GitHub.
The project is licensed under Apache 2.0 with additional terms and conditions.