
Morpheus is an open-source project that offers a creative and innovative platform for generating stunning artworks using image editing and stable diffusion models.

Home Page: https://morpheus.monadical.io/

License: GNU General Public License v3.0


Morpheus
A WebApp to generate artwork with stable diffusion models.

▶️ Quickstart | Demo | GitHub | Documentation | Info & Motivation | Similar Projects | Roadmap


Key Features

  • Freedom & Open Source: Create your personal server and maintain complete control over your data.
  • Robust & Modular Design: Our design supports and loads multiple models with ease.
  • Versatility: We offer Image-to-Image support, Controlnet, and Pix2pix among other use cases.
  • Local or Cloud Setup: Choose the setup that suits your needs best.
  • Infrastructure as Code: We've open-sourced our infrastructure code to facilitate scalability.





Quickstart

🖥  Supported OSs: Linux, macOS (M1-M2)   👾

Prerequisites:

Before starting, ensure you have the following:

  1. Firebase: Morpheus uses Firebase to manage authentication, analytics, and the collaborative painting system. Follow the Firebase Integration Guide to enable the required Firebase services.

  2. Docker & Docker Compose: Both Docker and Docker Compose should be installed on your system. If not installed yet, you can download Docker from here and Docker Compose from here. Skip this step if you're using a Kubernetes setup.

  3. Hardware: The hardware requirements for this setup will vary depending on the specific application that you are developing. However, it is generally recommended to have a machine with at least 16 GB of RAM and a GPU with 16 GB of VRAM. If you don't have a GPU, you can use a cloud provider like AWS, GCP, or Azure. Skip this step if you're using a Kubernetes setup.

Setup Steps

Step 1: Verify that Docker is installed and running

Ensure that Docker and Docker Compose are installed on your system. If not, install them using the links above. Linux users might also need to add their user to the docker group; see the Docker documentation for details.
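For convenience, this verification can be scripted. A minimal sketch (the require_cmd helper name is ours, not part of Morpheus):

```shell
# Minimal pre-flight check for the tools this guide uses.
# `require_cmd` is a hypothetical helper, not part of the Morpheus tooling.
require_cmd() {
  command -v "$1" >/dev/null 2>&1 || { echo "missing: $1" >&2; return 1; }
}

for tool in git docker; do
  require_cmd "$tool" || echo "install $tool before continuing"
done
```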

Step 2: Clone the repository

Clone the repository to your local system using the command below:

git clone [email protected]:Monadical-SAS/Morpheus.git

Step 3: Navigate to the Morpheus directory

Change your current directory to the cloned repository:

cd Morpheus

Step 4: Create a secrets file

Create the secrets file by copying the distributed .env files:

cp -p morpheus-server/secrets.env.dist morpheus-server/secrets.env
cp -p morpheus-client/env.local.dist morpheus-client/.env.local
cp -p morpheus-client/env.local.dist morpheus-admin/.env.local

Step 5: Edit the morpheus-server/secrets.env file with your values

This step involves editing the secrets.env file to provide the application with the necessary secrets to connect to the database, Firebase, AWS, and other services.

Parameters:

  • POSTGRES_USER: The username for the PostgreSQL database.
  • POSTGRES_DB: The name of the PostgreSQL database.
  • POSTGRES_PASSWORD: The password for the PostgreSQL database.
  • POSTGRES_HOST: The hostname or IP address of the PostgreSQL database server.
  • POSTGRES_PORT: The port number of the PostgreSQL database server.
  • PGADMIN_DEFAULT_EMAIL: The email address for the PGAdmin default user.
  • PGADMIN_DEFAULT_PASSWORD: The password for the PGAdmin default user.
  • FIREBASE_PROJECT_ID: The ID of your Firebase project.
  • FIREBASE_PRIVATE_KEY: The private key for your Firebase project.
  • FIREBASE_CLIENT_EMAIL: The client email for your Firebase project.
  • FIREBASE_WEB_API_KEY: The web API key for your Firebase project.
  • AWS_ACCESS_KEY_ID: The access key ID for your AWS account.
  • AWS_SECRET_ACCESS_KEY: The secret access key for your AWS account.
  • IMAGES_BUCKET: The name of the S3 bucket where images are stored.
  • MODELS_BUCKET: The name of the S3 bucket where models are stored.
  • IMAGES_TEMP_BUCKET: The name of the S3 bucket where temporary images are stored.
  • ENVIRONMENT: The environment where the application is running. This can be local, staging, or production.

Instructions:

  1. Open the secrets.env file in your preferred text editor.
  2. Replace all content with the following:
POSTGRES_USER=morpheus
POSTGRES_DB=morpheus
POSTGRES_PASSWORD=password
POSTGRES_HOST=postgres
POSTGRES_PORT=5432

# PGAdmin credentials
[email protected]
PGADMIN_DEFAULT_PASSWORD=admin

FIREBASE_PROJECT_ID="XXXXXXX="
FIREBASE_PRIVATE_KEY="XXXXXXX="
FIREBASE_CLIENT_EMAIL="XXXXXXX="
FIREBASE_WEB_API_KEY="XXXXXXX="

# AWS credentials
AWS_ACCESS_KEY_ID=XXXXXXX=
AWS_SECRET_ACCESS_KEY=XXXXXXX=

# S3 BUCKETS ID
#------------------------
# Bucket name where images are stored
IMAGES_BUCKET="XXXXXXX="
# Bucket name where models are stored
MODELS_BUCKET="XXXXXXX="
# Bucket/Folder where temporal images are stored
IMAGES_TEMP_BUCKET="XXXXXXX="

# App config
ENVIRONMENT="local"
  3. Update the values with your own.
  4. Save the file.

Once you have updated the secrets.env file, the application will be able to connect to the database, Firebase, AWS, and other services using the secrets that you provided.
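Before moving on, a quick sanity check can catch missing keys early. A minimal sketch (check_keys is a hypothetical helper, not part of the Morpheus tooling; the key list mirrors the parameters above):

```shell
# Hypothetical helper: verify that required keys exist in an env file.
# Usage: check_keys FILE KEY...
check_keys() {
  file=$1; shift
  for key in "$@"; do
    grep -q "^${key}=" "$file" 2>/dev/null || echo "missing key: $key"
  done
}

check_keys morpheus-server/secrets.env \
  POSTGRES_USER POSTGRES_DB POSTGRES_PASSWORD POSTGRES_HOST POSTGRES_PORT \
  PGADMIN_DEFAULT_EMAIL PGADMIN_DEFAULT_PASSWORD \
  FIREBASE_PROJECT_ID FIREBASE_PRIVATE_KEY FIREBASE_CLIENT_EMAIL FIREBASE_WEB_API_KEY \
  AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY \
  IMAGES_BUCKET MODELS_BUCKET IMAGES_TEMP_BUCKET ENVIRONMENT
```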

Step 6. Edit the morpheus-client/.env.local and morpheus-admin/.env.local files with your values

This step involves editing the .env.local files in the morpheus-client and morpheus-admin directories to provide the client application with the necessary values to connect to the backend API, Firebase, and other services.

Parameters:

  • NEXT_PUBLIC_API_URL: The URL of the backend API.
  • NEXT_PUBLIC_FIREBASE_CONFIG: The Firebase configuration for your Firebase project. This can be found in the Firebase console.
  • NEXT_TELEMETRY_DISABLED: A boolean value indicating whether to disable telemetry.
  • NEXT_PUBLIC_BACKEND_V2_GET_URL: The URL of the Excalidraw backend v2 GET endpoint.
  • NEXT_PUBLIC_BACKEND_V2_POST_URL: The URL of the Excalidraw backend v2 POST endpoint.
  • NEXT_PUBLIC_LIBRARY_URL: The URL of the Excalidraw libraries page.
  • NEXT_PUBLIC_LIBRARY_BACKEND: The URL of the Excalidraw library backend.
  • NEXT_PUBLIC_WS_SERVER_URL: The URL of the WebSocket server.
  • NEXT_PUBLIC_GOOGLE_ANALYTICS_ID: The Google Analytics ID for the client application.
  • FAST_REFRESH: A boolean value indicating whether to enable Fast Refresh.

Instructions:

  1. Open the .env.local file in your preferred text editor.
  2. Replace all content with the following:
# API CONFIG
NEXT_PUBLIC_API_URL=http://localhost:8001

# Firebase configuration
NEXT_PUBLIC_FIREBASE_CONFIG='{"apiKey":"XXXXXXXXXXXX","authDomain":"xxxxxxxxxx.firebaseapp.com","projectId":"xxxxxxx","storageBucket":"xxxxxx.appspot.com","messagingSenderId":"123456789","appId":"1:123456789:web:xxxxxx","measurementId":"G-XXXXXXXXXX"}'

# NEXT CONFIG
NEXT_TELEMETRY_DISABLED=1

# EXCALIDRAW CONFIG
NEXT_PUBLIC_BACKEND_V2_GET_URL=https://json-dev.excalidraw.com/api/v2/
NEXT_PUBLIC_BACKEND_V2_POST_URL=https://json-dev.excalidraw.com/api/v2/post/
NEXT_PUBLIC_LIBRARY_URL=https://libraries.excalidraw.com
NEXT_PUBLIC_LIBRARY_BACKEND=https://us-central1-excalidraw-room-persistence.cloudfunctions.net/libraries
NEXT_PUBLIC_WS_SERVER_URL=ws://localhost:3002
NEXT_PUBLIC_GOOGLE_ANALYTICS_ID=G-XXXXXXXXXX
FAST_REFRESH=false
  3. Update the values with your own.
  4. Save the file.

Once you have updated the .env.local file, the client application will be able to connect to the backend API, Firebase, and other services using the values that you provided.

Step 7. Init the database

To initialize the database, apply the migrations by executing the following commands:

docker compose run --rm datalib alembic upgrade head

Step 8. Run the Morpheus project

To run the Morpheus project, execute the following command:

docker compose up

The Morpheus project will be running on your local machine at localhost:3000 (client), localhost:3001 (admin), and localhost:8001 (API). Morpheus uses some Stable Diffusion models by default; if you want to change this setup, you can do so from the administration panel.
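A quick way to confirm all three services respond (a hedged sketch; the ports are the defaults above, and smoke is just an illustrative name):

```shell
# Hypothetical smoke test against the default local ports listed above.
smoke() {
  for url in http://localhost:3000 http://localhost:3001 http://localhost:8001; do
    if curl -fsS -o /dev/null --max-time 5 "$url" 2>/dev/null; then
      echo "up:   $url"
    else
      echo "down: $url"
    fi
  done
}

smoke
```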

Development

Running the backend tests

  • To run all the tests: docker compose run --rm api pytest.
  • To run a specific test: docker compose run --rm api pytest tests/«test_module».py.
  • To run a specific test function: docker compose run --rm api pytest tests/«test_module».py::«test_function».

Running the migrations

To use the morpheus-data image to run the migrations, you need to create a secrets.env file in the morpheus-server directory. For more information, read the morpheus-data README.

  • To create a migration:
docker compose run --rm datalib alembic revision --autogenerate -m "Initial migration"
  • To migrate or update to the head:
docker compose run --rm datalib alembic upgrade head

PG admin

pgAdmin is available at localhost:8002. The user and password must be set in the secrets.env file.

example values

[email protected]
PGADMIN_DEFAULT_PASSWORD=password

Implement changes in morpheus-data

morpheus-data is a Python library that provides a unified interface for managing all ORM-related operations in Morpheus. Other Morpheus microservices can import and use it.

To run Morpheus locally using morpheus-data as a library:

# Building in separate steps
#---------------------------------------------
# build morpheus-data wheel
docker compose build datalib

# build morpheus-server api
docker compose build api

# Building everything together
#---------------------------------------------
docker compose build

# Run
#---------------------------------------------
docker compose up

Note: You need to rebuild morpheus-data and the morpheus-server API service (or any other microservice that uses it) every time you make a change to morpheus-data, because the wheel file must be rebuilt and reinstalled in each service that depends on it. For more information, see the morpheus-data README.

Adding a new dependency to the backend

  1. Add the new dependency directly to the respective requirements.txt file.
  2. Update the docker image: docker compose build api.
  3. Run the image: docker compose up.

Note: This project uses requirements.lint.txt only for caching in CI workflow linting job.

Adding a new dependency to the frontend

# Add a new dependency to package.json
yarn add <dependency>

# Update the docker image
docker compose build client

# Run the image
docker compose up

Add new diffusion models

There are two ways to add new diffusion models to Morpheus:

  • Using the admin panel
  • Using the command-line interface (CLI)

Using the admin panel

To use the admin panel, go to localhost:3001. For more details on how to use the admin panel, see here.

Using the CLI

To use the CLI, you must first build the Morpheus server:

docker compose --profile manage build

# or

docker compose build model-script

Once the model-script is built, you can use the morpheus-server/scripts/models/cli.py script to add, list, and delete models. You can also use the morpheus-server/scripts/models/models-info.yaml file to specify information about the models.

# To show the help
docker compose run --rm model-script --help

# To add/update a new model
docker compose run --rm model-script upload <server> <target>

# To list content of S3 bucket
docker compose run --rm model-script s3 list

# To list content of db from a specific api server
docker compose run --rm model-script db list <server> --target <target>

# To add models to the s3 bucket
docker compose run --rm model-script s3 register <target>

# To add model to the db of a specific api server
docker compose run --rm model-script db register <server> <target>

# To update model in the db of a specific api server
docker compose run --rm model-script db update <server> <target>

# To delete a model from s3, db and local
docker compose run --rm model-script delete <model-source> --api-server <server> --target <target>

# To delete a model from s3
docker compose run --rm model-script s3 delete <model-source>

# To delete a model from db of a specific api server
docker compose run --rm model-script db delete <model-source> <server> --target <target>


Adding a new feature

If you want to add a new feature, follow these steps:

  • Choose an issue from the issues list or create a new one
  • Assign yourself to the issue
  • Create a new branch from the main branch
  • Make your changes
  • Write tests for your changes
  • Make sure to run the QA tools and the tests
  • Push your changes
  • Create a pull request
  • If the pull request includes frontend changes, you should also upload some screenshots of the changes
  • Request a review from a team member

Important

Before pushing your changes, make sure to run the QA tools and the tests:

# Run the QA tools
docker compose run --rm api flake8 --max-line-length 120 --exclude app/migrations/ .
docker compose run --rm api black --line-length 120 --exclude app/migrations/ .
  
# Run the tests
docker compose run --rm api pytest

If all the checks pass, you can push your changes.

Production Setup

Configuring k8s cluster

Some templates have been included to help create the Kubernetes cluster and the necessary infrastructure; the configuration files live in the ./infra directory.

To configure Terraform, please follow these steps:

  • Create a new SSL certificate using the ACM (AWS Certificate Manager) service in AWS. This certificate will be used to secure the Morpheus web application. Save the certificate's ARN (Amazon Resource Name), a unique identifier that you will need in the next step.
  • Create a DB secret using the "Secrets Manager" service in the AWS console. The secret should be an "Other type of secret". The value must be in this format: {"username":"username","password":"xxxxxxxxxxxxxxxxx"}. Save the secret name for the next steps.
  • Create a terraform.tfvars file in the ./infra/envs/staging/ folder with the information obtained from your AWS account. Use the ARN for the arn_ssl_certificate_cf_distribution field and the DB secret name for db_password_secret_manager_name. Additionally, update cname_frontend with a domain that you manage. Here is an example:
AWS_ACCESS_KEY = ""
AWS_SECRET_KEY = ""
ACCOUNT_ID = "xxxxxxxxxxx"
db_password_secret_manager_name = "morpheus_db_password"
arn_ssl_certificate_cf_distribution = "arn:aws:acm:us-east-1:xxxxxxxxxxxxx:certificate/xxxxxxxx-xxxx-xxxx-xxxxx-xxxxxxxxxxxx"
cname_frontend = "morpheus.web.site"
vpc_cidr = "172.21.0.0/16"
vpc_public_subnets = ["172.21.0.10/24", "172.21.0.11/24"]
vpc_private_subnets = ["172.21.0.12/24", "172.21.0.13/24"]
db_allocated_storage = 20
self_managed_gpu_nodes_device_size = 30
region = "us-east-1"

where:

  • AWS_ACCESS_KEY and AWS_SECRET_KEY are your AWS credentials. You can obtain these credentials from the AWS Management Console.
  • ACCOUNT_ID is your AWS account ID. You can find this information in the AWS Management Console.
  • db_password_secret_manager_name is the name of the DB secret that you created in step 2.
  • arn_ssl_certificate_cf_distribution is the ARN of the SSL certificate that you created in step 1.
  • cname_frontend is the domain name that you want to use for the Morpheus web application.
  • vpc_cidr is the CIDR block for the VPC that Terraform will create.
  • vpc_public_subnets and vpc_private_subnets are the CIDR blocks for the public and private subnets that Terraform will create.
  • db_allocated_storage is the size of the EBS volume that Terraform will create for the Morpheus database.
  • self_managed_gpu_nodes_device_size is the size of the GPU for the self-managed GPU nodes that Terraform will create.
  • region is the AWS region where you want to create the Kubernetes cluster and the necessary infrastructure.

To manage Terraform backends, follow these steps:

  1. Create an S3 bucket to manage the Terraform backends.
  2. Create a backend.conf file in ./infra/envs/staging/ based on backend.conf.dist. Make sure to update the route if you prefer to use a different one.
bucket = "morpheus-infra-backend"
key = "env/staging/infra/state.tfstate"
region = "us-east-1"
  3. Create the cluster:
  • Navigate to the ./infra/envs/staging directory. This is where the Terraform configuration files for the staging environment are located.
cd ./infra/envs/staging
  • Initialize Terraform. This command will download the necessary modules and providers.
terraform init -backend-config=backend.conf
  • Apply the Terraform configuration. This command will create the Kubernetes cluster and the necessary infrastructure. You will be prompted to confirm the changes before they are applied.
terraform apply
  • Save the Terraform outputs to a separate file.
terraform output > outputs.tfvars

The Terraform outputs are the values of the variables that Terraform created when it applied the configuration. These values can be useful for configuring other applications, such as kubectl. Saving the outputs to a file will make it easy to access them in the next step, when you create the kubectl configuration file.

  • Create a kubectl configuration file to access the cluster. Use the Terraform outputs to complete the arguments for the region and cluster name.
aws eks --region us-east-1 update-kubeconfig --name cluster-name-from-outputs
  • The kubectl configuration file is used to tell kubectl how to connect to the Kubernetes cluster.
  • The aws eks update-kubeconfig command will create a kubectl configuration file for the specified cluster in the specified region.
  • You will need to replace cluster-name-from-outputs with the name of the Kubernetes cluster that was created by Terraform. Once you have completed these steps, you will be able to use kubectl to manage the Kubernetes cluster.

Installing helm charts - Nginx ingress

  1. Create a backend.conf file in the ./infra/charts/staging/ folder based on the backend.conf.dist file provided.
bucket = "morpheus-infra-backend"
key = "env/staging/charts/state.tfstate"
region = "us-east-1"
  2. Create a terraform.tfvars file in the ./infra/envs/staging/ folder that includes the path to your Kubernetes configuration.
kubeconfig_path = "/home/user/.kube/config"
  3. To apply the Ingress Helm chart (this step should be performed after creating the cluster):
cd ./infra/test/eks-charts
terraform init -backend-config=backend.conf
# write yes to apply changes
terraform apply

Creating secrets

Create a file called morpheus-secrets.yaml based on ./infra/tools/k8s/morpheus-secrets.yaml.example. Make sure to update the values with your secrets encoded in base64.

# To encode a value, for example a database password
echo -n "dbpassword" | base64 -w 0
apiVersion: v1
kind: Secret
metadata:
  name: morpheus-secret
type: Opaque
data:
  POSTGRES_USER: XXXXXXX=
  POSTGRES_DB: XXXXXXX=
  POSTGRES_PASSWORD: XXXXXX=
  POSTGRES_HOST: XXXXXXX=
  FIREBASE_PROJECT_ID: XXXXXX=
  FIREBASE_PRIVATE_KEY: XXXXXXX=
  FIREBASE_CLIENT_EMAIL: XXXXXXX=
  AWS_ACCESS_KEY_ID: XXXXXXX=
  AWS_SECRET_ACCESS_KEY: XXXXXXX=
  SENTRY_DSN: XXXXXXX=
  FLOWER_ADMIN_STRING: XXXXXXX
  IMAGES_BUCKET: XXXXXXX
  IMAGES_TEMP_BUCKET: XXXXXXXXX
  MODELS_BUCKET: XXXXXXX

where:

  • apiVersion: This field specifies the version of the Kubernetes API that the secret is compatible with. In this case, the secret is compatible with Kubernetes API version 1.

  • kind: This field specifies the type of object that the manifest represents. In this case, the manifest represents a secret object.

  • metadata: This field contains information about the secret, such as its name and labels. The name field is required and must be unique within the namespace where the secret is created.

  • type: This field specifies the type of secret. In this case, the secret is an Opaque secret, the default type used to store arbitrary user-defined data.

  • data: This field contains the secret data, stored as a map of key-value pairs. The keys must be unique within the secret, and each value must be base64-encoded. The following are the secrets that are stored in the morpheus-secret:

  • POSTGRES_USER: The username for the PostgreSQL database.

  • POSTGRES_DB: The name of the PostgreSQL database.

  • POSTGRES_PASSWORD: The password for the PostgreSQL database.

  • POSTGRES_HOST: The hostname or IP address of the PostgreSQL database server.

  • FIREBASE_PROJECT_ID: The ID of the Firebase project.

  • FIREBASE_PRIVATE_KEY: The private key for the Firebase project.

  • FIREBASE_CLIENT_EMAIL: The client email address for the Firebase project.

  • AWS_ACCESS_KEY_ID: The AWS access key ID.

  • AWS_SECRET_ACCESS_KEY: The AWS secret access key.

  • SENTRY_DSN: The Sentry DSN.

  • FLOWER_ADMIN_STRING: The Flower admin string.

  • IMAGES_BUCKET: The name of the S3 bucket where the Morpheus images are stored.

  • IMAGES_TEMP_BUCKET: The name of the S3 bucket where the Morpheus temporary images are stored.

  • MODELS_BUCKET: The name of the S3 bucket where the Morpheus models are stored.
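The base64 encoding shown earlier can be wrapped in a small helper so every value is encoded the same way (encode is a hypothetical name; printf avoids encoding a trailing newline, and tr -d '\n' strips line wrapping):

```shell
# Hypothetical helper: base64-encode a value for morpheus-secrets.yaml.
encode() { printf '%s' "$1" | base64 | tr -d '\n'; }

encode "dbpassword"   # → ZGJwYXNzd29yZA==
```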

Apply secrets:

kubectl apply -f morpheus-secrets.yaml

To enable the pulling and pushing of images to your registry, create a secret called regcred for Docker credentials.

# https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
kubectl create secret docker-registry regcred --docker-server=https://index.docker.io/v1/ --docker-username=dockeruser --docker-password=xxxxxxxxxxxxxxxxx --docker-email=xxxxxxxxxxxxxxxxxxx

Install nvidia plugin for k8s

Apply the nvidia plugin.

kubectl create -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/v0.13.0/nvidia-device-plugin.yml

CI/CD configuration

The platform currently uses GitHub Actions for deployment. To integrate this with your custom project, you should set the following secrets in GitHub:

AWS access variables:

  • AWS_ACCESS_KEY_ID: Your AWS access key ID.
  • AWS_SECRET_ACCESS_KEY: Your AWS secret access key.
  • AWS_CLUSTER_NAME: The name of your EKS cluster.
  • AWS_REGION: The AWS region where your EKS cluster is located.

Cloudflare tokens to clear the cache:

  • CLOUDFLARE_API_TOKEN: Your Cloudflare API token.
  • CLOUDFLARE_ZONE_ID: The ID of your Cloudflare zone.

Docker Hub tokens to push and pull images in the deploy process:

  • DOCKER_HUB_TOKEN: Your Docker Hub token.
  • DOCKER_HUB_USER: Your Docker Hub username.

Firebase configuration:

  • FIREBASE_CLIENT_EMAIL: Your Firebase client email address.
  • FIREBASE_PRIVATE_KEY: Your Firebase private key.
  • FIREBASE_PROJECT_ID: Your Firebase project ID.

Other infra configuration:

  • FRONTEND_DOMAIN: Platform domain (e.g., morpheus.com)

Sentry configuration:

  • SENTRY_AUTH_TOKEN: Your Sentry authentication token.
  • SENTRY_ENV: The Sentry environment name.
  • SENTRY_ORG: The Sentry organization name.
  • SENTRY_PROJECT: The Sentry project name.
  • SENTRY_URL: The Sentry URL.

Monorepo configuration:

  • CICD_REPO_PATH: Repo path in the GitHub Actions runner, usually the repo name, e.g. /home/runner/work/Morpheus/Morpheus.

How to collaborate

Forking the Repository

  • Go to the Morpheus GitHub repository.
  • Click on the "Fork" button in the top-right corner of the repository page.
  • This will create a copy of the repository under your GitHub account.

Cloning the Forked Repository

  • On your GitHub account, navigate to the forked repository.
  • Click on the "Code" button and copy the repository URL.
  • Clone the repository locally

Adding an Upstream Remote

Change to the repository directory using cd.

git remote add upstream https://github.com/Monadical-SAS/Morpheus.git

Creating a New Branch for your changes

Create a new branch from the main branch for your work, for example: git checkout -b my-feature.

Create a Pull request

  • Go to the original project's GitHub repository.
  • Create a new PR selecting your forked repository and the branch containing your changes.

Updating Your Fork with Upstream Changes:

  • Fetch the upstream repository changes using the git fetch command.
git fetch upstream
  • Switch to your local main branch using git checkout main and merge the upstream changes into your local main branch using git merge.
git merge upstream/main
  • Push the updated changes to your forked repository on GitHub using git push.
git push origin main

Commit style

To enhance collaboration, we have adopted the conventional commit specification to facilitate the creation of an "explicit commit history." This practice not only assists in achieving a clearer understanding of changes made but also streamlines the release process for Morpheus versions.

Some examples:

  • feat: Implement user authentication feature
  • fix: Resolve issue with data not saving correctly
  • chore: Update dependencies to latest versions
  • docs: Add documentation for API endpoints
  • refactor: Simplify code structure for improved readability
  • test: Add unit tests for new validation logic
  • style: Format code according to style guidelines
  • perf: Optimize database queries for faster performance

For additional information see: conventional commits
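The prefix convention above can be checked locally before committing. A minimal sketch (not an official Morpheus hook; the type list is taken from the examples above):

```shell
# Hypothetical commit-message check for the conventional-commit types listed above.
is_conventional() {
  printf '%s' "$1" | grep -Eq '^(feat|fix|chore|docs|refactor|test|style|perf)(\([a-z0-9-]+\))?: .+'
}

is_conventional "feat: Implement user authentication feature" && echo "ok"
```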

morpheus's People

Contributors: afreydev, asanchezyali, cspencer, curibe, jpmerc, juanarias8


morpheus's Issues

integrate DeepFloyd pipeline

DeepFloyd-IF is a pixel-based, text-to-image, triple-cascaded diffusion model that can generate pictures with a new state of the art in photorealism and language understanding. The result is a highly efficient model that outperforms current state-of-the-art models, achieving a zero-shot FID-30K score of 6.66 on the COCO dataset.

Integrate Google Analytics

  • add Firebase analytics as a context or hook
  • send a record each time a user clicks a button
  • send a record each time a user visits a page
  • send a record each time an image is generated
  • send a record every time a generation fails

example:

import {
  createContext,
  ReactNode,
  useContext,
  useEffect,
  useState,
} from "react";
import { useRouter } from "next/router";
import { analytics } from "../lib/firebaseClient";
import { logEvent } from "firebase/analytics";

export interface IAnalyticsContext {
  analytics: any;
  cookiesStatus: string;
  setCookiesStatus: (accept: string) => void;
  sendAnalyticsRecord: (type: string, message: any) => void;
}

const defaultState = {
  analytics: undefined,
  cookiesStatus: "",
  setCookiesStatus: () => console.log("setCookiesStatus"),
  sendAnalyticsRecord: () => console.log("sendAnalyticsRecord"),
};

const AnalyticsContext = createContext<IAnalyticsContext>(defaultState);

const FirebaseTrackingProvider = (props: { children: ReactNode }) => {
  const router = useRouter();
  const [cookiesStatus, setCookiesStatus] = useState("");

  useEffect(() => {
    const handleRouteChange = (url: string) => {
      // Skip logging until cookies are accepted and analytics is initialized
      if (cookiesStatus !== "accepted" || !analytics) {
        return;
      }

      logEvent(analytics, "page_view", {
        page_location: url,
        page_title: document?.title,
      });
    };

    router.events.on("routeChangeStart", handleRouteChange);

    return () => {
      router.events.off("routeChangeStart", handleRouteChange);
    };
  }, [cookiesStatus, router.events]);

  const sendAnalyticsRecord = (type: string, message: any) => {
    if (cookiesStatus === "accepted" && analytics) {
      logEvent(analytics, type, { message: message });
    }
  };

  return (
    <AnalyticsContext.Provider
      value={{
        analytics,
        cookiesStatus,
        setCookiesStatus,
        sendAnalyticsRecord,
      }}
    >
      {props.children}
    </AnalyticsContext.Provider>
  );
};

const useAnalytics = () => {
  const context = useContext(AnalyticsContext);
  if (context === undefined) {
    throw new Error("useAnalytics must be used within an AnalyticsProvider");
  }
  return context;
};

export { FirebaseTrackingProvider, useAnalytics };

Administrator for models

This will allow the addition or removal of new models and styles through a graphical interface, simplifying the process.

Getting Build Docker Error

I get a Docker build error involving the Firebase API key, and no matter what I try, I can't find a solution.

This is the log

83.32 Sentry Logger [warn]: No DSN provided, client will not do anything.
83.33 Sentry Logger [log]: Initializing SDK...
83.33 Sentry Logger [warn]: No DSN provided, client will not do anything.
83.33 Sentry Logger [log]: SDK successfully initialized
83.67 FirebaseError: Firebase: Error (auth/invalid-api-key).
83.67 at createErrorInternal (file:///app/node_modules/@firebase/auth/dist/node-esm/totp-e47c784e.js:490:40)
83.67 at _assert (file:///app/node_modules/@firebase/auth/dist/node-esm/totp-e47c784e.js:494:15)
83.67 at Component.instanceFactory (file:///app/node_modules/@firebase/auth/dist/node-esm/totp-e47c784e.js:6506:9)
83.67 at Provider.getOrInitializeService (file:///app/node_modules/@firebase/component/dist/esm/index.esm2017.js:290:39)
83.67 at Provider.initialize (file:///app/node_modules/@firebase/component/dist/esm/index.esm2017.js:234:31)
83.67 at initializeAuth (file:///app/node_modules/@firebase/auth/dist/node-esm/totp-e47c784e.js:2943:27)
83.67 at getAuth (file:///app/node_modules/@firebase/auth/dist/node-esm/totp-e47c784e.js:6567:18)
83.67 at /app/.next/server/chunks/774.js:877:68 {
83.67 code: 'auth/invalid-api-key',
83.67 customData: { appName: '[DEFAULT]' }
83.67 }
83.68
83.68 > Build error occurred
83.68 Error: Failed to collect page data for /imagine/upscaling
83.68 at /app/node_modules/next/dist/build/utils.js:955:15
83.68 at process.processTicksAndRejections (node:internal/process/task_queues:95:5) {
83.68 type: 'Error'
83.68 }
83.83 error Command failed with exit code 1.
83.83 info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.

failed to solve: process "/bin/sh -c yarn install && yarn build" did not complete successfully: exit code: 1

Thank you for your help!

integrate stable diffusion outpainting pipeline

Stable Diffusion outpainting is a technique that uses a latent diffusion model to extend an image beyond its original borders. This can be used to create large-scale detailed graphics or to extend existing images without limits.

To use Stable Diffusion outpainting, you first need to start with an image. You can use any image, but it is best to use an image that has a clear subject and a well-defined background. Once you have an image, you need to select the area that you want to extend. You can do this by using a brush tool or by drawing a rectangle around the area.

Once you have selected the area, you need to provide a text prompt. The text prompt can be anything you want, but it is best to be specific. For example, if you want to extend the image of a forest, you could provide a text prompt like "a lush green forest with a winding path."

The model will then generate a new image that extends the original image. The new image will be in the same style as the original image, and it will be coherent with the surrounding area.

change landing page description

From: Unleash your imagination and use the power of AI to create the most extraordinary scenarios you can envision.
To: Explore visual AI and easily add new models, extending Morpheus for your own project needs.

update the "generate" form behavior

Screenshot 2023-08-14 at 10 18 57 AM
  • When initially typing in the generate input box, the default text should disappear, allowing the user to type in their prompt without having to erase the example text.

  • “This field is required.” shouldn’t pop up until the user tries to click generate without anything in the textbox.

change features hover description

Screenshot 2023-08-14 at 10 15 44 AM
  • Text-to-image, hover description
    From: Text-to-image uses a diffusion model to translate written descriptions into images, like DALL-E or MidJourney. Results will vary widely depending on the words used in the prompt. If you want some help, click the magic wand icon–this will use a language model to “enhance” your prompt for you. In the settings, you’ll also find a default “negative prompt” that you can modify as needed.
    To: Text-to-image uses a diffusion model to translate written descriptions into images, like DALL-E or MidJourney.

  • Image-to-image, hover description
    From: Image-to-image applies the same class of models as Text-to-Image, but instead of text, you provide an image to serve as the starting point for the model. A common use-case would be providing a hand-drawn image and describing what your sketch should look like when it’s finished.
    To: Image-to-image applies the same class of models as Text-to-Image, but uses an image instead of text for the starting point. A common use-case would be giving the model a hand-drawn picture and a description to dictate the resulting image.

  • Pix2Pix, hover description
    From: Similar to Image-to-image, pix2pix starts with an image you provide, but the model has been trained to take editing instructions in the form of “X to Y”. Check out some examples here.
    To: Similar to Image-to-image, pix2pix starts with an image, but takes further editing instructions in the form of “X to Y”. See more details here.

  • ControlNet, hover description
    From: ControlNet provides precise control over image generation. By incorporating high-level instructions or control signals, it allows users to influence specific aspects of the generated images, such as pose, appearance, or object attributes. See more details here.
    To: ControlNet provides precise control over image generation. By incorporating high-level instructions (or control signals), it allows users to influence specific aspects of the generated images. These can be factors such as pose, appearance, or object attributes. See more details here.

  • InPainting, hover description
    From: Inpainting allows you to draw a “mask” over an image, which means the model will only change the section of the image covered by the mask. The model can remove unwanted elements, replace them with something else, or add new elements to an existing image. See more details here.
    To: InPainting allows you to draw a mask over the section of an image you would like changed. The model can remove unwanted elements, replace elements with other content, or add entirely new elements to the image. See more details here.

show the painting app in a modal

  • Open the modal after the user clicks the "paint an image" link
  • Mount the excalidraw painting app inside the modal
  • add a "done" button to close the modal and set the new image in the base image field

Migrate to a plugin-based architecture

This will allow easy addition, modification, or removal of functionalities within the Morpheus codebase. It will also allow external contributors to propose new features as plugins and expose them in the Morpheus plugin marketplace.
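A plugin-based design along these lines could look roughly like the sketch below. All names here (`PLUGINS`, `register_plugin`, `run_plugin`) are hypothetical and only illustrate the decorator-registry pattern, not an actual Morpheus API.

```python
# Hypothetical plugin registry sketch: features register themselves under a
# name so the core app can discover and invoke them without hard-coding them.

PLUGINS = {}

def register_plugin(name):
    """Register a callable feature under a name."""
    def decorator(func):
        PLUGINS[name] = func
        return func
    return decorator

@register_plugin("text2img")
def text2img(prompt):
    # Placeholder body; a real plugin would call a model pipeline.
    return f"generated image for: {prompt}"

def run_plugin(name, *args, **kwargs):
    if name not in PLUGINS:
        raise KeyError(f"unknown plugin: {name}")
    return PLUGINS[name](*args, **kwargs)
```

External contributions would then amount to shipping a module that calls `register_plugin` at import time.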

add CookiesConsent form

CookiesConsent.tsx

import { MouseEvent, useEffect } from "react";
import { useAnalytics } from "../../context/AnalyticsProvider";
import { deleteAllCookies } from "../../utils/cookies";
import useLocalStorage from "../../hooks/useLocalStorage";
import styles from "./CookiesConsent.module.scss";

const CookiesConsent = () => {
  const { cookiesStatus, setCookiesStatus } = useAnalytics();

  const [localCookies, setLocalCookies] = useLocalStorage("cp:cookies", null);
  const finalStatus = localCookies || cookiesStatus || "";

  // Sync a previously saved choice from localStorage into the analytics context.
  useEffect(() => {
    if (localCookies) setCookiesStatus(localCookies);
  }, [localCookies]);

  const handleActionCookies = (
    event: MouseEvent<HTMLButtonElement>,
    status: string
  ) => {
    event.preventDefault();

    // Clear any cookies that were already set when the user declines.
    if (status === "declined") {
      deleteAllCookies();
    }

    setCookiesStatus(status);
    setLocalCookies(status);
  };

  return finalStatus === "" ? (
    <div className={styles.cookiesContainer}>
      <div className={styles.textContainer}>
        <p className="heading white">
          This website uses cookies to ensure you get the best experience on our
          website.
          <a target="_blank" href="">
            Privacy Policy
          </a>
        </p>
      </div>

      <div className={styles.buttonsSection}>
        <button
          className="buttonSubmit"
          onClick={(event) => handleActionCookies(event, "declined")}
        >
          Decline
        </button>

        <button
          className="buttonSubmit"
          onClick={(event) => handleActionCookies(event, "accepted")}
        >
          Accept
        </button>
      </div>
    </div>
  ) : null;
};

export default CookiesConsent;

CookiesConsent.module.scss

@import "styles/main";

.cookiesContainer {
  position: fixed;
  bottom: 0;
  left: 0;
  width: 100vw;
  min-height: 50px;
  height: auto;
  z-index: 999;
  display: flex;
  align-items: center;
  justify-content: center;
  background-color: rgba(0, 0, 0, 0.9);
  padding: 20px;

  @include phone {
    flex-direction: column;
    justify-content: flex-start;
  }

  .textContainer {
    width: 70%;
    height: 100%;
    display: flex;
    align-items: center;

    a {
      display: block;
      text-decoration: underline;
    }

    @include phone {
      width: 100%;
      flex-direction: column;
      align-items: flex-start;
    }
  }

  .buttonsSection {
    width: auto;
    margin-left: 24px;
    display: flex;
    justify-content: space-between;
    min-width: 400px;
    padding-right: 24px;

    @include phone {
      width: 100%;
      flex-direction: column;
      margin: 24px 0 0 0;
    }

    button {
      // The first button gets extra spacing from its sibling.
      &:first-child {
        margin-right: 24px !important;
      }

      @include phone {
        width: 100%;
        margin: 8px !important;
      }
    }
  }
}

/utils/cookies.ts

const deleteAllCookies = () => {
  const cookies = document.cookie.split(";");

  for (let i = 0; i < cookies.length; i++) {
    // Entries after the first are prefixed with a space, so trim them.
    const cookie = cookies[i].trim();
    const eqPos = cookie.indexOf("=");
    const name = eqPos > -1 ? cookie.slice(0, eqPos) : cookie;
    // Expire the cookie immediately; include path=/ so root-scoped cookies are cleared.
    document.cookie = name + "=;expires=Thu, 01 Jan 1970 00:00:00 GMT;path=/";
  }
};

export { deleteAllCookies };

Integrate ray.io as the model serving engine

Ray is a framework for scaling AI and Python applications. This integration will unify how models are served locally and in production, and improve the serving and scaling of models within the system.

App cannot be mounted or built at all

With so many recent changes, the README appears to be out of date: following the current quick start guide for setting up the app, it is not possible to get it running at all.

The automated release is failing 🚨

🚨 The automated release from the afreydev/semantic-release-impl branch failed. 🚨

I recommend you give this issue a high priority, so other packages depending on you can benefit from your bug fixes and new features again.

You can find below the list of errors reported by semantic-release. Each one of them has to be resolved in order to automatically publish your package. I’m sure you can fix this 💪.

Errors are usually caused by a misconfiguration or an authentication problem. With each error reported below you will find explanation and guidance to help you to resolve it.

Once all the errors are resolved, semantic-release will release your package the next time you push a commit to the afreydev/semantic-release-impl branch. You can also manually restart the failed CI job that runs semantic-release.

If you are not sure how to resolve this, here are some links that can help you:

If those don’t help, or if this issue is reporting something you think isn’t right, you can always ask the humans behind semantic-release.


Missing name property in package.json.

The package.json's name property is required in order to publish a package to the npm registry.

Please make sure to add a valid name for your package in your package.json.


Good luck with your project ✨

Your semantic-release bot 📦🚀

Add functionality to share prompt config as a link

  • add an icon to share the config as a link in the artwork actions
  • build a link with the changed prompt parameters (those that differ from the defaults) as query params
  • allow the user to copy the generated link
  • rebuild the shared configuration when the link is opened
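The link-building step might be sketched like this. The base URL, parameter names, and default values below are placeholders, not Morpheus's real schema.

```python
# Sketch: share only the prompt parameters that differ from the defaults
# as query params, then rebuild the full config when the link is opened.
# DEFAULTS and the parameter names are illustrative assumptions.
from urllib.parse import urlencode, urlsplit, parse_qs

DEFAULTS = {"steps": 50, "guidance_scale": 7.5, "sampler": "ddim"}

def build_share_link(base_url, config):
    # Only parameters that differ from the defaults go into the query string.
    changed = {k: v for k, v in config.items() if DEFAULTS.get(k) != v}
    return f"{base_url}?{urlencode(changed)}" if changed else base_url

def parse_share_link(url):
    # Rebuild the full configuration by overlaying query params on the defaults.
    # Note: values parsed from a URL come back as strings.
    query = parse_qs(urlsplit(url).query)
    config = dict(DEFAULTS)
    for key, values in query.items():
        config[key] = values[0]
    return config
```

Keeping defaults out of the link keeps it short and lets the defaults evolve without breaking old links.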

integrate stable diffusion video generation pipeline

Stable Diffusion text-to-video is a technique that uses a latent diffusion model to generate videos from text prompts. It can be used to create videos that depict a specific scene or event, or videos that are more abstract or artistic. The model is trained on a massive dataset of videos, from which it learns to generate realistic and coherent footage.

To use it, you first provide a text prompt. The prompt can be anything you want, but it is best to be specific. For example, to generate a video of a car driving down a road, you could write "a car driving down a winding road in the countryside."

The model will then generate a video that matches the prompt, in the same style as the videos it was trained on.
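A small piece of the request plumbing can be sketched as follows. The helper and its default fps are illustrative assumptions; the commented call only shows the rough shape of a diffusers text-to-video integration, using one public example checkpoint.

```python
# Sketch: derive the frame schedule for a text-to-video request. The helper
# name and the default fps are assumptions, not Morpheus code.

def frame_count(duration_seconds, fps=8):
    """Number of frames the model must generate for a clip."""
    if duration_seconds <= 0:
        raise ValueError("duration must be positive")
    return max(1, round(duration_seconds * fps))

# With diffusers (not shown here), the request would look roughly like:
#   pipe = DiffusionPipeline.from_pretrained("damo-vilab/text-to-video-ms-1.7b")
#   frames = pipe("a car driving down a winding road in the countryside",
#                 num_frames=frame_count(3)).frames
```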
