h8r-dev / stacks
Heighliner stacks to speed up app dev.
License: Apache License 2.0
Official documentation on using the KUBECONFIG environment variable: Setting the KUBECONFIG environment variable
Local environment:
export KUBECONFIG=$KUBECONFIG:$HOME/.kube/work/ysz-dev
export KUBECONFIG=$KUBECONFIG:$HOME/.kube/ni9ht/k3s-sh
Problem description:
When the local KUBECONFIG environment variable contains multiple paths, running dagger do up ./plans fails while reading the kubeconfig file, with the following error:
[✗] client.commands.kubeconfig 0.0s
10:59AM FTL failed to execute plan: task failed: client.commands.kubeconfig: exit status 1
Suggested fix:
From the code, this environment variable is only used to read the kubeconfig file contents. Consider introducing a dedicated environment variable that lets the user specify the kubeconfig file of the cluster the application should be deployed to.
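A minimal shell sketch of the idea. The variable name HLN_KUBECONFIG is a hypothetical example, not an existing Heighliner option; the snippet also shows why cat fails on a multi-path KUBECONFIG and how a single entry could be extracted:

```shell
# Hypothetical dedicated variable (name is an assumption, not an
# existing Heighliner option) pointing at exactly one kubeconfig file:
export HLN_KUBECONFIG="$HOME/.kube/work/ysz-dev"

# KUBECONFIG may hold several colon-separated paths; `cat` cannot treat
# that list as a single file, which is why the task exits with status 1.
KUBECONFIG="$HOME/.kube/work/ysz-dev:$HOME/.kube/ni9ht/k3s-sh"

# One possible fallback: strip everything after the first colon.
first_path="${KUBECONFIG%%:*}"
echo "$first_path"   # prints the first entry only
```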
Currently the gin-vue stack has no CI to verify it on pull requests. We need to set up CI for it. Here are two approaches we can try:
There is an Action that can run Dagger inside GitHub Actions: https://docs.dagger.io/1201/ci-environment
We just need to add a verify step for it.
We can use GitHub Actions to trigger tests that actually run on our self-hosted runner: https://docs.github.com/en/actions/using-github-hosted-runners/about-github-hosted-runners
This way we can use our own specific version of the binary, environment, etc.
Currently, Infra components are installed together with the Stack, which can cause problems like #204. We plan to split the installation of Infra components from the current Stack: the Infra installation process is responsible for installing Infra components, and the Stack only generates the application framework source code and the Deploy repository.
Split the Stack into an Infra Stack and an Application Stack. The Infra Stack is used exclusively for installing Infra components and making the necessary configuration, dynamically installing the required Infra components based on the incoming parameters. When executing hln up, the Infra Stack is executed first, followed by the Application Stack (e.g. gin-next). We will support these Infra components first:
Application Stack
Make Infra components multiplexable between different applications in the same cluster. @92hackers @yuyicai @lyzhang1999
When hln up is executed, if an Infra component is not installed, it is installed automatically; if it is already installed, it is skipped and the Application Stack installation proceeds.
When a new application is created in the same cluster, the Infra components remain intact and only application-related content is created, solving the problem of #204.
Enhance hln init: install infra first; then hln up will run the Application Stack.
When running a stack, it downloads a big Docker image, which blocks for a long time and leads to a bad user experience.
Set Alpine as the base image; every consumer step is responsible for installing its own dependencies.
Since stack steps run concurrently, the total run time may be reduced.
A benchmark test is needed to verify this.
Target metric: first-time stack running time.
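A rough benchmark harness for that measurement could be sketched as follows. This is only a sketch: the real dagger command is commented out and replaced by a stand-in workload so the script runs anywhere; a true first-run measurement also assumes a clean image cache:

```shell
# Time the first stack run end to end (assumes no cached images for a
# genuine "first time" number).
start=$(date +%s)

# dagger do up ./plans   # the real command under test (commented out here)
sleep 1                  # stand-in workload so this sketch is runnable

end=$(date +%s)
echo "stack run took $((end - start))s"
```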
Currently, each stack is written as a Dagger Plan
dagger.#Plan & {
	client: {
		filesystem: {
			"...": read: contents: dagger.#FS
			"...": write: contents: actions.up.outputYaml.output
		}
		commands: kubeconfig: {
			name: "cat"
			args: ["\(env.KUBECONFIG)"]
			stdout: dagger.#Secret
		}
		env: {
			KUBECONFIG: string
			...
		}
	}
	actions: {
		up: {
			outputYaml: output: string
			installIngress: {...}
			installNocalhost: {
				ingressIP: installIngress.output.IP
			}
			...
		}
	}
}
The problem with this design is that it is hard to compose Stacks. For example, I want to compose two Stacks, one is deploying serverless apps and another is installing middleware software like nginx, into one stack. Currently there is no way to put two Dagger plans into one Dagger plan.
To make Stacks composable, we need to redesign a Stack to be a module. We can run a Stack module using hln, and a Stack can import other Stacks as modules.
Here is what a Stack module would look like:
input: {
	env: {
		KUBECONFIG: string
	}
	commands: {
		kubeconfig: {
			name: "cat"
			args: ["\(env.KUBECONFIG)"]
			stdout: h8r.#Secret
		}
	}
	files: {
		"...": read: contents: h8r.#FS
		"...": write: contents: actions.up.outputYaml.output
	}
	// These are the fields that we will read directly from `app.yaml`
	config: {
		image: string
		deploy: {
			cmd: [...string]
			port: int
		}
	}
}
output: {
	// These are the fields that we will write to `.hln/output.yaml`
	local: {
		ingressIP: string
		ingressPort: int
		ingressHost: string
		ingressURL: string
	}
}
up: {
	installIngress: {
		name: input.config.name
	}
	installNocalhost: {
		ingressIP: installIngress.IP
	}
}
Basically, a stack is a module that has input and output, and does a bunch of stuff under the hood.
When we run hln up for a stack, it basically renders it into a Dagger Plan. The input and output will be rendered into client sections, and up will be rendered into actions sections.
A special case is that we can keep input.config as is and fill it with fields from app.yaml.
Let's say you have two stacks with the above format, called serverlessapp and middleware. You can compose them in the following way:
import (
	"serverlessapp"
	"middleware"
)
input: {
	serverlessapp.input
	middleware.input
}
output: {
	url: up.installServerlessApp.output.url
}
up: {
	installMiddleware: middleware.up
	installServerlessApp: {
		wait: installMiddleware.output.ready
		up: serverlessapp.up
		output: {
			url: up.output.url
		}
	}
}
When we run hln up for this plan, only the above plan will be rendered into a Dagger plan. Both serverlessapp and middleware will be served as modules.
Heighliner manages many resources (only GitHub repos for now) with Terraform, which generates a state file to record resource creation status.
Currently, Heighliner saves the state file as a Secret in the K8S cluster, which leads to the following result: as a developer, to run a stack we have to provide a K8S cluster to Heighliner. But in situations such as creating GitHub repos there is no need for a K8S cluster, yet Heighliner still requires one.
output.yaml: done.
Prepare for the 4/12 release.
As a community developer, I want to customize a new Stack with the Heighliner tools. What should I do?
Currently, all stacks are created by the Heighliner team, and their source code lives in the h8r-dev/stacks GitHub repo.
What developers want: they want to own the created stack repo. Heighliner Stacks is just an engine to them; they create custom chain components in their own stack repo, import the h8r-dev/stacks library, and then build their own stack.
All chain components used by a stack should be imported directly in the plan.cue file, and the customization part of a stack should reside alongside the plan.cue file and be imported in the plan file.
plans/cuelib/scm/github/repo.cue, line 95: echo may add a \n to the file; we can replace it with printf.
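The difference is easy to demonstrate: echo appends a trailing newline, while printf writes the string exactly as given:

```shell
# Write the same 5-character string with both commands.
printf 'token' > /tmp/via_printf
echo 'token'   > /tmp/via_echo

# printf wrote 5 bytes; echo wrote 6 (the extra byte is '\n').
wc -c < /tmp/via_printf
wc -c < /tmp/via_echo
```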
Currently, every stack contains a copy of the code/ dir, which holds a backend project named go-gin and a helm project named helm.
If some file content needs to be updated, we have to modify all of these copies.
Keep just one copy of the code/ directory and move it outside the Stack dir. When deploying a stack, copy that code/ dir into the stack temporarily.
Currently the domain h8r.site is hardcoded. We should let the user specify it. If it is specified, we don't need to change /etc/hosts in that case.
We use Terraform in our stacks, and Terraform generates a state file; we must save this state file for Terraform to work properly.
Currently, we save the Terraform state file in a K8S Secret, which has a size limit of 1 MB and will be replaced by a more powerful storage backend in the future.
Terraform supports etcd, Consul, and other storage backends for the state file; the full list can be reached at:
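For instance, a Consul backend would be configured roughly like this (a sketch; the address and path are placeholder values, not our actual infrastructure):

```hcl
terraform {
  backend "consul" {
    address = "consul.example.com:8500"      # placeholder endpoint
    scheme  = "https"
    path    = "heighliner/terraform/state"   # key under which state is stored
  }
}
```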
We currently must specify a Helm repo to deploy our apps with a stack. However, the Helm repo is not within the scope of my business; can you hide it from me? There should be no need to write a helm repo section in the stack plan.
Regard a Helm repo as a best practice when deploying apps, and generate a helm repo automatically when executing a stack.
Currently, every application project repo created by Heighliner includes a GitHub Actions workflow file called docker-publish.yml, which is used to build a Docker image and push it to the GitHub container registry.
In a real application-level project, a bunch of work needs to be done before building the final Docker image, such as lint, static checks, and tests.
So docker-publish.yml must be able to depend on the success of other custom workflow runs.
GitHub Actions provides the workflow_run event to handle dependencies between multiple workflows; see https://docs.github.com/en/actions/using-workflows/events-that-trigger-workflows#workflow_run for more details.
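A sketch of what that could look like in docker-publish.yml. The workflow name "CI" is an assumption for whatever workflow runs the pre-checks:

```yaml
name: docker-publish
on:
  workflow_run:
    workflows: ["CI"]        # the lint / static check / test workflow
    types: [completed]
jobs:
  publish:
    # Only build and push when the prerequisite workflow succeeded.
    if: ${{ github.event.workflow_run.conclusion == 'success' }}
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      # ... docker build & push steps go here ...
```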
Run parameters:
GithubID: Yni9ht
GITHUB_ORG: Yni9ht
ORGANIZATION: Yni9ht
APP_NAME: book-store
Problem description:
After deploying the gin-vue application via dagger do up -p ./plans, the images built from the application's backend and frontend repos are pushed as ghcr.io/yni9ht/book-store:main, but the image address in the application's Deployment is ghcr.io/Yni9ht/book-store:main, so the image cannot be pulled.
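Image names on ghcr.io must be lowercase, so one fix is to normalize the org/user before templating it into the Deployment. A shell sketch (the variable name mirrors the run parameters above; where exactly the normalization belongs in the stack is an assumption):

```shell
GITHUB_ORG="Yni9ht"

# ghcr.io rejects uppercase owners, so lowercase the name before use.
image_owner=$(printf '%s' "$GITHUB_ORG" | tr '[:upper:]' '[:lower:]')

echo "ghcr.io/${image_owner}/book-store:main"   # ghcr.io/yni9ht/book-store:main
```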
The current directory layout is confusing. We need to reorganize it in the following ways:
@92hackers asked me if we can put the cuelib directory into stacks. I rethought the possibilities; here is a solution.
Many mistakes or wrong configs lead to failed Stack runs, and currently there is no way to find out what happened when a user ran a stack, yet this is very important to help us improve Stacks.
As the Stack maintainer team, we should introduce a way to help us find out why a user's Stack run failed. We could collect the user's environment info and the error stack or error info, and send it to us.
go bug is a good example to learn from: the user decides whether to report or not.
Currently, we use https://registry.npmmirror.com as the yarn registry in stacks, which is very slow when running stacks outside of China.
In China, set the yarn registry to https://registry.npmmirror.com.
Outside of China, do not set the yarn registry; let it keep its default value.
Currently, stacks do not validate user-input parameters, and if some parameter is not satisfied, the Stack throws meaningless errors.
Before running a stack, validate user inputs and output more user-friendly error messages.
In the CI workflow, the Docker image tag only uses sha values, not main.
code
This causes a problem.
When Argo CD first installs the application, the image tag it pulls is main, but this tag does not actually exist. Argo CD has to wait for the next sync to trigger a change to the Deployment's image tag.
This means more time waiting for the image to be pulled.
cc @92hackers
All of our usage cases create new applications with Heighliner Stacks. What about developers' existing applications?
du -hs code/go-gin
136M code/go-gin
It's hard to sync such a big chunk of code into a newly initialized GitHub repo.
Create a Nocalhost domain for each DevSpace, so developers can access the DevSpace directly without manual port-forwarding.
Currently, the stack creates repos with Terraform one by one. This can be optimized into a single terraform apply action that creates all the repos.
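With the Terraform GitHub provider this could be sketched as a single for_each resource, so one terraform apply creates every repo (the variable and repo names here are placeholders, not the actual stack code):

```hcl
variable "repos" {
  type    = set(string)
  default = ["book-store-backend", "book-store-frontend", "book-store-deploy"]
}

resource "github_repository" "repo" {
  for_each   = var.repos
  name       = each.value
  visibility = "private"
}
```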