
atlas's People

Contributors

a8m, crossworth, datdao, datdaotest, dimagreat, dorav, edigaryev, elad-n, ericyd, giautm, gneyal, haaawk, haoxins, hilakashai, jacobwgillespie, masseelch, noamcattan, ofpiyush, omarhachach, phuurl, ronenlu, rotemtam, sashamelentyev, shani-a, solomonme, svstanev, tmc, vladdoster, yonidavidson, zeevmoney

atlas's Issues

*postgres.UserDefinedType on Postgres

Hello. First, I know this project is in its early stages, but I think it is great and I see a lot of potential in it. Great work.

While trying the atlas schema inspect command on a few databases I ran into the *postgres.ArrayType problem, which the master version fixed, but now I seem to have a problem with hstore.

The error output is:
Error: failed converting schema to spec: failed to convert column type *postgres.UserDefinedType to spec

Listing the user-defined types, I only found entries related to hstore (screenshot attached).

Is there a way (or a plan) to allow custom types (like hstore or PostGIS)?
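For context, a minimal SQL sketch of a schema that hits this error (the table and column names here are made up): inspecting a table with an hstore column is what produces the *postgres.UserDefinedType failure.

-- Enable hstore and create a table that uses it; inspecting this schema
-- currently fails when converting the hstore column to spec.
CREATE EXTENSION IF NOT EXISTS hstore;

CREATE TABLE products (
    id    serial PRIMARY KEY,
    attrs hstore
);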

Roundtrip should not propose changes

I ran atlas schema inspect -d "<dsn>" > atlas.hcl, immediately followed by atlas schema apply -d "<dsn>" -f atlas.hcl and the tool wanted to run some changes on my DB.

Example of the generated SQL commands:

-- Modify "jobs" table
ALTER TABLE `mipo`.`jobs` DROP INDEX `info`
-- Modify "jobs" table
ALTER TABLE `mipo`.`jobs` ADD INDEX `info` (`info`)

MySQL 5.7

Support for partial indexes?

We have a use case (using Ent) where we have two optional edges which are mutually exclusive, yet need unique indexes. We are currently creating these indexes using custom SQL migrations with https://github.com/rubenv/sql-migrate, but we'd like to move to Atlas.

Example:

func (TravelPinnedSegment) Fields() []ent.Field {
	return []ent.Field{
		...
		field.String("from_country_id").MaxLen(3).NotEmpty().Comment("from country edge field").Optional().Nillable(),
		field.String("to_country_id").MaxLen(3).NotEmpty().Comment("to country edge field").Optional().Nillable(),
		field.String("from_region_id").MaxLen(6).NotEmpty().Comment("to country edge field").Optional().Nillable(),
		field.String("to_region_id").MaxLen(6).NotEmpty().Comment("to country edge field").Optional().Nillable(),
	}
}

Indexes that we want to handle in Atlas:

Indexes:
    "travel_pinned_segments_country_edges" UNIQUE, btree (from_country_id, to_country_id, (from_region_id IS NULL AND to_region_id IS NULL)) WHERE to_region_id IS NULL AND from_region_id IS NULL
    "travel_pinned_segments_region_edges" UNIQUE, btree (from_region_id, to_region_id, (from_country_id IS NULL AND to_country_id IS NULL)) WHERE to_country_id IS NULL AND from_country_id IS NULL

proposal: support workspaces/environments

Atlas Workspaces (initial version)

Hey all! We just started to work on the initial version of Atlas workspaces, and I wanted to summarize it here and hear your thoughts before we make major progress on it.

Motivation

There are a few limitations today in Atlas that we want to address:

  • Support maintaining an "Atlas schema" on multiple files
  • Allow applying the same "Atlas schema" to multiple environments, e.g. a multi-tenant deployment or separate dev/prod environments.
  • Support input variables

Proposal

Make it possible to group one or more HCL files as an "Atlas schema" and allow interacting with them as if they were a single entity. An "Atlas schema" represents one or more database schemas (named databases) that may be logically connected through foreign keys and can be managed with the same connection (established with the provided url option).

Note: In the future, we would like to add support to manage multiple databases in the same project. This is the first stage to get there, and we'll publish a proper design for it.

An "Atlas schema" that spans on one or more files and uses the new format can be used as a "schema template" (~similar to terraform modules). That is, it accepts input variables and allow to use them in the various resources. A rough example:

variable "tenant_name" {
  type = string
}

schema "tenant" {
  name = var.tenant_name
}

table "users" {
  schema = schema.tenant
  ...
}

view "t" {
  schema = schema.tenant
  as     = file("t.sql")
}

We also introduce a new file named atlas.hcl that holds the global configuration of the project, e.g. your credentials to GCP or AWS, and a set of blocks describing the environments (or workspaces) the different schemas are applied to. The structure is not final, but here's a taste of what we thought about.

1. Allow working with the same schema on multiple environments

aws {
  credentials = "..."
}

env "dev" {
  url     = "<dev-url>"
  dev     = "<ariga-url>" 
  schema  = "./dir"
}

env "prod" {
  url    = "<prod-url>"
  schema = "./dir"
}

Then, the interaction with the database will look as follows:

atlas schema inspect dev
atlas migrate apply prod

2. Accept input variables in atlas.hcl

variable "tenant" {
  type = string
}

variable "user" {
  type    = string
  default = "root"
}

variable "password" {
  type = string
}

env "project1" "prod" {
  url     = "mysql://root:${var.password}..."
  schema  = "./dir1"
  
  // Inject input variables.
  tenant_name = var.tenant
}

env "project2" "prod" {
  url     = "mysql://root:${var.password}..."
  schema  = "./dir2"
  
  // Inject input variables.
  tenant_name = var.tenant
}

Then, the interaction with the database will look as follows:

atlas migrate apply project1 prod -var="tenant=..." -var="..." 

There are many similarities with the Terraform ecosystem, so I expect we'll reuse some of its infrastructure.

Support schema inspect for Mysql table with column "Timestamp/Datetime default on update"

We have a column of DATETIME type that has a default and an ON UPDATE definition. For example:

CREATE TABLE `foo` (
  `id`          INT,
  `update_time` DATETIME(6) NOT NULL DEFAULT CURRENT_TIMESTAMP(6) ON UPDATE CURRENT_TIMESTAMP(6),
  PRIMARY KEY (`id`)
)

When I try to inspect the schema, I get the following error:

atlas schema inspect -d "mysql://root:secret@tcp(localhost:12334)/my_database"
Error: mysql: unknown attribute "on update current_timestamp(6)"

Mysql version: 5.7.35

Atlas version:

atlas CLI version v0.3.1
https://github.com/ariga/atlas/releases/tag/v0.3.1

Unsupported MySQL 8 Virtual Generated Columns

In a MySQL 8 database schema with a table that contains the following:

CREATE DATABASE example;

CREATE TABLE example.test (
  id int unsigned NOT NULL AUTO_INCREMENT PRIMARY KEY,
  test_a varchar(64) NOT NULL,
  test_b varchar(64) NOT NULL,
  test_c varchar(131) GENERATED ALWAYS AS (concat(test_a, ' - ', test_b)) VIRTUAL
) ENGINE=InnoDB;

On running schema inspect

atlas schema inspect -d "mysql://root:root@tcp(localhost:3306)/example" > example.hcl

I receive the following error:

Error: mysql: unknown attribute "VIRTUAL GENERATED"

Support all needed sqlspec elements

This is in the context of completing the sqlspec extraction out of schemaspec and replacing sql/schema/schemaspec with schema/schemaspec.

All of the SQL-specific structs are moving and need to be rewritten in terms of the new schemaspec package so we can deprecate the old one. The task is to complete the missing structs (PrimaryKey, ForeignKey, Index). Notice that Overrides are currently not supported and can be skipped. (We will add support for it separately).

Also some constants related to constraints have been dropped for now.

I have an idea of how to introduce a mechanism in the extension code to automatically load a schemaspec.Ref into a ColumnRef or TableRef, but it's not ready. For now, we'll just live with Ref instances and have a function in sqlspec to read them where needed.

TODO

Support for PostGIS SQL Types

Hello, and thanks for putting together a great tool!

Is there support planned for PostGIS SQL Types for Postgres?

A significant portion of PostgreSQL users use PostGIS, including us. And I think this tool can add a lot of value given its simplicity and ability to read existing schemas.

Postgres schema selection

Hi,
Is there a way to define which schema I want it to inspect/migrate?
From here it seems it is hardcoded to only use the public schema. The project I'm working on uses 3 different schemas, and public is not used at all. Running inspect on my database gives me an empty result.

Confusing mixture of CLI --schema flag and DSN

Hey folks.

I came across this behavior (which IMO is confusing) while trying to create a new Schema (DB) using the Atlas CLI.

Given that my Realm (MariaDB instance) already contains other schemas, and the new schema HCL is:

table "table" {
 schema = schema.example
 column "column_1" { type = text }
 column "column_2" { type = text  }
}
schema "example" {}

while doing:

โฏ atlas schema apply -d 'mysql://root:root@tcp(localhost:3306)/example' -f example.hcl
Error: mysql: query system variables: Error 1049: Unknown database 'example'

We receive an error from MySQL. This makes sense, since the first thing Atlas does is try to connect to the realm using the DSN; the example schema doesn't exist, so we get an error.

So I went and tried the following (applying the change at the realm level):

โฏ atlas schema apply -d 'mysql://root:root@tcp(localhost:3306)/' -f example.hcl
-- Planned Changes:
-- Drop schema named "some_db_1"
DROP DATABASE `some_db_1`
-- Drop schema named "some_db_2"
DROP DATABASE `some_db_2`
-- Add new schema named "example"
CREATE DATABASE `example`
-- Create "table" table
CREATE TABLE `example`.`table` (`column_1` text NOT NULL, `column_2` text NOT NULL)

This is clearly not what I intended as a user, yet it makes sense, since Atlas was instructed to apply at the realm level and my example.hcl contains nothing regarding some_db_*.
This is also potentially dangerous, because users can make mistakes like this.

In order to successfully apply and create, I decided to work at the realm level and scope the operation using the --schema flag.

โฏ atlas schema apply -d 'mysql://root:root@tcp(localhost:3306)/' -s example -f example.hcl
-- Planned Changes:
-- Add new schema named "example"
CREATE DATABASE `example`
-- Create "table" table
CREATE TABLE `example`.`table` (`column_1` text NOT NULL, `column_2` text NOT NULL)

Bazinga! This is what I, as a user, intended to do.

Some thoughts:

  1. I'm accustomed to other tools like the mysql/psql CLIs, which usually work like mysql -hlocalhost -uroot -dDBNAME.
  2. IMO the schema should not be part of the DSN. As a user, I will probably always use a DSN without a schema name and explicitly pass the schema flag when scoping is needed; it feels safer and more explicit.
  3. But if the schema is part of the DSN, then the CLI shouldn't allow mixing it with the schema flag.

# allow these:
atlas schema apply -d 'mysql://root:root@tcp(localhost:3306)/db1'             # one schema
atlas schema apply -d 'mysql://root:root@tcp(localhost:3306)/' -s db1,db2,db3 # multiple schemas
atlas schema apply -d 'mysql://root:root@tcp(localhost:3306)/'                # entire realm
# but not this:
atlas schema apply -d 'mysql://root:root@tcp(localhost:3306)/db1' -s db2,db3  # which can also throw an error if db1 does not exist, even if db2/db3 do.

  4. Maybe add a --i-acknowledge-im-doing-stuff-on-the-entire-realm flag when no schema is provided, and/or change the confirmation order:

? Are you sure?:
▸ Abort  # make Abort the default, or even switch to Terraform style where an explicit "apply" input is required
  Apply

With all that said, user errors will still happen. :) Yet the CLI could be a bit more protective, with less implicit behavior.

Opinions?

Capture database extensions (e.g. pgcrypto) with `inspect`

Type

Bug

System

OS: Mac OSX 11.6.3
DB: Postgres 12.8

Details

atlas schema inspect does not capture database extensions. These are necessary for accurate replication of the database state. (Note: I'm not sure if non-Postgres DBs call these features "extensions" but hopefully the example below makes it clear)

Example: pgcrypto is a database extension which exposes various cryptographic functions. Importantly, it is required for any columns that use a UUID as the default value with the gen_random_uuid() function.

In standard SQL, a user would create the extension like this:

create extension if not exists "pgcrypto";

Encapsulated steps to reproduce

# create database and log in
createdb atlas_test -U postgres
psql atlas_test -U postgres

# create extension, then create DB with a default column value of a random UUID
atlas_test=# create extension if not exists "pgcrypto";
atlas_test=# create table test_uuid (id text default gen_random_uuid(), name text);

# insert a record and query it to verify the UUID gen worked
atlas_test=# insert into test_uuid (name) values ('test');
atlas_test=# select * from test_uuid;
atlas_test=# exit

# inspect schema and save to file
atlas schema inspect -d "postgres://postgres@localhost/atlas_test?sslmode=disable" > atlas_test.hcl

# drop DB so the schema can be applied with Atlas
dropdb atlas_test -U postgres
createdb atlas_test -U postgres

atlas schema apply -d "postgres://postgres@localhost/atlas_test?sslmode=disable" -f atlas_test.hcl

Error: create "test_uuid" table: pq: function gen_random_uuid() does not exist

The required default function does not exist because the extension was not saved to the schema and therefore not re-applied

Add auto increment with identity option for Postgres

After scouring issues and the docs, there does not seem to be an option for auto-incrementing PK identity columns. Take for example the following table:

CREATE TABLE todos
(
    id          BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    description VARCHAR NOT NULL DEFAULT '',
    completed   BOOL    NOT NULL DEFAULT FALSE,
    user_id     INTEGER NOT NULL
);

Once applied, running atlas schema inspect -d "postgres://some:user@someserver/somedatabase" returns the following HCL:

table "todos" {
  schema = schema.public
  column "id" {
    null = false
    type = bigint
  }
  column "description" {
    null    = false
    type    = character_varying
    default = ""
  }
  column "completed" {
    null    = false
    type    = boolean
    default = false
  }
  column "user_id" {
    null = true
    type = integer
  }
  primary_key {
    columns = [table.todos.column.id, ]
  }
}
schema "public" {
}

Dropping the table and re-applying the schema via an atlas schema apply re-creates the table, but without an auto-incrementing ID. Looking at the generated DDL:

-- auto-generated definition via DataGrip
create table todos
(
    id          bigint                                not null
        primary key,
    description varchar default ''::character varying not null,
    completed   boolean default false                 not null,
    user_id     integer
);

Inserts now have to be written in the form:

INSERT INTO todos (id, description, completed, user_id)
VALUES ((SELECT MAX(id) + 1 FROM todos), 'add auto-increment for postgres', false, 1)
-- Or find the last value in a manually defined sequence for the PK column

There seems to be an auto-increment option for MySQL; are there any plans for bringing that over to Postgres?

Multiple definition files

Hello, Awesome work with atlas!

I was wondering if you guys could consider the possibility of splitting the schema into multiple .atlas.hcl files and using a glob-like syntax in the CLI, or having a master file from which we can import other files.

This would help organise the tables and definitions in a neater way.

Add option to connect to database without SSL (fix "SSL is not enabled on the server" error)

Type

Bug

System

OS: Mac OSX 11.6.3
DB: Postgres 12.8

Details

Installation went smoothly, but when I tried to inspect my existing local DB schema, I got an error

> atlas schema inspect --dsn "postgres://eric@localhost:5432/my_db"
Error: postgres: scanning system variables: pq: SSL is not enabled on the server

A quick search led me to this SO answer.

It appears that Atlas is attempting to connect with the raw DSN, without specifying an SSL mode (SSL seems to be enabled by default):

db, err := sql.Open("postgres", url)

Perhaps this is necessary in some cases but for local DB inspection, probably not. The tests appear to get around this by explicitly disabling SSL:

db, err := sql.Open("postgres", fmt.Sprintf("host=localhost port=%d user=postgres dbname=test password=pass sslmode=disable", port))

I verified this fix by running Atlas with the ?sslmode=disable query param added to my DB URL, and it worked:

> atlas schema inspect --dsn "postgres://eric@localhost:5432/my_db?sslmode=disable"

🎉 prints a schema to terminal

I am not well versed enough in Go to propose a specific solution, but I think the most flexible option would be to add a flag for the SSL mode (and perhaps default it to disabled for inspection, since I would imagine inspecting a local DB is the most common initial step).

Reasons for using HCL?

Quite curious about the potential use cases for Atlas...

Is one of the reasons for the choice of HCL to do with being able to diff many HCL's and produce new one?

When HCL is used with Terraform, you need a central place for HCL that holds the existing state of the system.

Normally a developer would create a migration script as they work.
Alternatively, they can also diff two databases later to produce a migration script.

I ask this because I am facing this issue across many databases at the moment. The databases are NOT controlled by a DBA or a dev, but have new data schemas being developed at runtime by users, so using HCL provides a way to determine the difference between the actual schema of the database and the intended one.

Also, JSON Schema is used by the user to describe what they want the DB to model. These JSON Schemas are then translated at runtime into DB migration scripts. So I could instead create an HCL script, compare it against the actual DB, and find the diff of what needs to be applied at runtime. Very useful.

gosec showing various errors.

I don't know if this is checked somewhere else because there is no Makefile, but I found the following.

go install github.com/golangci/golangci-lint/cmd/golangci-lint@latest

golangci-lint run -E gosec -E sqlclosecheck ./... --skip-dirs-use-default
sql/internal/sqlx/twin.go:9:2: G501: Blocklisted import crypto/md5: weak cryptographic primitive (gosec)
        "crypto/md5"
        ^
sql/internal/sqlx/twin.go:113:64: G401: Use of weak cryptographic primitive (gosec)
        return fmt.Sprintf("%s_%x", twin[:t.MaxNameLen-1-md5.Size*2], md5.Sum([]byte(twin)))
                                                                      ^
sql/sqlite/migrate.go:415:19: Close should use defer (sqlclosecheck)
                err = rows.Close()
                                ^
cmd/action/fmt_test.go:144:9: G306: Expect WriteFile permissions to be 0600 or less (gosec)
                err = os.WriteFile(file, []byte(contents), 0644)
                      ^
sql/migrate/migrate.go:263:9: G306: Expect WriteFile permissions to be 0600 or less (gosec)
        return os.WriteFile(filepath.Join(d.dir, name), b, 0644)
               ^
cmd/action/internal/update/update.go:149:9: G306: Expect WriteFile permissions to be 0600 or less (gosec)
        return ioutil.WriteFile(fileLocation(path), b, 0644)
               ^
make: *** [lint] Error 1
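For the G306 findings, the change gosec expects is simply to write files with 0600 permissions; a minimal sketch (file name and contents are placeholders, not from the Atlas codebase):

package main

import "os"

func main() {
	// G306: WriteFile permissions should be 0600 or less, i.e. owner read/write only.
	if err := os.WriteFile("example.txt", []byte("contents"), 0600); err != nil {
		panic(err)
	}
}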

Windows installation issues

Hi,
First of all, great project. Excited to see where it goes.

My issues with the Windows installation:

  1. Running curl -LO https://release.ariga.io/atlas/atlas-windows-amd64-v0.3.0.exe on Windows 10 and higher does not work because curl is an alias to PowerShell's Invoke-WebRequest cmdlet, and it doesn't have the -L flag (a possible workaround is sketched below).
  2. Downloading https://release.ariga.io/atlas/atlas-windows-amd64-v0.3.0.exe directly causes multiple Defender warnings. Attached screenshot of the last warning.

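A possible workaround for the first issue (an assumption on my side, not from the docs) is to bypass the PowerShell alias and call a real download tool explicitly:

# curl.exe is the real curl binary bundled with recent Windows 10 builds and supports -L
curl.exe -LO https://release.ariga.io/atlas/atlas-windows-amd64-v0.3.0.exe

# or use the native cmdlet directly
Invoke-WebRequest -Uri https://release.ariga.io/atlas/atlas-windows-amd64-v0.3.0.exe -OutFile atlas.exe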

Thanks,

web super slow to spawn, no auto-retry, no progress bar?

version:

atlas CLI version v0.3.1
https://github.com/ariga/atlas/releases/tag/v0.3.1

Because the database is in RDS, it usually takes about 5s to execute any command from Indonesia (my PC) to the US (RDS):

atlas schema inspect --dsn "postgres://ro_user:[email protected]:5432/x?sslmode=disable"  -w
Atlas UI available at: http://127.0.0.1:5800/projects/25769803777/schemas/1
Press Ctrl+C to stop
error checking schema sync status: context deadline exceeded
error checking schema sync status: postgres: querying "foo_items" indexes: context deadline exceeded

There are only about 50 tables there.

It took a while (maybe 5 minutes?) before the web UI spawned, and after it spawned it kept loading for about a minute, then it showed:

details: SyntaxError: JSON.parse: unexpected character at line 1 column 1 of the JSON data

Fails to run on Ubuntu 18.04

I'm trying to run Atlas on Ubuntu 18.04 and I'm getting this error after following the Linux setup instructions:

$ curl -LO https://release.ariga.io/atlas/atlas-linux-amd64-v0.3.1
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 55.0M  100 55.0M    0     0  51.4M      0  0:00:01  0:00:01 --:--:-- 51.4M
$ sudo install -o root -g root -m 0755 ./atlas-linux-amd64-v0.3.1 /usr/local/bin/atlas
$ atlas
atlas: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.28' not found (required by atlas)
$ which -a atlas
/usr/local/bin/atlas
$ 

System version info:

lsb_release -a
No LSB modules are available.
Distributor ID:	Ubuntu
Description:	Ubuntu 18.04.6 LTS
Release:	18.04
Codename:	bionic

My glibc:

$ apt-cache policy libc6
libc6:
  Installed: 2.27-3ubuntu1.4
  Candidate: 2.27-3ubuntu1.4
  Version table:
 *** 2.27-3ubuntu1.4 500
        500 http://us-west-1.ec2.archive.ubuntu.com/ubuntu bionic-updates/main amd64 Packages
        100 /var/lib/dpkg/status
     2.27-3ubuntu1.2 500
        500 http://security.ubuntu.com/ubuntu bionic-security/main amd64 Packages
     2.27-3ubuntu1 500
        500 http://us-west-1.ec2.archive.ubuntu.com/ubuntu bionic/main amd64 Packages

-f didn't work

the docs on "Management UI" section said:

atlas schema apply -d "mysql://root:pass@tcp(localhost:3306)/example" -f atlas.hcl -w

but when executed:

Error: unknown shorthand flag: 'f' in -f

version:

atlas CLI version v0.3.1
https://github.com/ariga/atlas/releases/tag/v0.3.1

Support for Inverted Indexes

I don't see a way to create something like an inverted index in the docs; perhaps the DDL could be extended with an optional index type?

Support for partitioned table

Hey team,

I was wondering if you have "Table Partitioning" on your radar: https://www.postgresql.org/docs/14/ddl-partitioning.html
It's currently blocking me a bit and I would love it ☺️

Following the Postgres examples:

BY RANGE

table "measurement" {
  schema = schema.public
  column "city_id" {
    type = uuid
    null = false
  }
  column "logdate" {
    type    = timestamp_without_time_zone(6)
    default = sql("now()")
    null    = false
  }
  
  partition {
    by {
      range = column.logdate
    }
  }
}

=> 

CREATE TABLE measurement (
    city_id         int not null,
    logdate         date not null
) PARTITION BY RANGE (logdate);

BY hash

table "measurement" {
  [...]
  partition {
    by {
      hash = column.logdate
    }
  }
}

BY list

table "measurement" {
  [...]
  partition {
    by {
      list = column.logdate
    }
  }
}

Alternatives

table "measurement" {
  [...]
  partition {
    by_range = column.logdate
  }
}

# OR

table "measurement" {
  [...]
  partition_by_range = column.logdate
}

Thanks

Make all Extensions Resources

Here's a question I'm struggling with. Currently we have the "base" Resource struct:

Context

type Resource struct {
	Name     string
	Type     string
	Attrs    []*Attr
	Children []*Resource
}

and the Extension interface, which I suggest in #144 will become:

type Extension interface {
	ext()
}

The Resource struct has two relevant methods:

// Scan reads the Extension into the Resource. Scan will override the Resource
// name or type if they are set for the extension.
func (r *Resource) Scan(ext Extension) error {
  // ...
}
// As reads the attributes and children resources of the resource into the target Extension.
func (r *Resource) As(target Extension) error {
  // ...
}

The problem

The way we are designing the DDL is that a user might describe not-known-beforehand properties on a resource (for instance, annotations for another extension), and we would like not to lose this information.
For instance, suppose we had an extension:

type User struct {
	Id     string `spec:",name"` // field type assumed; the original snippet omitted it
	Active bool   `spec:"active"`
}

And an HCL document that creates one:

user "rotemtam" {
  active = true
  hobby "guitar" {
    started_at_year = 1998
  }
}

When parsing the HCL into a schemaspec.Resource we retain the hobby information in a child Resource, but when we read it into a User struct that info will be lost.

Solution

The direction I'm thinking about is to reverse things and make all Extensions embed a Resource that will be used to store extra child resources and attributes so that they are not lost. Something like:

type Extension interface {
  // Extra returns a *Resource representing any extra children and attributes.
  Extra() *Resource
}

type DefaultExtension struct {
  extra Resource
}

// Extra implements the Extension interface.
func (d DefaultExtension) Extra() *Resource {
  return &d.extra
}

And implementations would be:

// Column holds a specification for a column in an SQL table.
type Column struct {
	Name     string                   `spec:",name"`
	Null     bool                     `spec:"null" override:"null"`
	TypeName string                   `spec:"type" override:"type"`
	Default  *schemaspec.LiteralValue `spec:"default" override:"default"`
	schemaspec.DefaultExtension
}

That way, As and Scan still receive the Extension interface but can push/pull the extra attributes somewhere so that they aren't lost.

Postgres Reverse change missing comma

I noticed, as part of using the new Ent versioned migrations, that all of the down scripts with column modifications were not adding commas correctly.

Up generates correctly e.g.

ALTER TABLE "subscriptions" ALTER COLUMN "created_at" SET DEFAULT CURRENT_TIMESTAMP, ALTER COLUMN "updated_at" SET DEFAULT CURRENT_TIMESTAMP;

The down script has missing commas between the column changes:

ALTER TABLE "subscriptions" ALTER COLUMN "created_at" DROP DEFAULT ALTER COLUMN "updated_at" DROP DEFAULT;

The version set does not quite work

I tried to build from the source, but could not get the version working.

how to reproduce

  • untar https://github.com/ariga/atlas/archive/refs/tags/v0.3.1.tar.gz
  • run go build -ldflags="ariga.io/atlas/cmd/action.version=0.3.1" ./cmd/atlas
  • got -development when running atlas version

build log as below

$ go build -ldflags="-s -w -X ariga.io/atlas/cmd/action.version=0.3.1" ./cmd/atlas
$

$ ./atlas version
atlas CLI version - development
https://github.com/ariga/atlas/releases/tag/latest

apply error

I installed with go get ariga.io/atlas/cmd/atlas, tried a simple modification, and got this error:

Error: modify "users" table: Error 1064: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'CHARSET utf8 COLLATE utf8_general_ci' at line 1
ALTER TABLE `test`.`users` ADD COLUMN `password` varchar(100) NOT NULL DEFAULT "" COMMENT "11" CHARSET utf8 COLLATE utf8_general_ci

Support for Covering Indexes

The docs seem to imply that indexes can include columns, but I don't see a way to use covering indexes, which we use a lot.

Maybe the DDL for indexes could be extended with a second column list "storing" or something of the sort?

Atlas looks great.
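For context, a sketch of what such a covering index looks like in plain SQL (the table and column names here are made up); PostgreSQL expresses the second column list with an INCLUDE clause, which is roughly what "storing" would map to:

-- Index on user_id that additionally stores status and total, so queries
-- filtering on user_id can be answered from the index alone.
CREATE INDEX orders_user_id_covering ON orders (user_id) INCLUDE (status, total);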

Can I specify the index type?

Hello,
I'm trying to use Atlas with PostgreSQL, but I couldn't figure out how to specify a GIN index in HCL.

For example, inspecting with the following DDL,

CREATE TABLE public.t_article (
    id integer NOT NULL,
    tags text[],
    document text,
    author text,
    dt timestamp without time zone
);
CREATE INDEX idx_tags ON public.t_article USING gin (tags);

The following HCL is output:

table "t_article" {
  schema = schema.public
  column "id" {
    null = false
    type = integer
  }
  column "tags" {
    null = true
    type = sql("text[]")
  }
  column "document" {
    null = true
    type = text
  }
  column "author" {
    null = true
    type = text
  }
  column "dt" {
    null = true
    type = timestamp_without_time_zone(6)
  }
  primary_key {
    columns = [table.t_article.column.id]
  }
  index "idx_tags" {
    columns = [table.t_article.column.tags]
  }
}

However, if I apply it again, the index is changed to a btree index.

$ atlas schema apply --dsn "postgres://..." -f schema.hcl --dry-run
-- Planned Changes:
-- Drop index "idx_tags" from table: "t_article"
DROP INDEX "public"."idx_tags"
-- Create index "idx_tags" to table: "t_article"
CREATE INDEX "idx_tags" ON "public"."t_article" ("tags")

Is it possible to specify the index type in hcl?

Default value migration

Great project.

The default value of a string-like field can be the empty string "" or "null", so why check it like this?

// ValidString reports if the given string is not null and valid.
func ValidString(s sql.NullString) bool {
	return s.Valid && s.String != "" && strings.ToLower(s.String) != "null"
}

I'm not an expert on this; any special considerations?

Edit:
After testing with my own project, and as @a8m mentioned in a comment, the function is not the problem.

Generate SQL for schema alterations

How can we get the generated SQL for each schema change?

I want to use Atlas for merging/diffing different schemas, but not for applying the changes directly to the DB. I want the generated migration SQL.
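One way to get this today, sketched under the assumption that the --dry-run flag used in the "Can I specify the index type?" issue above is available: run apply with it and the planned SQL is printed instead of executed (the DSN below is a placeholder):

# Print the planned SQL without applying it.
atlas schema apply --dsn "mysql://root:pass@tcp(localhost:3306)/example" -f schema.hcl --dry-run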

Support for PostgreSQL 9

Hey all!

Is there any chance to add support for older versions of PostgreSQL (like 9.x)?

I'm willing to send a PR if the possibility exists :)

And keep up the good work!

Support for permissions

Hey,
(it's me again)

A cool feature that is more or less tied to the schema is user permissions.
Right now, we can create tables and such, but we still need to run the grants manually, which is a bit error-prone.

Minimal API that could fit most DBs:

user "api" {
  password = "blabla"
}

table "users" {
  schema = schema.public
  column "id" {
    type = int
  }

  permission {
    user = user.api
    {SELECT | INSERT | UPDATE | DELETE | TRUNCATE } = boolean
  }
  permission {
    user = "my_user_created_outside_atlas"
    INSERT = false
    UPDATE = false
    DELETE = false
    TRUNCATE = false
  }
}

Produces

CREATE USER api WITH PASSWORD 'blabla';
GRANT SELECT, INSERT, UPDATE, DELETE, TRUNCATE ON TABLE public.users TO "api";
GRANT SELECT ON TABLE public.users TO "my_user_created_outside_atlas";

There are plenty more options, but this could be a great and somewhat simple start.
Best regards,

Add support to ArrayType on Postgres

My schema contained a table with a field of type ARRAY:
special_features "ARRAY" or text[]

When I ran the inspect command:
atlas.exe schema inspect -d "postgres://postgres:******" >> film.hcl

I got the following error:
Error: failed converting schema to spec: failed to convert column type *postgres.ArrayType to spec.

Just to double-check, I cloned my DB, removed the ARRAY field, ran the inspect command, and it worked.
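A minimal sketch of a table that reproduces this (the special_features column name is taken from the report; the rest of the definition is an assumption):

CREATE TABLE film (
    film_id          serial PRIMARY KEY,
    special_features text[]   -- inspected as *postgres.ArrayType
);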

Add "auto approve" option to `schema apply` command

Type

Feature request

Background

My interest in Atlas is to streamline our CI flow. We currently prepare a database in our CI environment and run migrations sequentially before our tests run. This takes about 25-30 seconds whereas atlas schema apply takes a fraction of a second. However, for this to work in a CI environment, there needs to be a command line option to "auto-approve" the schema application.

Details

  • Add a flag to the schema apply command that approves the schema application without prompting for user input (see the sketch below)
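To illustrate the request, a sketch of how such a CI step might look; the --auto-approve flag name here is hypothetical, it is simply what this issue asks for:

# Hypothetical: apply the schema in CI without an interactive prompt.
atlas schema apply -d "mysql://root:pass@tcp(localhost:3306)/ci_db" -f schema.hcl --auto-approve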

Make the library/cmd separate module

Currently, when atlas is installed as a library, it brings a whole bunch of packages with it (cobra -> viper -> the rest of the internet) that are unnecessary for library usage.

It'd be nice if the library could be kept as a separate module so that the number of dependencies is kept to a minimum.

Atlas CLI to give insights before applying changes to FK

I've followed the tutorial here https://blog.ariga.io/meet-atlas-cli/

Then I went and modified the column type under the manager_id field (which is a foreign key) to:

  column "manager_id" {
    null = false
    type = "string"
  }

and ran: atlas schema apply -d "mysql://root:pass@tcp(localhost:3306)/example" -f atlas.hcl

I got: ALTER TABLE example.users MODIFY COLUMN manager_id varchar(255) NOT NULL;

The CLI allows applying this change, but it shouldn't, since on applying it I got this:

Error: alter table: Error 3780: Referencing column 'manager_id' and referenced column 'id' in foreign key constraint 'manager_fk' are incompatible.

Is it possible to get a denial/feedback before applying, if this can be derived beforehand?

can not create default NULL with timestamp for mysql

MySQL needs this SQL for the timestamp type:

CREATE TABLE `test` (
  `id` int(11) NOT NULL,
  `update_time` timestamp NULL DEFAULT NULL
) 

Without DEFAULT NULL, it will not work.

Atlas cannot handle this:

table "test" {
	schema  = schema.test
	column "id" {
		type = int
		auto_increment = true
		null = false
	}
	column "update_time" {
		type = timestamp
		default = null
		null = true
	}
	primary_key {
		columns = [column.id]
	}
}
schema "test" {
	charset = "utf8mb4"
	collate = "utf8mb4_unicode_ci"
}

error:

Error: schemahcl: failed decoding: schemahcl: unsupported type "cty.DynamicPseudoType"

Support schema apply for Mysql table with column "Timestamp/Datetime default with on update"

The schema apply command doesn't seem to support datetime/timestamp columns that have a default and an ON UPDATE clause with a SQL expression.

For example, with the following schema hcl:

schema "foo" {}

table "bar" {
  schema    = schema.foo
  column "id" {
    null = false
    type = int
  }
  column "create_time" {
    null    = false
    type    = datetime
    default = "CURRENT_TIMESTAMP(6)"
  }
  column "update_time" {
    null    = false
    type    = datetime
    default = "CURRENT_TIMESTAMP(6)"
    on_update = "CURRENT_TIMESTAMP(6)"
  }
  primary_key {
    columns = [table.bar.column.id, ]
  }
}
โฏ atlas schema apply -d "mysql://root:secret@tcp(localhost:55237)/foo" -f ./foo.hcl
-- Planned Changes:
-- Create "bar" table
CREATE TABLE `foo`.`bar` (`id` int NOT NULL, `create_time` datetime NOT NULL DEFAULT "CURRENT_TIMESTAMP(6)", `update_time` datetime NOT NULL DEFAULT "CURRENT_TIMESTAMP(6)", PRIMARY KEY (`id`))
✔ Apply
Error: create "bar" table: Error 1067: Invalid default value for 'create_time'

There are two problems:

  1. CURRENT_TIMESTAMP(6) shouldn't be quoted as a string.
  2. The OnUpdate part shouldn't be omitted.

If this is actually supported, what's the correct way to define it in HCL? The documentation doesn't seem to mention this use case.

Mysql version: 5.7.35

Atlas version:

atlas CLI version v0.3.1
https://github.com/ariga/atlas/releases/tag/v0.3.1

Panic when inspecting pg

Hi team,

Saw the post on HN and wanted to try the tool.
After reading the docs (it could be cool to have a simple brew install), I got to this step:

atlas schema inspect -d "postgres://postgres:<pwd>@localhost:26432/postgres?sslmode=disable" -w

But the process crashes with the error panic: inconsistent map element types

Stacktrace:
panic: inconsistent map element types (cty.Object(map[string]cty.Type{"__ref":cty.String, "column":cty.Map(cty.Object(map[string]cty.Type{"__ref":cty.String, "default":cty.CapsuleWithOps("lit", reflect.TypeOf(schemaspec.LiteralValue{V:""}), &cty.CapsuleOps{GoString:(func(interface {}) string)(nil), TypeGoString:(func(reflect.Type) string)(nil), Equals:(func(interface {}, interface {}) cty.Value)(nil), RawEquals:(func(interface {}, interface {}) bool)(nil), ConversionFrom:(func(cty.Type) func(interface {}, cty.Path) (cty.Value, error))(0x44b6620), ConversionTo:(func(cty.Type) func(cty.Value, cty.Path) (interface {}, error))(nil), ExtensionData:(func(interface {}) interface {})(nil)}), "null":cty.CapsuleWithOps("lit", reflect.TypeOf(schemaspec.LiteralValue{V:""}), &cty.CapsuleOps{GoString:(func(interface {}) string)(nil), TypeGoString:(func(reflect.Type) string)(nil), Equals:(func(interface {}, interface {}) cty.Value)(nil), RawEquals:(func(interface {}, interface {}) bool)(nil), ConversionFrom:(func(cty.Type) func(interface {}, cty.Path) (cty.Value, error))(0x44b6620), ConversionTo:(func(cty.Type) func(cty.Value, cty.Path) (interface {}, error))(nil), ExtensionData:(func(interface {}) interface {})(nil)}), "type":cty.CapsuleWithOps("lit", reflect.TypeOf(schemaspec.LiteralValue{V:""}), &cty.CapsuleOps{GoString:(func(interface {}) string)(nil), TypeGoString:(func(reflect.Type) string)(nil), Equals:(func(interface {}, interface {}) cty.Value)(nil), RawEquals:(func(interface {}, interface {}) bool)(nil), ConversionFrom:(func(cty.Type) func(interface {}, cty.Path) (cty.Value, error))(0x44b6620), ConversionTo:(func(cty.Type) func(cty.Value, cty.Path) (interface {}, error))(nil), ExtensionData:(func(interface {}) interface {})(nil)})})), "foreign_key":cty.Map(cty.Object(map[string]cty.Type{"__ref":cty.String, "columns":cty.CapsuleWithOps("lit", reflect.TypeOf(schemaspec.LiteralValue{V:""}), &cty.CapsuleOps{GoString:(func(interface {}) string)(nil), TypeGoString:(func(reflect.Type) string)(nil), Equals:(func(interface {}, interface {}) cty.Value)(nil), RawEquals:(func(interface {}, interface {}) bool)(nil), ConversionFrom:(func(cty.Type) func(interface {}, cty.Path) (cty.Value, error))(0x44b6620), ConversionTo:(func(cty.Type) func(cty.Value, cty.Path) (interface {}, error))(nil), ExtensionData:(func(interface {}) interface {})(nil)}), "on_delete":cty.CapsuleWithOps("lit", reflect.TypeOf(schemaspec.LiteralValue{V:""}), &cty.CapsuleOps{GoString:(func(interface {}) string)(nil), TypeGoString:(func(reflect.Type) string)(nil), Equals:(func(interface {}, interface {}) cty.Value)(nil), RawEquals:(func(interface {}, interface {}) bool)(nil), ConversionFrom:(func(cty.Type) func(interface {}, cty.Path) (cty.Value, error))(0x44b6620), ConversionTo:(func(cty.Type) func(cty.Value, cty.Path) (interface {}, error))(nil), ExtensionData:(func(interface {}) interface {})(nil)}), "on_update":cty.CapsuleWithOps("lit", reflect.TypeOf(schemaspec.LiteralValue{V:""}), &cty.CapsuleOps{GoString:(func(interface {}) string)(nil), TypeGoString:(func(reflect.Type) string)(nil), Equals:(func(interface {}, interface {}) cty.Value)(nil), RawEquals:(func(interface {}, interface {}) bool)(nil), ConversionFrom:(func(cty.Type) func(interface {}, cty.Path) (cty.Value, error))(0x44b6620), ConversionTo:(func(cty.Type) func(cty.Value, cty.Path) (interface {}, error))(nil), ExtensionData:(func(interface {}) interface {})(nil)}), "ref_columns":cty.CapsuleWithOps("lit", reflect.TypeOf(schemaspec.LiteralValue{V:""}), 
&cty.CapsuleOps{GoString:(func(interface {}) string)(nil), TypeGoString:(func(reflect.Type) string)(nil), Equals:(func(interface {}, interface {}) cty.Value)(nil), RawEquals:(func(interface {}, interface {}) bool)(nil), ConversionFrom:(func(cty.Type) func(interface {}, cty.Path) (cty.Value, error))(0x44b6620), ConversionTo:(func(cty.Type) func(cty.Value, cty.Path) (interface {}, error))(nil), ExtensionData:(func(interface {}) interface {})(nil)})})), "index":cty.Map(cty.Object(map[string]cty.Type{"__ref":cty.String, "columns":cty.CapsuleWithOps("lit", reflect.TypeOf(schemaspec.LiteralValue{V:""}), &cty.CapsuleOps{GoString:(func(interface {}) string)(nil), TypeGoString:(func(reflect.Type) string)(nil), Equals:(func(interface {}, interface {}) cty.Value)(nil), RawEquals:(func(interface {}, interface {}) bool)(nil), ConversionFrom:(func(cty.Type) func(interface {}, cty.Path) (cty.Value, error))(0x44b6620), ConversionTo:(func(cty.Type) func(cty.Value, cty.Path) (interface {}, error))(nil), ExtensionData:(func(interface {}) interface {})(nil)}), "on":cty.Map(cty.Object(map[string]cty.Type{"__ref":cty.String, "column":cty.CapsuleWithOps("lit", reflect.TypeOf(schemaspec.LiteralValue{V:""}), &cty.CapsuleOps{GoString:(func(interface {}) string)(nil), TypeGoString:(func(reflect.Type) string)(nil), Equals:(func(interface {}, interface {}) cty.Value)(nil), RawEquals:(func(interface {}, interface {}) bool)(nil), ConversionFrom:(func(cty.Type) func(interface {}, cty.Path) (cty.Value, error))(0x44b6620), ConversionTo:(func(cty.Type) func(cty.Value, cty.Path) (interface {}, error))(nil), ExtensionData:(func(interface {}) interface {})(nil)}), "desc":cty.CapsuleWithOps("lit", reflect.TypeOf(schemaspec.LiteralValue{V:""}), &cty.CapsuleOps{GoString:(func(interface {}) string)(nil), TypeGoString:(func(reflect.Type) string)(nil), Equals:(func(interface {}, interface {}) cty.Value)(nil), RawEquals:(func(interface {}, interface {}) bool)(nil), ConversionFrom:(func(cty.Type) func(interface {}, cty.Path) (cty.Value, error))(0x44b6620), ConversionTo:(func(cty.Type) func(cty.Value, cty.Path) (interface {}, error))(nil), ExtensionData:(func(interface {}) interface {})(nil)}), "expr":cty.CapsuleWithOps("lit", reflect.TypeOf(schemaspec.LiteralValue{V:""}), &cty.CapsuleOps{GoString:(func(interface {}) string)(nil), TypeGoString:(func(reflect.Type) string)(nil), Equals:(func(interface {}, interface {}) cty.Value)(nil), RawEquals:(func(interface {}, interface {}) bool)(nil), ConversionFrom:(func(cty.Type) func(interface {}, cty.Path) (cty.Value, error))(0x44b6620), ConversionTo:(func(cty.Type) func(cty.Value, cty.Path) (interface {}, error))(nil), ExtensionData:(func(interface {}) interface {})(nil)})})), "unique":cty.CapsuleWithOps("lit", reflect.TypeOf(schemaspec.LiteralValue{V:""}), &cty.CapsuleOps{GoString:(func(interface {}) string)(nil), TypeGoString:(func(reflect.Type) string)(nil), Equals:(func(interface {}, interface {}) cty.Value)(nil), RawEquals:(func(interface {}, interface {}) bool)(nil), ConversionFrom:(func(cty.Type) func(interface {}, cty.Path) (cty.Value, error))(0x44b6620), ConversionTo:(func(cty.Type) func(cty.Value, cty.Path) (interface {}, error))(nil), ExtensionData:(func(interface {}) interface {})(nil)})})), "primary_key":cty.Map(cty.Object(map[string]cty.Type{"__ref":cty.String, "columns":cty.CapsuleWithOps("lit", reflect.TypeOf(schemaspec.LiteralValue{V:""}), &cty.CapsuleOps{GoString:(func(interface {}) string)(nil), TypeGoString:(func(reflect.Type) string)(nil), Equals:(func(interface {}, 
interface {}) cty.Value)(nil), RawEquals:(func(interface {}, interface {}) bool)(nil), ConversionFrom:(func(cty.Type) func(interface {}, cty.Path) (cty.Value, error))(0x44b6620), ConversionTo:(func(cty.Type) func(cty.Value, cty.Path) (interface {}, error))(nil), ExtensionData:(func(interface {}) interface {})(nil)})})), "schema":cty.CapsuleWithOps("lit", reflect.TypeOf(schemaspec.LiteralValue{V:""}), &cty.CapsuleOps{GoString:(func(interface {}) string)(nil), TypeGoString:(func(reflect.Type) string)(nil), Equals:(func(interface {}, interface {}) cty.Value)(nil), RawEquals:(func(interface {}, interface {}) bool)(nil), ConversionFrom:(func(cty.Type) func(interface {}, cty.Path) (cty.Value, error))(0x44b6620), ConversionTo:(func(cty.Type) func(cty.Value, cty.Path) (interface {}, error))(nil), ExtensionData:(func(interface {}) interface {})(nil)})}) then cty.Object(map[string]cty.Type{"__ref":cty.String, "column":cty.Map(cty.Object(map[string]cty.Type{"__ref":cty.String, "default":cty.CapsuleWithOps("lit", reflect.TypeOf(schemaspec.LiteralValue{V:""}), &cty.CapsuleOps{GoString:(func(interface {}) string)(nil), TypeGoString:(func(reflect.Type) string)(nil), Equals:(func(interface {}, interface {}) cty.Value)(nil), RawEquals:(func(interface {}, interface {}) bool)(nil), ConversionFrom:(func(cty.Type) func(interface {}, cty.Path) (cty.Value, error))(0x44b6620), ConversionTo:(func(cty.Type) func(cty.Value, cty.Path) (interface {}, error))(nil), ExtensionData:(func(interface {}) interface {})(nil)}), "null":cty.CapsuleWithOps("lit", reflect.TypeOf(schemaspec.LiteralValue{V:""}), &cty.CapsuleOps{GoString:(func(interface {}) string)(nil), TypeGoString:(func(reflect.Type) string)(nil), Equals:(func(interface {}, interface {}) cty.Value)(nil), RawEquals:(func(interface {}, interface {}) bool)(nil), ConversionFrom:(func(cty.Type) func(interface {}, cty.Path) (cty.Value, error))(0x44b6620), ConversionTo:(func(cty.Type) func(cty.Value, cty.Path) (interface {}, error))(nil), ExtensionData:(func(interface {}) interface {})(nil)}), "type":cty.CapsuleWithOps("lit", reflect.TypeOf(schemaspec.LiteralValue{V:""}), &cty.CapsuleOps{GoString:(func(interface {}) string)(nil), TypeGoString:(func(reflect.Type) string)(nil), Equals:(func(interface {}, interface {}) cty.Value)(nil), RawEquals:(func(interface {}, interface {}) bool)(nil), ConversionFrom:(func(cty.Type) func(interface {}, cty.Path) (cty.Value, error))(0x44b6620), ConversionTo:(func(cty.Type) func(cty.Value, cty.Path) (interface {}, error))(nil), ExtensionData:(func(interface {}) interface {})(nil)})})), "foreign_key":cty.Map(cty.Object(map[string]cty.Type{"__ref":cty.String, "columns":cty.CapsuleWithOps("lit", reflect.TypeOf(schemaspec.LiteralValue{V:""}), &cty.CapsuleOps{GoString:(func(interface {}) string)(nil), TypeGoString:(func(reflect.Type) string)(nil), Equals:(func(interface {}, interface {}) cty.Value)(nil), RawEquals:(func(interface {}, interface {}) bool)(nil), ConversionFrom:(func(cty.Type) func(interface {}, cty.Path) (cty.Value, error))(0x44b6620), ConversionTo:(func(cty.Type) func(cty.Value, cty.Path) (interface {}, error))(nil), ExtensionData:(func(interface {}) interface {})(nil)}), "on_delete":cty.CapsuleWithOps("lit", reflect.TypeOf(schemaspec.LiteralValue{V:""}), &cty.CapsuleOps{GoString:(func(interface {}) string)(nil), TypeGoString:(func(reflect.Type) string)(nil), Equals:(func(interface {}, interface {}) cty.Value)(nil), RawEquals:(func(interface {}, interface {}) bool)(nil), ConversionFrom:(func(cty.Type) func(interface {}, 
cty.Path) (cty.Value, error))(0x44b6620), ConversionTo:(func(cty.Type) func(cty.Value, cty.Path) (interface {}, error))(nil), ExtensionData:(func(interface {}) interface {})(nil)}), "on_update":cty.CapsuleWithOps("lit", reflect.TypeOf(schemaspec.LiteralValue{V:""}), &cty.CapsuleOps{GoString:(func(interface {}) string)(nil), TypeGoString:(func(reflect.Type) string)(nil), Equals:(func(interface {}, interface {}) cty.Value)(nil), RawEquals:(func(interface {}, interface {}) bool)(nil), ConversionFrom:(func(cty.Type) func(interface {}, cty.Path) (cty.Value, error))(0x44b6620), ConversionTo:(func(cty.Type) func(cty.Value, cty.Path) (interface {}, error))(nil), ExtensionData:(func(interface {}) interface {})(nil)}), "ref_columns":cty.CapsuleWithOps("lit", reflect.TypeOf(schemaspec.LiteralValue{V:""}), &cty.CapsuleOps{GoString:(func(interface {}) string)(nil), TypeGoString:(func(reflect.Type) string)(nil), Equals:(func(interface {}, interface {}) cty.Value)(nil), RawEquals:(func(interface {}, interface {}) bool)(nil), ConversionFrom:(func(cty.Type) func(interface {}, cty.Path) (cty.Value, error))(0x44b6620), ConversionTo:(func(cty.Type) func(cty.Value, cty.Path) (interface {}, error))(nil), ExtensionData:(func(interface {}) interface {})(nil)})})), "index":cty.Map(cty.Object(map[string]cty.Type{"__ref":cty.String, "columns":cty.CapsuleWithOps("lit", reflect.TypeOf(schemaspec.LiteralValue{V:""}), &cty.CapsuleOps{GoString:(func(interface {}) string)(nil), TypeGoString:(func(reflect.Type) string)(nil), Equals:(func(interface {}, interface {}) cty.Value)(nil), RawEquals:(func(interface {}, interface {}) bool)(nil), ConversionFrom:(func(cty.Type) func(interface {}, cty.Path) (cty.Value, error))(0x44b6620), ConversionTo:(func(cty.Type) func(cty.Value, cty.Path) (interface {}, error))(nil), ExtensionData:(func(interface {}) interface {})(nil)}), "on":cty.Object(map[string]cty.Type{"__ref":cty.String, "column":cty.CapsuleWithOps("lit", reflect.TypeOf(schemaspec.LiteralValue{V:""}), &cty.CapsuleOps{GoString:(func(interface {}) string)(nil), TypeGoString:(func(reflect.Type) string)(nil), Equals:(func(interface {}, interface {}) cty.Value)(nil), RawEquals:(func(interface {}, interface {}) bool)(nil), ConversionFrom:(func(cty.Type) func(interface {}, cty.Path) (cty.Value, error))(0x44b6620), ConversionTo:(func(cty.Type) func(cty.Value, cty.Path) (interface {}, error))(nil), ExtensionData:(func(interface {}) interface {})(nil)}), "desc":cty.CapsuleWithOps("lit", reflect.TypeOf(schemaspec.LiteralValue{V:""}), &cty.CapsuleOps{GoString:(func(interface {}) string)(nil), TypeGoString:(func(reflect.Type) string)(nil), Equals:(func(interface {}, interface {}) cty.Value)(nil), RawEquals:(func(interface {}, interface {}) bool)(nil), ConversionFrom:(func(cty.Type) func(interface {}, cty.Path) (cty.Value, error))(0x44b6620), ConversionTo:(func(cty.Type) func(cty.Value, cty.Path) (interface {}, error))(nil), ExtensionData:(func(interface {}) interface {})(nil)}), "expr":cty.CapsuleWithOps("lit", reflect.TypeOf(schemaspec.LiteralValue{V:""}), &cty.CapsuleOps{GoString:(func(interface {}) string)(nil), TypeGoString:(func(reflect.Type) string)(nil), Equals:(func(interface {}, interface {}) cty.Value)(nil), RawEquals:(func(interface {}, interface {}) bool)(nil), ConversionFrom:(func(cty.Type) func(interface {}, cty.Path) (cty.Value, error))(0x44b6620), ConversionTo:(func(cty.Type) func(cty.Value, cty.Path) (interface {}, error))(nil), ExtensionData:(func(interface {}) interface {})(nil)})}), "unique":cty.CapsuleWithOps("lit", 
reflect.TypeOf(schemaspec.LiteralValue{V:""}), &cty.CapsuleOps{GoString:(func(interface {}) string)(nil), TypeGoString:(func(reflect.Type) string)(nil), Equals:(func(interface {}, interface {}) cty.Value)(nil), RawEquals:(func(interface {}, interface {}) bool)(nil), ConversionFrom:(func(cty.Type) func(interface {}, cty.Path) (cty.Value, error))(0x44b6620), ConversionTo:(func(cty.Type) func(cty.Value, cty.Path) (interface {}, error))(nil), ExtensionData:(func(interface {}) interface {})(nil)})})), "primary_key":cty.Map(cty.Object(map[string]cty.Type{"__ref":cty.String, "columns":cty.CapsuleWithOps("lit", reflect.TypeOf(schemaspec.LiteralValue{V:""}), &cty.CapsuleOps{GoString:(func(interface {}) string)(nil), TypeGoString:(func(reflect.Type) string)(nil), Equals:(func(interface {}, interface {}) cty.Value)(nil), RawEquals:(func(interface {}, interface {}) bool)(nil), ConversionFrom:(func(cty.Type) func(interface {}, cty.Path) (cty.Value, error))(0x44b6620), ConversionTo:(func(cty.Type) func(cty.Value, cty.Path) (interface {}, error))(nil), ExtensionData:(func(interface {}) interface {})(nil)})})), "schema":cty.CapsuleWithOps("lit", reflect.TypeOf(schemaspec.LiteralValue{V:""}), &cty.CapsuleOps{GoString:(func(interface {}) string)(nil), TypeGoString:(func(reflect.Type) string)(nil), Equals:(func(interface {}, interface {}) cty.Value)(nil), RawEquals:(func(interface {}, interface {}) bool)(nil), ConversionFrom:(func(cty.Type) func(interface {}, cty.Path) (cty.Value, error))(0x44b6620), ConversionTo:(func(cty.Type) func(cty.Value, cty.Path) (interface {}, error))(nil), ExtensionData:(func(interface {}) interface {})(nil)})}))

goroutine 1 [running]:
github.com/zclconf/go-cty/cty.MapVal(0xc000398ec8)
	/Users/runner/go/pkg/mod/github.com/zclconf/[email protected]/cty/value_init.go:207 +0x47d
ariga.io/atlas/schema/schemaspec/schemahcl.blockVars(0xc0004b9b80, {0x0, 0x0}, 0xc000336c00)
	/Users/runner/go/pkg/mod/ariga.io/[email protected]/schema/schemaspec/schemahcl/context.go:87 +0x253
ariga.io/atlas/schema/schemaspec/schemahcl.evalCtx(0xc00000e180, 0xc000239bc0)
	/Users/runner/go/pkg/mod/ariga.io/[email protected]/schema/schemaspec/schemahcl/context.go:41 +0xc5
ariga.io/atlas/schema/schemaspec/schemahcl.decode(0x7440108, {0xc0003e2a80, 0x63b4, 0x6a80})
	/Users/runner/go/pkg/mod/ariga.io/[email protected]/schema/schemaspec/schemahcl/hcl.go:76 +0x145
ariga.io/atlas/schema/schemaspec/schemahcl.(*state).UnmarshalSpec(0xc000062900, {0xc0003e2a80, 0x63b4, 0x6a80}, {0x4ca9de0, 0xc000296140})
	/Users/runner/go/pkg/mod/ariga.io/[email protected]/schema/schemaspec/schemahcl/hcl.go:55 +0x5d
ariga.io/atlas/sql/postgres.UnmarshalSpec({0xc0003e2a80, 0x63b4, 0x6a80}, {0x656d3a0, 0xc0001b0de0}, {0x4e08660, 0xc0002960f0})
	/Users/runner/go/pkg/mod/ariga.io/[email protected]/sql/postgres/sqlspec.go:39 +0x9b
ariga.io/atlas/sql/postgres.glob..func1({0xc0003e2a80, 0xc0003dc000, 0x63b4}, {0x4e08660, 0xc0002960f0})
	/Users/runner/go/pkg/mod/ariga.io/[email protected]/sql/postgres/sqlspec.go:97 +0x3f
ariga.io/atlantis/ent/schema.validateSchema.func1({0x658e9d0, 0xc000336390}, 0xc000148630)
	/Users/runner/work/ariga/ariga/atlantis/ent/schema/atlasschema.go:98 +0x3d3
ariga.io/atlantis/ent/hook.AtlasSchemaFunc.Mutate(0x6570dc0, {0x658e9d0, 0xc000336390}, {0x65b52f0, 0xc000148630})
	/Users/runner/work/ariga/ariga/atlantis/ent/hook/hook.go:22 +0x49
ariga.io/atlantis/ent/hook.If.func1.1({0x658e9d0, 0xc000336390}, {0x65b52f0, 0xc000148630})
	/Users/runner/work/ariga/ariga/atlantis/ent/hook/hook.go:229 +0x89
entgo.io/ent.MutateFunc.Mutate(0xc000698240, {0x658e9d0, 0xc000336390}, {0x65b52f0, 0xc000148630})
	/Users/runner/go/pkg/mod/entgo.io/[email protected]/ent.go:347 +0x3d
ariga.io/atlantis/ent/runtime.init.0.func1.1({0x658e9d0, 0xc000336390}, {0x65b52f0, 0xc000148630})
	/Users/runner/work/ariga/ariga/atlantis/ent/runtime/runtime.go:33 +0xa2
entgo.io/ent.MutateFunc.Mutate(0x6596c58, {0x658e9d0, 0xc000336390}, {0x65b52f0, 0xc000148630})
	/Users/runner/go/pkg/mod/entgo.io/[email protected]/ent.go:347 +0x3d
ariga.io/atlantis/ent.historyMutator.AtlasSchemaHistoryMutateHook.func1({0x658e9d0, 0xc000336390}, {0x65b52f0, 0xc000148630})
	/Users/runner/work/ariga/ariga/atlantis/ent/enthistory.go:54 +0x16c
entgo.io/ent.MutateFunc.Mutate(0x6570dc0, {0x658e9d0, 0xc000336390}, {0x65b52f0, 0xc000148630})
	/Users/runner/go/pkg/mod/entgo.io/[email protected]/ent.go:347 +0x3d
ariga.io/atlantis/ent.(*AtlasSchemaCreate).Save(0xc0002960a0, {0x658e9d0, 0xc000336390})
	/Users/runner/work/ariga/ariga/atlantis/ent/atlasschema_create.go:97 +0x22c
ariga.io/atlantis/ent.(*AtlasSchemaCreate).SaveX(...)
	/Users/runner/work/ariga/ariga/atlantis/ent/atlasschema_create.go:106
ariga.io/atlantis/cmd/atlas/web.Inspect({0x658e928, 0xc0000c61c0}, 0xc000148580, {0x20b330808, 0x45}, {0x0, 0xc0000c61c0, 0xc0005f4d40})
	/Users/runner/work/ariga/ariga/atlantis/cmd/atlas/web/web.go:55 +0x405
main.initConfig.func1({0x658e928, 0xc0000c61c0}, 0xc0000c61c0)
	/Users/runner/work/ariga/ariga/atlantis/cmd/atlas/main.go:48 +0x46
main.run({0x4ea6bdf, 0xe}, 0x643ff10)
	/Users/runner/work/ariga/ariga/atlantis/cmd/atlas/main.go:77 +0x10c
main.registerWeb.func1(0x6d87460, {0x4e8eb10, 0x3, 0x3})
	/Users/runner/work/ariga/ariga/atlantis/cmd/atlas/main.go:58 +0x25
github.com/spf13/cobra.(*Command).execute(0x6d87460, {0xc0000be5a0, 0x3, 0x3})
	/Users/runner/go/pkg/mod/github.com/spf13/[email protected]/command.go:860 +0x5f8
github.com/spf13/cobra.(*Command).ExecuteC(0x6d86a60)
	/Users/runner/go/pkg/mod/github.com/spf13/[email protected]/command.go:974 +0x3bc
github.com/spf13/cobra.(*Command).Execute(...)
	/Users/runner/go/pkg/mod/github.com/spf13/[email protected]/command.go:902
main.main()
	/Users/runner/work/ariga/ariga/atlantis/cmd/atlas/main.go:35 +0x115

Version

  • atlas CLI version v0.3.3
  • MacOS 12.2

Best Regards,

[BUG] Multiple schemas that contain tables with the same name raise a decoding error

hey guys,

atlas version 0.3.4 (also appears on 0.3.2)

Given the following HCL, which defines 2 schemas, both of which have a table called "some_table":

table "some_table" {
  schema = schema.some_schema_1
  column "my_key" { type = varchar(255) }
  column "value" { type = longtext }
  primary_key {
    columns = [table.some_table.column.my_key]
  }
  index "my_key" {
    unique  = true
    columns = [table.some_table.column.my_key]
  }
}
table "some_table" {
  schema = schema.some_schema_2
  column "key" { type = varchar(128) }
  column "value" { type = longtext }
  primary_key {
    columns = [table.some_table.column.key]
  }
  index "key" {
    unique  = true
    columns = [table.some_table.column.key]
  }
}
schema "some_schema_1" {}
schema "some_schema_2" {}
atlas schema apply -d 'mysql://root:root@tcp(localhost:3306)/' --schema some_schema_1,some_schema_2 -f schemas.hcl
Error: schemahcl: failed decoding: :6,39-46: Unsupported attribute; This object does not have an attribute named "my_key".

My analysis is that the 2nd definition overrides the 1st, so "my_key" gets discarded.

If I switch the order of the table definitions (placing some_schema_2.some_table before some_schema_1.some_table) and run again:

atlas schema apply -d 'mysql://root:root@tcp(localhost:3306)/' --schema some_schema_1,some_schema_2 -f schemas.hcl
Error: schemahcl: failed decoding: :6,39-43: Unsupported attribute; This object does not have an attribute named "key".

i get that "key" is missing which supports my analysis

BTW, providing the same "key" column on both tables, like this:

table "some_table" {
  schema = schema.some_schema_2
  column "key" { type = varchar(128) }
  column "value" { type = longtext }
  primary_key {
    columns = [table.some_table.column.key]
  }
  index "key" {
    unique  = true
    columns = [table.some_table.column.key]
  }
}
table "some_table" {
  schema = schema.some_schema_1
  column "key" { type = varchar(255) }
  column "value" { type = longtext }
  primary_key {
    columns = [table.some_table.column.key]
  }
  index "key" {
    unique  = true
    columns = [table.some_table.column.key]
  }
}
schema "some_schema_1" {}
schema "some_schema_2" {}

works fine...

atlas schema apply -d 'mysql://root:root@tcp(localhost:3306)/' --schema some_schema_1,some_schema_2 -f schemas.hcl
-- Planned Changes:
-- Add new schema named "some_schema_1"
CREATE DATABASE `some_schema_1`
-- Add new schema named "some_schema_2"
CREATE DATABASE `some_schema_2`
-- Create "some_table" table
CREATE TABLE `some_schema_1`.`some_table` (`key` varchar(255) NOT NULL, `value` longtext NOT NULL, PRIMARY KEY (`key`), UNIQUE INDEX `key` (`key`))
-- Create "some_table" table
CREATE TABLE `some_schema_2`.`some_table` (`key` varchar(128) NOT NULL, `value` longtext NOT NULL, PRIMARY KEY (`key`), UNIQUE INDEX `key` (`key`))

notice, the column "key" definition can be different (eg varchar(255 vs 128)), atlas does not confuse when it creates the sql statements.
so the problem is probably due to overriding the reference definition, which in that case just doesn't conflict...

cheers.

is AWS Redshift supported?

Hi 👋🏼

I was just wondering if AWS Redshift is supposed to work with Atlas since if I run

psql "postgresql://USER:PASSWORD@HOST:PORT/DBNAME"

the connection is made

but doing

atlas schema inspect --url "postgres://USER:PASSWORD@HOST:PORT/DBNAME?sslmode=disable"

throws the following error: Error: postgres: unexpected number of rows: 0

Any thoughts? Thanks
