shayonj / pg-osc
Easy CLI tool for making zero downtime schema changes and backfills in PostgreSQL
License: MIT License
One idea I had for pg-osc would be to do some quick observations about the table (and related tables) before actually starting the process. We could warn the user about any potential issues with large tables, constraints, etc. Or, we could keep this functionality as its own process that runs with a preview command instead of perform. Just thoughts. Here's what the output could look like:
pg-online-schema-change preview {all the same alter/connection options as perform}
Table to alter: table_name_from_alter_statement
Estimated number of records in table: 345,000,000
Size of table on disk: 30GB
Estimated time to write new table: 8 hours (based on a test run of 1% of your records)
Estimated time to validate constraints: 6 hours (based on ?)
Constraints to validate:
- Referential foreign key: column_name -> foreign_key_table (1.2B records) (WARNING: this could take a significant amount of time due to the large number of records)
- Referential foreign key: another_column_name -> another_table (200 records)
- Self foreign key: some_table has foreign key to table_name_from_alter_statement.id (20M records)
- Self foreign key: some_table2 has foreign key to table_name_from_alter_statement.id (2M records)
- Self foreign key: some_table3 has foreign key to table_name_from_alter_statement.id (2B records) (WARNING: this could take a significant amount of time due to the large number of records)
Suggestions:
- Given the large number of records in some related tables you may want to use the `--skip-foreign-key-validation` flag and then validate constraints separately to speed up the process.
- ???
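A minimal sketch (not a pg-osc feature today) of the kind of catalog queries such a preview could build its estimates from; the table name is the illustrative one from the mock output above:
-- Estimated rows and on-disk size for the table being altered:
SELECT reltuples::bigint AS estimated_rows,
       pg_size_pretty(pg_total_relation_size('table_name_from_alter_statement')) AS size_on_disk
FROM pg_class
WHERE relname = 'table_name_from_alter_statement';
-- Foreign keys referencing the table (candidates for the warnings above):
SELECT conrelid::regclass AS referencing_table, conname
FROM pg_constraint
WHERE contype = 'f'
  AND confrelid = 'table_name_from_alter_statement'::regclass;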
Hi pg-osc team,
we successfully used your tool to rebuild a bloated table on production without any downtime! It shrank from around 400GB down to 100GB.
During the testing phase we had to apply some custom adaptations to the source code which we would like to contribute back :)
This one handles a bug where an INSERT into the primary table fails during execution when the table has a long name (e.g. this_is_a_table_with_a_very_long_name).
drop table if exists "this_is_a_table_with_a_very_long_name";
CREATE TABLE IF NOT EXISTS "this_is_a_table_with_a_very_long_name" (
id int PRIMARY KEY,
"createdOn" TIMESTAMP NOT NULL
);
insert into "this_is_a_table_with_a_very_long_name"("id", "createdOn") values(1, '2012-01-01')
bundle exec bin/pg-online-schema-change perform --host localhost --dbname postgres --username celonis --alter-statement 'alter table this_is_a_table_with_a_very_long_name alter column id TYPE int'
==> Result: The INSERT into the primary table fails
Read the real sequence name using SELECT pg_get_serial_sequence(:table, :column).
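For reference, pg_get_serial_sequence resolves the sequence a column actually owns, instead of guessing the name from the (possibly truncated at 63 bytes) table name. Illustrative usage; it returns NULL when the column owns no sequence:
SELECT pg_get_serial_sequence('this_is_a_table_with_a_very_long_name', 'id');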
Idea: pgosc should support the ability to reverse the change (with no data loss) after the alter statements and swap have happened. pgosc should make sure that the data is being replayed in both directions (tables) before and after the swap. So in case of any issues, you can always go back to the original table.
Requires re-architecting some core constructs. Most things should be reusable.
A separate command/invocation point can be used to go back to the previous state. I am thinking -
pg-online-schema-change perform -a "ALTER..." --drop false ....
pg-online-schema-change reverse -t "books"
This involves re-transferring the FKs and running analyze (?).
Hi @shayonj ,
I found that there is an issue with performing the pg-osc command more than once when we didn't use --drop. When I ran the command without --drop, it worked fine and created an old table. But when I tried to run the command again on the same table, it raised an error saying the table already exists:
ERROR: relation "pgosc_op_table_employees" already exists (PG::DuplicateTable)
Solution:
I guess that when renaming the old table, you gave it the name format "pgosc_op_table_{tablename}", which causes this error. To resolve it, you could change the format to "pgosc_op_table_{tablename}_{index}", where index/id is a unique value.
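For illustration, the collision and one possible fix, reusing the random-suffix style pg-osc already uses for its audit/shadow tables (e.g. pgosc_at_events2_f6ffc9 further down this page):
-- The second run fails because the rename target from the first run still exists:
ALTER TABLE employees RENAME TO pgosc_op_table_employees;
-- ERROR: relation "pgosc_op_table_employees" already exists
-- A unique suffix per run avoids the collision:
ALTER TABLE employees RENAME TO pgosc_op_table_employees_f6ffc9;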
Hi,
Is pg-osc only available for ALTER TABLE statements, or does it work for other ALTER statements like ALTER INDEX and so on? When I tried to run the following alter statement with pg-osc, I got the error mentioned below.
pg-online-schema-change perform --alter-statement 'ALTER INDEX IF EXISTS unq_trace_id RENAME TO modified_name;' --dbname "testdb" --host "localhost" --username "postgres" --wait-time-for-lock 5 --kill-backends --drop
Error:
/var/lib/gems/3.0.0/gems/pg_online_schema_change-0.9.4/lib/pg_online_schema_change/orchestrate.rb:56:in `run!': Parent table has no primary key, exiting... (PgOnlineSchemaChange::Error)
When pgosc is copying, replaying, etc., no other process/transaction should be able to perform DDL on the primary table.
This can be achieved by holding an ACCESS SHARE lock on the primary table, except during the swap. Probably from a separate connection (?)
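A sketch of what that guard could look like from the separate connection (table name illustrative):
BEGIN;
LOCK TABLE books IN ACCESS SHARE MODE;
-- DDL like ALTER TABLE / DROP TABLE needs ACCESS EXCLUSIVE, which conflicts
-- with this lock, while normal reads and writes proceed unblocked.
-- Hold the transaction open while pgosc copies/replays; release before the swap:
COMMIT;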
I ran into an issue recently while testing this library on a rather large (250GB) table on Heroku. I've used this tool in a few other cases but this is the largest table I've tried yet. Here's the message I got from Heroku while running the alter statement:
Your database is currently unable to keep up with writes.
Specifically, Postgres is unable to archive Postgres write-ahead logs (WAL) fast enough to keep up with write volume. WAL archiving is critical to maintaining continuous protection of data.
If the WAL drive fills completely, the database will shut down and will be at risk of data loss. To prevent this, Heroku is temporarily throttling your database connections to allow the backlog of WAL to be archived.
Heroku strongly recommends pausing or reducing any bulk data loading activity that is running.
Heroku will remove the connection throttling once WAL archiving is able to keep up with database writes.
Read more about this here: https://devcenter.heroku.com/articles/postgres-write-ahead-log-usage
So, it was writing too fast and eventually the connection was killed. My assumption is that it might be able to succeed if I slow down the batches a bit by adding an optional delay/sleep argument for each batch. I don't really want the job to take even longer but it might be the only way in this situation. What are your thoughts on adding this? I'm happy to do the work, I just wanted to check to see if there were other potential options for this case before I opened a PR.
Thanks.
pgosc disables autovacuum for the new table it creates (probably because we want faster inserts), so we need to re-enable autovacuum manually for the fresh new table.
Also, it gives odd names to the sequence and primary key, which we also need to rename back/correct. In my case, there was a function that depended on that table's sequence, so it stopped working :)
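The manual re-enable is a one-liner (table name illustrative):
ALTER TABLE my_table SET (autovacuum_enabled = true);
-- or remove the storage parameter entirely:
ALTER TABLE my_table RESET (autovacuum_enabled);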
It appears that views that reference the table being re-written are updated at some point to use the temp pgosc_op_table... table, but after the swap they continue to reference that temp table, preventing it from being dropped cleanly (you'll get errors saying cannot drop table pgosc_op_table_... because other objects depend on it). This appears to be because Postgres stores the view as a parsed query tree that keeps a reference to the object identifier instead of just using the name. I found a StackOverflow post that mentions this. It says that using CREATE OR REPLACE VIEW can fix it, but we'd have to store the original SQL for the view before the swap and then call CREATE OR REPLACE VIEW immediately after the swap to reset it to the original table name.
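A sketch of that approach with a hypothetical view named recent_books:
-- Before the swap, capture the view's current SQL:
SELECT pg_get_viewdef('recent_books'::regclass);
-- Immediately after the swap, re-issue it; CREATE OR REPLACE VIEW re-resolves
-- the table name to the new table's OID instead of keeping the stale reference:
CREATE OR REPLACE VIEW recent_books AS
SELECT id, title FROM books;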
After we have initiated the swap (when remaining rows are below 20), we should do one last replay of all rows from audit to shadow before the rename, to account for any rows missed in the time it takes to execute the swap. This is a race condition that can happen on tables with an extremely high volume of writes. Tweaking DELTA_COUNT is also another option (which can be exposed via a flag).
We want to make it work with tables that have triggers :)
If we allow users the ability to modify the SQL used to populate the new table in copy_data!, then we enable backfilling of new or existing columns. This is an advanced feature which needs a stern warning that the wrong query will eat your data.
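For example, a legitimate custom copy query could backfill a new column while copying (a sketch; table and column names are illustrative, and %{shadow_table} follows the placeholder style shown in the custom-SQL example further down this page):
INSERT INTO %{shadow_table} (id, email, new_email)
SELECT id, email, lower(email)
FROM books;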
The naming can get weird, or look weird, when pgosc has run multiple times on a table. It will keep prepending the prefix key and eventually run into name length limit issues.
This also applies to sequence names.
Can happen as part of cleanup.
We should be able to do the following to have a high signal integration spec
pgbench --initialize -s 10
CREATE TABLE pgbench_accounts_validate AS SELECT * FROM pgbench_accounts ;
ALTER TABLE pgbench_accounts_validate ADD PRIMARY KEY (aid);
pgbench --file spec/fixtures/bench.sql -T 600 -c 2, playing the following:
\set aid random(1, 100000 * :scale)
\set bid random(1, 1 * :scale)
\set tid random(1, 10 * :scale)
\set delta random(-5000, 5000)
BEGIN;
UPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid = :aid;
UPDATE pgbench_accounts_validate SET abalance = abalance + :delta WHERE aid = :aid;
SELECT abalance FROM pgbench_accounts WHERE aid = :aid;
UPDATE pgbench_tellers SET tbalance = tbalance + :delta WHERE tid = :tid;
UPDATE pgbench_branches SET bbalance = bbalance + :delta WHERE bid = :bid;
INSERT INTO pgbench_history (tid, bid, aid, delta, mtime) VALUES (:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);
END;
Then assert that the pgbench_accounts table matches pgbench_accounts_validate in content. Can use rspec to set up and assert with the following:
(TABLE pgbench_accounts EXCEPT TABLE pgbench_accounts_validate)
UNION ALL
(TABLE pgbench_accounts_validate EXCEPT TABLE pgbench_accounts);
h/t to @jfrost
We should modify the README to say not to use it with partitioned tables :) at least for now / unless we address these issues :).
I inserted some data into my partitioned table and then ran pgosc on that partitioned table to check how it works. It actually created another, normal table and added the column there, but didn't copy the data.
-- Here are the tests:
-- my partitioned table with some data:
CREATE TABLE shiwangini.events2 (
device_id bigint,
event_id bigserial,
event_time timestamptz default now(),
data jsonb not null,
PRIMARY KEY (device_id, event_id)
) PARTITION BY HASH (device_id);
create table shiwangini.events2001 partition of shiwangini.events2 FOR VALUES WITH (modulus 4, remainder 1);
create table shiwangini.events2002 partition of shiwangini.events2 FOR VALUES WITH (modulus 4, remainder 2);
create table shiwangini.events2003 partition of shiwangini.events2 FOR VALUES WITH (modulus 4, remainder 3);
create table shiwangini.events2004 partition of shiwangini.events2 FOR VALUES WITH (modulus 4, remainder 0);
INSERT INTO shiwangini.events2 (device_id, data)
SELECT s % 100, ('{"measurement":'||random()||'}')::jsonb FROM generate_series(1,100000) s;
-- I can see the data now:
table shiwangini.events2 ;
Now, I ran the below pgosc command to add a column. The statement executed successfully on the console:
pg-online-schema-change perform --alter-statement 'alter table events2 add column "name" varchar ;' --schema "shiwangini" --dbname "dev" --host "xx.xx.xx.xx" --username "dev" --pull-batch-count 1000 --delta-count 20 --wait-time-for-lock 5
{"name":"pg-online-schema-change","hostname":"dev-host","pid":1714106,"level":40,"time":"2022-03-19T17:28:55.211+00:00","v":0,"msg":"DEPRECATED: -w is deprecated. Please pass PGPASSWORD environment variable instead.","version":"0.7.1"}
{"name":"pg-online-schema-change","hostname":"dev-host","pid":1714106,"level":30,"time":"2022-03-19T17:28:55.240+00:00","v":0,"msg":"Setting up audit table","audit_table":"pgosc_at_events2_f6ffc9","version":"0.7.1"}
{"name":"pg-online-schema-change","hostname":"dev-host","pid":1714106,"level":30,"time":"2022-03-19T17:28:55.251+00:00","v":0,"msg":"Setting up triggers","version":"0.7.1"}
WARNING: there is already a transaction in progress
WARNING: there is already a transaction in progress
WARNING: there is no transaction in progress
{"name":"pg-online-schema-change","hostname":"dev-host","pid":1714106,"level":30,"time":"2022-03-19T17:28:55.258+00:00","v":0,"msg":"Setting up shadow table","shadow_table":"pgosc_st_events2_f6ffc9","version":"0.7.1"}
WARNING: there is already a transaction in progress
WARNING: there is already a transaction in progress
WARNING: there is already a transaction in progress
WARNING: there is already a transaction in progress
{"name":"pg-online-schema-change","hostname":"dev-host","pid":1714106,"level":30,"time":"2022-03-19T17:28:55.276+00:00","v":0,"msg":"Running alter statement on shadow table","shadow_table":"pgosc_st_events2_f6ffc9","parent_table":"events2","version":"0.7.1"}
WARNING: there is already a transaction in progress
{"name":"pg-online-schema-change","hostname":"dev-host","pid":1714106,"level":30,"time":"2022-03-19T17:28:55.279+00:00","v":0,"msg":"Clearing contents of audit table before copy..","shadow_table":"pgosc_st_events2_f6ffc9","parent_table":"events2","version":"0.7.1"}
WARNING: there is already a transaction in progress
{"name":"pg-online-schema-change","hostname":"dev-host","pid":1714106,"level":30,"time":"2022-03-19T17:28:55.281+00:00","v":0,"msg":"Copying contents..","shadow_table":"pgosc_st_events2_f6ffc9","parent_table":"events2","version":"0.7.1"}
WARNING: there is already a transaction in progress
WARNING: there is already a transaction in progress
WARNING: there is already a transaction in progress
WARNING: there is no transaction in progress
{"name":"pg-online-schema-change","hostname":"dev-host","pid":1714106,"level":30,"time":"2022-03-19T17:28:55.290+00:00","v":0,"msg":"Performing ANALYZE!","version":"0.7.1"}
INFO: analyzing "shiwangini.events2" inheritance tree
INFO: "events2001": scanned 350 of 350 pages, containing 30000 live rows and 0 dead rows; 9013 rows in sample, 30000 estimated total rows
INFO: "events2002": scanned 291 of 291 pages, containing 25000 live rows and 0 dead rows; 7494 rows in sample, 25000 estimated total rows
INFO: "events2003": scanned 198 of 198 pages, containing 17000 live rows and 0 dead rows; 5099 rows in sample, 17000 estimated total rows
INFO: "events2004": scanned 326 of 326 pages, containing 28000 live rows and 0 dead rows; 8394 rows in sample, 28000 estimated total rows
INFO: analyzing "shiwangini.events2001"
INFO: "events2001": scanned 350 of 350 pages, containing 30000 live rows and 0 dead rows; 30000 rows in sample, 30000 estimated total rows
INFO: analyzing "shiwangini.events2002"
INFO: "events2002": scanned 291 of 291 pages, containing 25000 live rows and 0 dead rows; 25000 rows in sample, 25000 estimated total rows
INFO: analyzing "shiwangini.events2003"
INFO: "events2003": scanned 198 of 198 pages, containing 17000 live rows and 0 dead rows; 17000 rows in sample, 17000 estimated total rows
INFO: analyzing "shiwangini.events2004"
INFO: "events2004": scanned 326 of 326 pages, containing 28000 live rows and 0 dead rows; 28000 rows in sample, 28000 estimated total rows
{"name":"pg-online-schema-change","hostname":"dev-host","pid":1714106,"level":30,"time":"2022-03-19T17:28:55.872+00:00","v":0,"msg":"Remaining rows below delta count, proceeding towards swap","version":"0.7.1"}
{"name":"pg-online-schema-change","hostname":"dev-host","pid":1714106,"level":30,"time":"2022-03-19T17:28:55.872+00:00","v":0,"msg":"Performing swap!","version":"0.7.1"}
WARNING: there is already a transaction in progress
WARNING: there is already a transaction in progress
{"name":"pg-online-schema-change","hostname":"dev-host","pid":1714106,"level":30,"time":"2022-03-19T17:28:55.874+00:00","v":0,"msg":"Replaying rows, count: 0","version":"0.7.1"}
WARNING: there is already a transaction in progress
WARNING: there is already a transaction in progress
WARNING: there is already a transaction in progress
WARNING: there is no transaction in progress
{"name":"pg-online-schema-change","hostname":"dev-host","pid":1714106,"level":30,"time":"2022-03-19T17:28:55.878+00:00","v":0,"msg":"Performing ANALYZE!","version":"0.7.1"}
INFO: analyzing "shiwangini.events2"
INFO: "events2": scanned 0 of 0 pages, containing 0 live rows and 0 dead rows; 0 rows in sample, 0 estimated total rows
{"name":"pg-online-schema-change","hostname":"dev-host","pid":1714106,"level":30,"time":"2022-03-19T17:28:55.879+00:00","v":0,"msg":"Validating constraints!","version":"0.7.1"}
{"name":"pg-online-schema-change","hostname":"dev-host","pid":1714106,"level":30,"time":"2022-03-19T17:28:55.887+00:00","v":0,"msg":"All tasks successfully completed","version":"0.7.1"}
But now when I run table shiwangini.events2; I don't see any data. However, when I manually check the schema of the events2 table, I see the name column has been added (but I don't see any data there).
-- shiwangini.events2 definition
-- Drop table
-- DROP TABLE shiwangini.events2;
CREATE TABLE shiwangini.events2 (
device_id int8 NOT NULL,
event_id int8 NOT NULL DEFAULT nextval('shiwangini.pgosc_st_events2_f6ffc9_event_id_seq'::regclass),
event_time timestamptz NULL DEFAULT now(),
"data" jsonb NOT NULL,
"name" varchar NULL,
CONSTRAINT pgosc_st_events2_f6ffc9_pkey PRIMARY KEY (device_id, event_id)
)
WITH (
autovacuum_enabled=false
);
However, I see another table (probably the old table, named pgosc_op_table_events2) which has the old data, old partitions, and exactly the old schema. If I specify --drop at the end, the old table with all its data gets dropped :D
In the Installation section:
Or install it yourself as:
$ gem install pg_online_schema_change
Some dependency installation commands need to be added; not everyone is a Ruby expert. The dependencies:
gem install ougai
gem install thor
gem install pg
gem install pg_query
Tracking
Once tasks are performed, drop the old table and clean up. Dropping the old table can be controlled via a new flag. The --dry-run flag is not used currently either. For cleanup, we should handle:
- statement_timeout
- client_min_messages
Hi,
I would be interested in using pg-osc, but without any alter statement. My use case would be to get rid of a bloated table by creating a new table and filling it with (a delta of) the original table.
E.g. like this:
pg-online-schema-change perform \
--dbname "postgres" \
--host "localhost" \
--username "jamesbond" \
--alter-statement 'ALTER TABLE books ADD COLUMN IF NOT EXISTS new_email varchar'
Would adding a "fake" alter statement like this be a workaround, or do you have any other ideas?
I had the idea this morning to do a validation test by using a custom pgbench script that applies the same update to two different tables. The setup looks like this:
pgbench -i -s 500 pgbench
CREATE TABLE pgbench_accounts_validate AS SELECT * FROM pgbench_accounts ;
ALTER TABLE pgbench_accounts_validate ADD PRIMARY KEY (aid);
Validate that the tables match before the test:
(TABLE pgbench_accounts EXCEPT TABLE pgbench_accounts_validate)
UNION ALL
(TABLE pgbench_accounts_validate EXCEPT TABLE pgbench_accounts)
;
┌─────┬─────┬──────────┬────────┐
│ aid │ bid │ abalance │ filler │
├─────┼─────┼──────────┼────────┤
└─────┴─────┴──────────┴────────┘
(0 rows)
Put this:
\set aid random(1, 100000 * :scale)
\set bid random(1, 1 * :scale)
\set tid random(1, 10 * :scale)
\set delta random(-5000, 5000)
BEGIN;
UPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid = :aid;
UPDATE pgbench_accounts_validate SET abalance = abalance + :delta WHERE aid = :aid;
SELECT abalance FROM pgbench_accounts WHERE aid = :aid;
UPDATE pgbench_tellers SET tbalance = tbalance + :delta WHERE tid = :tid;
UPDATE pgbench_branches SET bbalance = bbalance + :delta WHERE bid = :bid;
INSERT INTO pgbench_history (tid, bid, aid, delta, mtime) VALUES (:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);
END;
Into ~/git/pg-online-schema-change/pgbench-validate.sql
Kick off a pgbench like so:
pgbench --file ~/git/pg-online-schema-change/pgbench-validate.sql -T 180 pgbench
Validate that the tables are still the same:
pgbench=# (TABLE pgbench_accounts EXCEPT TABLE pgbench_accounts_validate) UNION ALL (TABLE pgbench_accounts_validate EXCEPT TABLE pgbench_accounts) ;
┌─────┬─────┬──────────┬────────┐
│ aid │ bid │ abalance │ filler │
├─────┼─────┼──────────┼────────┤
└─────┴─────┴──────────┴────────┘
(0 rows)
Now, kick off the same pgbench as above, but also do a pg-osc run at the same time:
pgbench --file ~/git/pg-online-schema-change/pgbench-validate.sql -T 180 pgbench
and
bundle exec bin/pg-online-schema-change perform --alter-statement 'ALTER TABLE pgbench_accounts ALTER COLUMN aid TYPE BIGINT;' -d pgbench --host localhost --username pgosc --password '' --drop
And compare again after it is all done. This time the EXCEPT query returns:
(23314 rows)
Here's a sample row in both tables:
pgbench=# select * from pgbench_accounts where aid = 99996;
┌───────┬─────┬──────────┬────────┐
│  aid  │ bid │ abalance │ filler │
├───────┼─────┼──────────┼────────┤
│ 99996 │   1 │     8620 │        │
└───────┴─────┴──────────┴────────┘
(1 row)
Time: 1.589 ms
pgbench=# select * from pgbench_accounts_validate where aid = 99996;
┌───────┬─────┬──────────┬────────┐
│  aid  │ bid │ abalance │ filler │
├───────┼─────┼──────────┼────────┤
│ 99996 │   1 │    10721 │        │
└───────┴─────┴──────────┴────────┘
(1 row)
The main issue I've run into when using pg-osc has been at the very end of the process. If I have foreign keys on the table I'm modifying, it tends to lock up during the constraint validation phase. I think this happens because a ShareLock is acquired when the constraint is a foreign key. I'd love to ensure my foreign keys remain valid after the swap, but I can't afford to lock up the table while doing it. Is there a way around this? Maybe a flag to turn off constraint validation?
For tables that have a very high read/write volume, replaying 1k rows at once may not be enough for the replay to catch up to perform the swap. Making it configurable via a flag/option parameter would be nice.
We can add an additional safety measure against the custom SQL destroying all or most of someone's table. Simply compare the result of SELECT reltuples FROM pg_class WHERE relname = <old table> against the same query for the new table, after the ANALYZE has been run but before the tables are swapped. Since this is an estimate, we should probably use a comparison that looks something like >= 0.95 * old_tuples. We should also add a flag like --copy_percentage that lets you set a lower threshold for that comparison, for use cases where the user is purposely deleting much of the table data.
This would help guard against a user not understanding the documentation and using something unfortunate like:
-- file: /src/query.sql
INSERT INTO %{shadow_table}(foo, bar, baz, rental_id, tenant_id)
SELECT 1,1,1,1,1
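The check itself could be as simple as this sketch (relation names illustrative, using the pgosc_st_ shadow-table naming style):
SELECT (SELECT reltuples FROM pg_class WHERE relname = 'pgosc_st_books_abc123')
       >= 0.95 * (SELECT reltuples FROM pg_class WHERE relname = 'books')
       AS copy_looks_complete;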
We set up the shadow table with the FKs (if any), so it is structurally the same as the primary table from the beginning.
During INSERT INTO (copying data from the primary table to the shadow), this can become a blocking operation as the new data is validated against the FKs. Instead, we can trust the FKs on the primary table and add the FKs back on the shadow table just prior to the swap, so that post swap the tables are structurally the same. We can accordingly extend validate_constraint to validate constraints on the primary and referential tables.
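One way to do the add-back step without a long blocking scan (an assumption about the approach, not confirmed pg-osc behavior; names illustrative) is the standard NOT VALID / VALIDATE CONSTRAINT two-step:
-- NOT VALID skips checking existing rows at ADD time:
ALTER TABLE pgosc_st_books_abc123
  ADD CONSTRAINT books_author_id_fkey
  FOREIGN KEY (author_id) REFERENCES authors (id) NOT VALID;
-- Validation scans the table, but with a weaker lock than a plain ADD CONSTRAINT:
ALTER TABLE pgosc_st_books_abc123
  VALIDATE CONSTRAINT books_author_id_fkey;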
We use operation_type on the audit table to understand whether the operation is an INSERT/UPDATE/DELETE. The column name is generic enough to conflict with columns on the primary table. We should make it more obscure and leave a note in the README. This would be safer.
If a SIGTERM comes in (when running in cloud/container/etc.) or a SIGINT (Ctrl+C), we should clean up and then exit. We can start with
During the swap, we will need to be aware of any other long queries happening against the primary table. In which case, should we set a lock_timeout and cancel other queries? Is there a better experience?
- What if pgosc couldn't acquire a lock for the swap (even after --wait-timeout)? Perhaps continue replaying during this time.
- pgosc should do the cleanup (cf. --no-kill-backend and --wait-timeout).
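If we do go the lock_timeout route, bounding the wait before attempting the swap is straightforward (a sketch):
-- Fail the lock attempt quickly instead of queueing behind long-running queries:
SET lock_timeout = '5s';
-- ...attempt the rename/swap; on lock_timeout, keep replaying and retry.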
I have a CSV file with millions of rows which I need to add to a table, either by truncate-and-load or a direct load.
Can I do it using pg-osc? If yes, how?
The cleanup hangs if you do a Ctrl+C / SIGINT during the replay operation. I suspect it is IO related, but it needs more debugging...
Hi,
I have a live project where I want to use pg-osc to alter tables that have billions of rows of data. How can I do it?
My only idea is to call the pg-osc command with alter statements manually every time.
Other than this, is there any way to do the same task?
There is a potential edge case right now when reading from the audit table. Since the read uses the primary_key of the primary table, entries may be out of order if two updates happen for the same row.
It is better to have a dedicated id field (PK) on the audit table; that way, when reading, we can have ordered entries without using a timestamp field.
This field name needs to be non-conflicting (with the primary table) as well. Similar to: #47
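A sketch of that audit-table shape (names illustrative):
CREATE TABLE pgosc_at_books_abc123 (
  pgosc_audit_id bigserial PRIMARY KEY, -- dedicated, monotonically increasing replay key
  operation_type text,                  -- INSERT / UPDATE / DELETE
  id int                                -- the primary table's PK value
);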
First, thank you for creating this project. It's great to have something that can automate table re-writing with all the best practices to avoid downtime.
When I used this a few months back, everything worked great besides the primary key's sequence value. The value was left at 1, which meant I immediately started seeing duplicate key issues. When I figured out what was happening, all I needed to do was manually set the value of the sequence to the max id + 1 of the primary key, and things started working again.
Is this a known issue? I've seen other issues around sequence naming but I haven't seen others mention the actual sequence value. I feel like I'm missing something.
Would there be any potential issues with executing something like this pseudocode at the end of the swap! method?
SELECT SETVAL('#{client.pk_sequence}', (SELECT MAX(#{client.pk}) FROM #{client.table})+1);
Another idea I had was to simply allow a configurable --post-swap-statement option that would run custom SQL to do whatever lingering cleanup was needed for the user's specific case.
It appears that auto-vacuum is turned off via the disable_vacuum! method but it's never turned back on. Is this intentional?
Tracking
Rolling out a semver-based release process using CI and git-based tagging. Artifacts:
Will need to look at what's out there to support Docker image releases.
I'm currently attempting to use pg-osc on a two billion row, 482 GB (223 GB table / 259 GB index) table. Needless to say, it's taking a long time. I'm actually starting to doubt whether it's working, as the output is just this so far after twenty hours:
{"name":"pg-online-schema-change","hostname":"Bryces-MacBook-Pro-M2.local","pid":19905,"level":40,"time":"2023-07-27T17:34:49.736-04:00","v":0,"msg":"DEPRECATED: -w is deprecated. Please pass PGPASSWORD environment variable instead.","version":"0.9.2"}
WARNING: there is already a transaction in progress
{"name":"pg-online-schema-change","hostname":"Bryces-MacBook-Pro-M2.local","pid":19905,"level":30,"time":"2023-07-27T17:34:51.126-04:00","v":0,"msg":"Setting up audit table","audit_table":"pgosc_at_mytable_c16c36","version":"0.9.2"}
{"name":"pg-online-schema-change","hostname":"Bryces-MacBook-Pro-M2.local","pid":19905,"level":30,"time":"2023-07-27T17:34:51.382-04:00","v":0,"msg":"Setting up triggers","version":"0.9.2"}
WARNING: there is already a transaction in progress
WARNING: there is already a transaction in progress
WARNING: there is no transaction in progress
{"name":"pg-online-schema-change","hostname":"Bryces-MacBook-Pro-M2.local","pid":19905,"level":30,"time":"2023-07-27T17:34:51.714-04:00","v":0,"msg":"Setting up shadow table","shadow_table":"pgosc_st_mytable_c16c36","version":"0.9.2"}
WARNING: there is already a transaction in progress
WARNING: there is already a transaction in progress
{"name":"pg-online-schema-change","hostname":"Bryces-MacBook-Pro-M2.local","pid":19905,"level":30,"time":"2023-07-27T17:34:51.937-04:00","v":0,"msg":"Running alter statement on shadow table","shadow_table":"pgosc_st_mytable_c16c36","parent_table":"mytable","version":"0.9.2"}
WARNING: there is already a transaction in progress
{"name":"pg-online-schema-change","hostname":"Bryces-MacBook-Pro-M2.local","pid":19905,"level":30,"time":"2023-07-27T17:34:52.051-04:00","v":0,"msg":"Clearing contents of audit table before copy..","shadow_table":"pgosc_st_mytable_c16c36","parent_table":"mytable","version":"0.9.2"}
WARNING: there is already a transaction in progress
{"name":"pg-online-schema-change","hostname":"Bryces-MacBook-Pro-M2.local","pid":19905,"level":30,"time":"2023-07-27T17:34:52.148-04:00","v":0,"msg":"Copying contents..","shadow_table":"pgosc_st_mytable_c16c36","parent_table":"mytable","version":"0.9.2"}
WARNING: there is already a transaction in progress
WARNING: there is already a transaction in progress
Is there a way to show any sort of progress indicator while the tool is running? I realize this would be challenging as the copy_data! command is just running a query inside a larger transaction. I'm not sure if it's even possible to get visibility into this, but it sure would be nice!
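In the meantime, one rough way to watch progress from another session (an assumption, not a pg-osc feature) is to poll the shadow table's on-disk size, using the shadow table name from the log output above:
SELECT pg_size_pretty(pg_total_relation_size('pgosc_st_mytable_c16c36'));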
This is probably related to #112 and #117. When the script tries to get a view definition, it fails because the view is not in the public schema.
As you can see, it tries to select pg_get_viewdef(format('%I.%I', 'public', dependent_view.relname)::regclass) as view_definition, but not all of my views are in the public schema. It should probably be something like:
SELECT DISTINCT dependent_view.relname as view_name, pg_get_viewdef(format('%I.%I', view_ns.nspname, dependent_view.relname)::regclass) as view_definition
FROM pg_depend
JOIN pg_rewrite ON pg_depend.objid = pg_rewrite.oid
JOIN pg_class as dependent_view ON pg_rewrite.ev_class = dependent_view.oid
JOIN pg_class as source_table ON pg_depend.refobjid = source_table.oid
JOIN pg_namespace source_ns ON source_ns.oid = source_table.relnamespace
JOIN pg_namespace view_ns ON view_ns.oid = dependent_view.relnamespace
where
source_ns.nspname = 'public'
AND source_table.relname = 'pgosc_op_table_users';
Tracking
After FKs are refreshed (dropped and added), validate the constraint from a new transaction. This means it will also happen post swap/rename, since we have a limited window during the swap. Post swap, the constraint validation will fail hard.