mydumper / mydumper

Official MyDumper Project

License: GNU General Public License v3.0


mydumper's Introduction


What is MyDumper?

MyDumper is a MySQL Logical Backup Tool. It consists of two tools:

  • mydumper, which exports a consistent backup of MySQL databases
  • myloader, which reads the backup from mydumper, connects to the destination database and imports the backup

Both tools are multithreaded.
MyDumper is Open Source and maintained by the community; it is not a Percona, MariaDB or MySQL product.

Why do we need MyDumper?

  • Parallelism (hence, speed) and performance (avoids expensive character set conversion routines; efficient code overall)
  • Easier-to-manage output (separate files for tables, dump metadata, etc.; easy to view/parse data)
  • Consistency - maintains snapshots across all threads, provides accurate master and slave log positions, etc.
  • Manageability - supports PCRE for specifying database and table inclusions and exclusions

How to install mydumper/myloader?

First, get the correct URL from the releases section, then:

RedHat / CentOS

release=$(curl -Ls -o /dev/null -w %{url_effective} https://github.com/mydumper/mydumper/releases/latest | cut -d'/' -f8)
# pick the package matching your OS major version:
yum install https://github.com/mydumper/mydumper/releases/download/${release}/mydumper-${release:1}.el7.x86_64.rpm  # RHEL/CentOS 7
yum install https://github.com/mydumper/mydumper/releases/download/${release}/mydumper-${release:1}.el8.x86_64.rpm  # RHEL/CentOS 8

Ubuntu / Debian

For Ubuntu, you first need to install the dependencies:

apt-get install libatomic1

Then you can download and install the package:

release=$(curl -Ls -o /dev/null -w %{url_effective} https://github.com/mydumper/mydumper/releases/latest | cut -d'/' -f8)
wget https://github.com/mydumper/mydumper/releases/download/${release}/mydumper_${release:1}.$(lsb_release -cs)_amd64.deb
dpkg -i mydumper_${release:1}.$(lsb_release -cs)_amd64.deb

FreeBSD

Using pkg:

pkg install mydumper

or from ports:

cd /usr/ports/databases/mydumper && make install

MacOS

Using Homebrew:

brew install mydumper

Note that the mydumper.cnf file will be located in /usr/local/etc or /opt/homebrew/etc, so you might need to run mydumper/myloader with:

mydumper --defaults-file=/opt/homebrew/etc/mydumper.cnf
myloader --defaults-file=/opt/homebrew/etc/mydumper.cnf

Dependencies for building MyDumper

Install development tools:

  • Ubuntu or Debian:
apt-get install cmake g++ git
  • Fedora, RedHat and CentOS:
yum install -y cmake gcc gcc-c++ git make
  • MacOS 10.13 (High Sierra) through 13 (Ventura) with MacPorts package manager:
sudo port install cmake pkgconfig

Install development versions of GLib, ZLib, PCRE and ZSTD:

  • Ubuntu or Debian:
apt-get install libglib2.0-dev zlib1g-dev libpcre3-dev libssl-dev libzstd-dev
  • Fedora, RedHat and CentOS:
yum install -y glib2-devel openssl-devel pcre-devel zlib-devel libzstd-devel
  • openSUSE:
zypper install glib2-devel libmysqlclient-devel pcre-devel zlib-devel
  • MacOS 10.13 (High Sierra) through 13 (Ventura) with MacPorts package manager:
sudo port install glib2 pcre

Install MySQL/Percona/MariaDB development versions:

You need to select one vendor development library.

  • Ubuntu or Debian (pick one):
apt-get install libmysqlclient-dev           # MySQL
apt-get install libperconaserverclient20-dev # Percona Server
apt-get install libmariadbclient-dev         # MariaDB
  • Fedora, RedHat and CentOS (pick one):
yum install -y mysql-devel              # MySQL
yum install -y Percona-Server-devel-57  # Percona Server
yum install -y mariadb-devel            # MariaDB

CentOS 7 ships with MariaDB 5.5 libraries by default, which are very old. It might be better to download a newer version of these libraries (MariaDB, MySQL, Percona, etc.).

  • openSUSE:
zypper install libmysqlclient-devel
  • MacOS 10.13 (High Sierra) through 13 (Ventura) with MacPorts package manager:
sudo port install mariadb-10.11
sudo port select mysql

How to Build

Run:

cmake .
make
sudo make install

Make sure that pkg-config, mysql_config and pcre-config are all in $PATH.

Binlog dump is disabled by default; to compile with it, add -DWITH_BINLOG=ON to the cmake options.

To build against MySQL libraries < 5.7, disable SSL by adding -DWITH_SSL=OFF.
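
For example, a build that enables binlog dump and disables SSL (combining the flags above) might look like:

cmake . -DWITH_BINLOG=ON -DWITH_SSL=OFF
make
sudo make install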

Build Docker image

You can download the official Docker image, or you can build the Docker image either from local sources or directly from GitHub sources with the provided Dockerfile.

docker build --build-arg CMAKE_ARGS='-DWITH_ZSTD=ON' -t mydumper github.com/mydumper/mydumper

Keep in mind that the Dockerfile's main purpose is development and building from source locally. It might not be optimal for distribution, but it also works as a quick build-and-run solution with the one-liner above.

How to use MyDumper

See Usage
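
A minimal sketch of a dump-and-restore round trip (hypothetical credentials and paths; see the Usage documentation for the full option list):

# dump all databases into a directory using 4 threads
mydumper --host=127.0.0.1 --user=root --password=p455w0rd --threads=4 --outputdir=/backups/dump
# restore that dump into the destination server
myloader --host=127.0.0.1 --user=root --password=p455w0rd --threads=4 --directory=/backups/dump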

How does consistent snapshot work?

This is all done following best MySQL practices and traditions:

  • As a precaution, slow-running queries on the server either abort the dump or get killed
  • A global read lock is acquired ("FLUSH TABLES WITH READ LOCK")
  • Various metadata is read ("SHOW SLAVE STATUS", "SHOW MASTER STATUS")
  • Other threads connect and establish snapshots ("START TRANSACTION WITH CONSISTENT SNAPSHOT")
    • On pre-4.1.8 servers, a dummy InnoDB table is created and read from instead
  • Once all worker threads announce the snapshot establishment, the master thread executes "UNLOCK TABLES" and starts queueing jobs

For now this does not provide consistent snapshots for non-transactional engines - support for that is expected in 0.2 :)
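
A rough sketch of the statement sequence (illustrative only, not mydumper's actual code; shown on a single connection, whereas the snapshot statements really run on separate worker connections):

mysql --host=127.0.0.1 --user=root <<'SQL'
FLUSH TABLES WITH READ LOCK;                 -- main thread: global read lock
SHOW MASTER STATUS;                          -- record binlog file/position
SHOW SLAVE STATUS;                           -- record replication coordinates
START TRANSACTION WITH CONSISTENT SNAPSHOT;  -- each worker thread runs this
UNLOCK TABLES;                               -- main thread, once all workers hold a snapshot
SQL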

How to exclude (or include) databases?

One can use the --regex functionality, for example, to not dump the mysql, sys and test databases:

 mydumper --regex '^(?!(mysql\.|sys\.|test\.))'

To dump only mysql and test databases:

 mydumper --regex '^(mysql\.|test\.)'

To not dump all databases starting with test:

 mydumper --regex '^(?!(test))'

To dump specific tables in different databases (note: table names should be anchored with a trailing $; see the related issue):

 mydumper --regex '^(db1\.table1$|db2\.table2$)'

If you want to dump a couple of databases but discard some tables, you can do:

 mydumper --regex '^(?=(?:(db1\.|db2\.)))(?!(?:(db1\.table1$|db2\.table2$)))'

This will dump all the tables in db1 and db2, but exclude db1.table1 and db2.table2.

Of course, regex functionality can be used to describe pretty much any list of tables.

How to use --exec?

You can execute external commands with --exec like this:

 mydumper --exec "/usr/bin/gzip FILENAME"

--exec is single-threaded, with an implementation similar to Stream. The exec program must be given as an absolute path. FILENAME will be replaced by the name of the file to be processed; it can appear anywhere in the argument list.

Defaults file

The defaults file (set with the --defaults-file parameter) is becoming more important in MyDumper.

  • mydumper and myloader sections:
[mydumper]
host = 127.0.0.1
user = root
password = p455w0rd
database = db
rows = 10000

[myloader]
host = 127.0.0.1
user = root
password = p455w0rd
database = new_db
innodb-optimize-keys = AFTER_IMPORT_PER_TABLE
  • Variables for mydumper and myloader executions:

Prior to v0.14.0-1:

[mydumper_variables]
wait_timeout = 300
sql_mode = ONLY_FULL_GROUP_BY,STRICT_TRANS_TABLES,ERROR_FOR_DIVISION_BY_ZERO,NO_ENGINE_SUBSTITUTION

[myloader_variables]
long_query_time = 300
innodb_flush_log_at_trx_commit = 0

From v0.14.0-1:

[mydumper_session_variables]
wait_timeout = 300
sql_mode = ONLY_FULL_GROUP_BY,STRICT_TRANS_TABLES,ERROR_FOR_DIVISION_BY_ZERO,NO_ENGINE_SUBSTITUTION

[mydumper_global_variables]
sync_binlog = 0
slow_query_log = OFF

[myloader_session_variables]
long_query_time = 300

[myloader_global_variables]
sync_binlog = 0
innodb_flush_log_at_trx_commit = 0
  • Per table sections:
[`db`.`table`]
where = column > 20
limit = 10000

[`myd_test`.`t`]
columns_on_select=qty,price+20
columns_on_insert=qty,price

IMPORTANT: when using options that don't take an argument, such as --no-data or --events, any value you assign to those variables in the defaults file always means TRUE/ON/ENABLE. It is a misconception to think that adding no-data=0 will export data:

[mydumper]
no-data=0

This will NOT export the data, because no-data is specified, regardless of its value.


mydumper's Issues

make error when compile with parameter -DWITH_BINLOG=ON

@maxbube
When I use cmake to compile with parameter -DWITH_BINLOG=ON then execute make , I got an error like this:
# cmake . -DWITH_BINLOG=ON
# make

Scanning dependencies of target mydumper
[ 20%] Building C object CMakeFiles/mydumper.dir/mydumper.c.o
/root/mydumper-master/mydumper.c: In function ‘binlog_thread’:
/root/mydumper-master/mydumper.c:1028:17: error: ‘thrconn’ undeclared (first use in this function)
 mysql_options(thrconn,MYSQL_READ_DEFAULT_FILE,defaults_file);
                 ^
/root/mydumper-master/mydumper.c:1028:17: note: each undeclared identifier is reported only once for each function it appears in
make[2]: *** [CMakeFiles/mydumper.dir/mydumper.c.o] Error 1
make[1]: *** [CMakeFiles/mydumper.dir/all] Error 2
make: *** [all] Error 2

rocksdb-support

I would like to use this package to back up MyRocks databases like this:

mydumper -c --regex '^(?!(mysql|#mysql50#.rocksdb))'

But I get the following error:

** (mydumper:16876): CRITICAL **: Error: DB: #mysql50#.rocksdb - Could not execute query: No database selected

gtid_current_pos empty.

Using a Galera cluster, I notice that my master metadata doesn't always contain a GTID, which is a problem. Doing some research I found this:

https://jira.mariadb.org/browse/MDEV-10279

Looking at this issue, wouldn't it be better to use gtid_binlog_pos for Galera clusters? Would that be needed only for Galera, or would it be best to use it on MariaDB 10+ and Galera? Or does some type of Galera detection need to be done to decide when to use gtid_binlog_pos?

mysql_query(conn,"SHOW MASTER STATUS");
	master=mysql_store_result(conn);
	if (master && (row=mysql_fetch_row(master))) {
		masterlog=row[0];
		masterpos=row[1];
		/* Oracle/Percona GTID */
		if(mysql_num_fields(master) == 5) {
			mastergtid=row[4];
		} else {
			/* Let's try with MariaDB 10.x */
			mysql_query(conn, "SELECT @@gtid_current_pos");
			mdb=mysql_store_result(conn);
			if (mdb && (row=mysql_fetch_row(mdb))) {
				mastergtid=row[0];
			}
		}
	}

Feature Request: Skip Databases greater than arbitrary size

Today, it came up that maybe we would like to have a strategy where databases of a certain size should be excluded from a mydumper backup.

Could you implement a feature that would allow a user to specify, for instance, --exclude-db-sized=100M (meaning: exclude any DBs larger than 100MB)?

This feature is somewhat ambiguous, but something like this could determine whether a database exceeds the size limit (assuming bash, and assuming dbsize is an integer in MB):

SELECT dbname FROM (
  SELECT table_schema dbname, ROUND(SUM(data_length + index_length) / 1024 / 1024, 1) dbsize
  FROM information_schema.tables
  GROUP BY table_schema
) AS derived_table WHERE dbsize <= ${dbsize};
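
A hedged sketch of how this could be scripted today (hypothetical; it builds an inclusion regex from the query above and passes it to --regex):

dbsize=100  # MB
dbs=$(mysql -N -B -e "SELECT table_schema FROM information_schema.tables \
  GROUP BY table_schema HAVING ROUND(SUM(data_length + index_length)/1024/1024, 1) <= ${dbsize}")
mydumper --regex "^($(echo ${dbs} | tr ' ' '|'))\."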

overwrite-tables option should drop and re-create table just before filling it.

I've used myloader on staging to restore and anonymize data from production, but myloader first drops and creates all tables, and only then fills them.

As a result, the devs are all waiting for the tables to be populated.

I would suggest doing the drop/create/insert in the same thread, so we can still access the "old" data while myloader is working.

Thanks.

myloader is crashing with invalid default value

/*!40101 SET NAMES binary*/;
/*!40014 SET FOREIGN_KEY_CHECKS=0*/;

CREATE TABLE `dede_member_space` (
  `mid` mediumint(8) unsigned NOT NULL DEFAULT '0',
  `pagesize` smallint(5) unsigned NOT NULL DEFAULT '10',
  `matt` smallint(6) NOT NULL DEFAULT '0',
  `spacename` varchar(50) NOT NULL DEFAULT '',
  `spacelogo` varchar(50) NOT NULL DEFAULT '',
  `spacestyle` varchar(20) NOT NULL DEFAULT '',
  `sign` varchar(100) NOT NULL DEFAULT '没签名',
  `spacenews` text,
  PRIMARY KEY (`mid`)
) ENGINE=InnoDB DEFAULT CHARSET=gbk;

** (myloader:9484): CRITICAL **: Error restoring xxx.dede_member_space from file xxx.dede_member_space-schema.sql.gz: Invalid default value for 'sign'

mydumper 0.9.3, built against MySQL 5.6.26
myloader 0.9.3, built against MySQL 5.7.19-17

trouble with global sql_big_selects=0

Hi,
I'm having trouble with the MySQL global variables sql_big_selects=0 and max_join_size. Could you add a parameter to customise the session variables to avoid this error?
Thanks.

CentOS Error

I got this error after installing from the rpm package:
mydumper: error while loading shared libraries: libmysqlclient.so.18: cannot open shared object file: No such file or directory

release signatures

Please provide GPG signatures for releases so that they can be verified after download.
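
Verification could then look like this (hypothetical artifact names):

gpg --verify mydumper-0.x.y.tar.gz.asc mydumper-0.x.y.tar.gz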

How to disable --long-query-guard ?

Hey,

I just don't want the dumper to KILL queries or to define any timeout for any query. I want it to complete the dump with all queries, no matter what.

Is there any way to disable --long-query-guard and complete all queries while dumping, regardless of the time they take?
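
There is no dedicated off switch documented here, but one workaround (also used in a report further down this page) is to set the guard to a very large value:

mydumper --long-query-guard 31536000  # one year, effectively disabling the guard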

Avoid 'Could not read data from db.table: Table definition has changed, please retry transaction' errors

MyDumper might get the following error in certain scenarios:

** Message: Thread 2 dumping data for `db`.`table`
** (mydumper:11031): CRITICAL **: Could not read data from db.table: Table definition has changed, please retry transaction

Tests

Let's run some tests to see in which situations we get this error.

Environment

The tests were done on the following environment:

session2> select @@global.tx_isolation;
+-----------------------+
| @@global.tx_isolation |
+-----------------------+
| REPEATABLE-READ       |
+-----------------------+
1 row in set (0.00 sec)

session2> select version();
+-----------------+
| version()       |
+-----------------+
| 5.6.31-77.0-log |
+-----------------+
1 row in set (0.00 sec)

1: table already exists before backup starts and we read from it:

session1> create table t (a int);
Query OK, 0 rows affected (0.01 sec)

session2> start transaction with consistent snapshot;
Query OK, 0 rows affected (0.00 sec)

session2> select * from t;
Empty set (0.00 sec)

session1> drop table t;
...

session2> show processlist;
+----+------+-----------+------+---------+------+---------------------------------+------------------+-----------+---------------+
| Id | User | Host      | db   | Command | Time | State                           | Info             | Rows_sent | Rows_examined |
+----+------+-----------+------+---------+------+---------------------------------+------------------+-----------+---------------+
| 23 | root | localhost | test | Query   |    0 | init                            | show processlist |         0 |             0 |
| 24 | root | localhost | test | Query   |    1 | Waiting for table metadata lock | drop table t     |         0 |             0 |
+----+------+-----------+------+---------+------+---------------------------------+------------------+-----------+---------------+
2 rows in set (0.00 sec)

The DROP TABLE will wait until this transaction has finished (metadata lock).

2: table already exists before backup starts and we did not yet read from it:

session1> create table t (a int);
Query OK, 0 rows affected (0.01 sec)

session2> start transaction with consistent snapshot;
Query OK, 0 rows affected (0.00 sec)

session1> drop table t;
Query OK, 0 rows affected (0.00 sec)

session2> select * from t;
ERROR 1146 (42S02): Table 'test.t' doesn't exist

The DROP TABLE just works; selecting from t afterwards behaves as if the table does not exist.

3: table didn't exist before backup and we wanted to read from it (backup it):

session2> start transaction with consistent snapshot;
Query OK, 0 rows affected (0.00 sec)

session1> create table t (a int);
Query OK, 0 rows affected (0.00 sec)

session2> select * from t;
ERROR 1412 (HY000): Table definition has changed, please retry transaction

This is what fails: the table didn't exist before the backup started.

4: table existed before backup started and we altered the table before trying to back it up

session1> create table t (a int);
Query OK, 0 rows affected (0.01 sec)

session2> start transaction with consistent snapshot;
Query OK, 0 rows affected (0.00 sec)

session1> alter table t change a a bigint;
Query OK, 0 rows affected (0.01 sec)
Records: 0  Duplicates: 0  Warnings: 0

session2> select * from t;
ERROR 1412 (HY000): Table definition has changed, please retry transaction

This also fails: the ALTER happened during the transaction, and since no metadata lock was held on the table yet, it succeeded.

5: table existed before backup started and we altered the table after opening it, before backing it up


session1> create table t (a int);
Query OK, 0 rows affected (0.01 sec)

session2> start transaction with consistent snapshot;
Query OK, 0 rows affected (0.00 sec)

session2> select * from t;
Empty set (0.00 sec)


session1> alter table t change a a bigint;
...

session2> show processlist;
+----+------+-----------+------+---------+------+---------------------------------+---------------------------------+-----------+---------------+
| Id | User | Host      | db   | Command | Time | State                           | Info                            | Rows_sent | Rows_examined |
+----+------+-----------+------+---------+------+---------------------------------+---------------------------------+-----------+---------------+
| 23 | root | localhost | test | Query   |    0 | init                            | show processlist                |         0 |             0 |
| 24 | root | localhost | test | Query   |    8 | Waiting for table metadata lock | alter table t change a a bigint |         0 |             0 |
+----+------+-----------+------+---------+------+---------------------------------+---------------------------------+-----------+---------------+
2 rows in set (0.00 sec)

When we access the table first, a metadata lock is taken and the ALTER will not succeed.

Summary

As we see in the different tests, the error happens in 2 occasions:

  • When the table did not exist before the transaction was started
  • When the table was altered/dropped before any transaction accessed it

Suggested Fix --metadata-locks

In order to prevent this, I suggest adding an option --metadata-locks and ensuring that the main mydumper transaction, which remains active for the duration of the backup (if there is such a thing in mydumper), does the following (see the sketch after this list):

  • gather a list of all tables and keep it as the method for deciding which tables to back up (so as not to back up tables created after the backup started)
  • open all tables as soon as possible, taking metadata locks (excluding the schemas/tables that don't match the filters)
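
A minimal sketch of the idea (illustrative only; hypothetical table names): simply touching each table inside the snapshot transaction acquires its metadata lock, which is then held until the transaction ends:

mysql --host=127.0.0.1 --user=root <<'SQL'
START TRANSACTION WITH CONSISTENT SNAPSHOT;
SELECT 1 FROM db1.table1 LIMIT 0;  -- opens the table, taking its metadata lock
SELECT 1 FROM db2.table2 LIMIT 0;  -- lock held for the rest of the transaction
SQL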

Benefits

This will prevent any ALTER/DROP TABLE on existing tables and allow mydumper to keep a consistent snapshot of the dataset.

This should prevent any 1412 errors, except that you might still get them while gathering the list of tables and opening them, in which case mydumper can just retry the whole process.

This is also better than doing FLUSH TABLES WITH READ LOCK, as it only takes metadata locks, so it blocks only schema changes, not reads/writes.

Additional Features

timeout metadatalocks

Perhaps, in addition, there should be functionality to abort the backup if it holds metadata locks for too long and starts causing too much impact on the environment.

kill schema changes

Opposite to the 'timeout metadatalocks' feature, having mydumper automatically kill schema changes that wait on metadata locks might also be desired, as this would reduce impact.

However, don't think too much of it: this breaks many things, especially when run on a slave, but it just might be desired in some cases.

Drawbacks

As a drawback, this blocks all schema changes during the backup process, and if schema changes are attempted, subsequent queries on those tables will be locked until the backup is complete.

This can be serious, and other options such as --use-savepoints exist to reduce the impact of metadata locking, but if you really want a consistent backup, it's what we need to do IMO.

Master and slave data stored in same metadata file, difficult to distinguish

The format of the master and slave status in the "metadata" file is very similar. This can cause confusion: I wrote tools to read the metadata for the slave status and reload it on a replicated host, and ran into trouble because of the similarity.

SHOW MASTER STATUS:
Log: mysql-bin.000001
Pos: 100
GTID:(null)

SHOW SLAVE STATUS:
Host: slavehost.localdomain
Log: mysql-bin.1000000
Pos: 2000000
GTID:(null)

This makes it a bit awkward to parse. Please switch the slave "Log:" and "Pos:" fields to match what MySQL actually reports, namely:

  • mysql -e 'show slave status\G' | grep Master_log
    Master_Log_File: mysql-bin.1000000
    Read_Master_Log_Pos: 2000000

--logfile option is not working with absolute path of the file.

Hey,

I am trying to store the logs under the directory where the data is being dumped, but I am not able to do so. It works if I use a plain filename and the file exists in the directory from which I am executing the script.

db_dumper.sh: line 24: --logfile=/opt/platform/newone/17-07-26-10-31/db_dumper.log: No such file or directory

Error in matching when using myloader -s

Max,
I just discovered a serious error in myloader. Suppose I want to reload a single schema using myloader:

myloader -oed /dumps/nightly -s schema1

This is fine as long as I only have schema0 through schema9. But if I also have a schema11 and schema13, myloader will also try to load all of the schema11 and schema13 data into schema1.

It seems fairly clear that what's happening here is myloader is looking for /dumps/schemaname* files when it should be looking for /dumpdir/schemaname.* files.
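
The globbing difference in a nutshell (hypothetical file names):

touch schema1.t-schema.sql schema11.t-schema.sql
ls schema1*    # matches files from both schema1 and schema11
ls schema1.*   # matches only schema1's files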

mydumper 0.9.3 fails to build with MariaDB 10.2

Due to https://jira.mariadb.org/browse/MDEV-13773 this compilation error is printed when building with MariaDB 10.2.9:

# make
Scanning dependencies of target myloader
[ 12%] Building C object CMakeFiles/myloader.dir/myloader.c.o
/tmp/1/mydumper-0.9.3/myloader.c: In function 'main':
/tmp/1/mydumper-0.9.3/myloader.c:117:61: error: 'MYSQL_SERVER_VERSION' undeclared (first use in this function)
   g_print("myloader %s, built against MySQL %s\n", VERSION, MYSQL_SERVER_VERSION);
                                                             ^
/tmp/1/mydumper-0.9.3/myloader.c:117:61: note: each undeclared identifier is reported only once for each function it appears in
CMakeFiles/myloader.dir/build.make:62: recipe for target 'CMakeFiles/myloader.dir/myloader.c.o' failed
make[2]: *** [CMakeFiles/myloader.dir/myloader.c.o] Error 1
CMakeFiles/Makefile2:99: recipe for target 'CMakeFiles/myloader.dir/all' failed
make[1]: *** [CMakeFiles/myloader.dir/all] Error 2
Makefile:129: recipe for target 'all' failed
make: *** [all] Error 2

This is because the header that defines MYSQL_SERVER_VERSION is not included. The following patch works with both MariaDB 10.1 and MariaDB 10.2:

--- a/mydumper.c	2017-09-07 14:53:02.000000000 +0000
+++ b/mydumper.c	2017-10-06 17:39:34.962315949 +0000
@@ -22,6 +22,9 @@
 #define _FILE_OFFSET_BITS 64
 
 #include <mysql.h>
+#if defined(MARIADB_BASE_VERSION) && defined(MARIADB_VERSION_ID)
+	#include <server/mysql_version.h>
+#endif
 #include <unistd.h>
 #include <stdio.h>
 #include <string.h>
--- a/myloader.c	2017-09-07 14:53:02.000000000 +0000
+++ b/myloader.c	2017-10-06 17:39:28.962315949 +0000
@@ -19,6 +19,9 @@
 #define _FILE_OFFSET_BITS 64
 
 #include <mysql.h>
+#if defined(MARIADB_BASE_VERSION) && defined(MARIADB_VERSION_ID)
+	#include <server/mysql_version.h>
+#endif
 #include <unistd.h>
 #include <stdio.h>
 #include <string.h>

mydumper continuous dump error

I launch the dump with:
user@comp:~/backup/mysql$ mydumper --daemon --snapshot-interval=1440 --logfile=/home/user/backup/mysql/mydumper.log -h localhost -u root -p 'password' -t 2 -v 3

I get this error
*** Error in `mydumper': double free or corruption (out): 0x00007f32a8002160 ***

System: Ubuntu 16.04.
mydumper from apt and from source give the same error.

Please help: how do I do a continuous dump correctly?

PS: a full backup works correctly.

UPD: after this command I get

echo $?
0

And export dir is not empty.
Last line in logfile:

2017-01-03 12:41:22 [INFO] - Finished dump at: 2017-01-03 12:41:22

So is it not an error?

Cannot create a JSON value from a string with CHARACTER SET 'binary'.

Here is how I create a JSON CHARSET error (Server version: 5.7.17-0ubuntu0.16.04.1)

Create the DB

CREATE DATABASE mydumper_love;
USE mydumper_love;
CREATE TABLE `love_this_project` (id int unsigned not null, json_data json not null);
INSERT INTO `love_this_project` VALUES (1,"{\"show\": false, \"value\": \"google store url\"}");

Dump the DB

mydumper <insert credentials here> -B mydumper_love -o ./mydumper_love

Next Load the DB

myloader <insert credentials here> -o -d mydumper_love/ mydumper_love

Error restoring mydumper_love.love_this_project from file mydumper_love.love_this_project.sql: Cannot create a JSON value from a string with CHARACTER SET 'binary'.

Am I doing something wrong, or why aren't my JSON columns working?

I did find this:
http://stackoverflow.com/questions/38078119/mysql-5-7-12-import-cannot-create-a-json-value-from-a-string-with-character-set

Error switching to database whilst restoring table

mydumper:
mydumper -u root -S /tmp/mysql.sock -p xxxx --regex '^(?!(mysql|information_schema|test))' -o /usr/local/backup/xxxx --less-locking

myloader:
myloader -d /usr/local/backup/xxxx -u root -p xxxx -S /tmp/mysql.sock.3310 -o

error:
** (myloader:7648): CRITICAL **: Error switching to database whilst restoring table xxxx

What causes this?

myloader handling or not handling errors?

@maxbube,

I've started using mydumper on a project with very large tables, and when restoring the tables that have many file chunks on disk (I use -r on mydumper), myloader fires the messages below:

** (myloader:14957): CRITICAL **: Error restoring sakila.rental from file sakila.rental.00001.sql: Lock wait timeout exceeded; try restarting transaction

But instead of cancelling the restore, I just let it complete, and I see no issues in the final results, as the table has the same number of rows as before (this is a test env):

MariaDB [(none)]> select count(*) from sakila.rental;
+----------+
| count(*) |
+----------+
|  4061508 |
+----------+
1 row in set (1.92 sec)

[root@mydumper01 mydumper]# myloader -d . -B sakila -t 10 -q 100 -o --verbose 3
** Message: 10 threads created
** Message: Dropping table or view (if exists) `sakila`.`rental`
** Message: Creating table `sakila`.`rental`
** Message: Thread 7 restoring `sakila`.`rental` part 0
** Message: Thread 2 restoring `sakila`.`rental` part 2
** Message: Thread 8 restoring `sakila`.`rental` part 3
** Message: Thread 9 restoring `sakila`.`rental` part 4
** Message: Thread 6 restoring `sakila`.`rental` part 1
** Message: Thread 10 shutting down
** Message: Thread 4 shutting down
** Message: Thread 5 shutting down
** Message: Thread 3 shutting down
** Message: Thread 1 shutting down

** (myloader:14957): CRITICAL **: Error restoring sakila.rental from file sakila.rental.00001.sql: Lock wait timeout exceeded; try restarting transaction
** Message: Thread 6 shutting down
** Message: Thread 9 shutting down
^@** Message: Thread 8 shutting down
** Message: Thread 2 shutting down

** (myloader:14957): CRITICAL **: Error restoring sakila.rental from file sakila.rental.sql: Duplicate entry '1221716' for key 'PRIMARY'
** Message: Thread 7 shutting down

MariaDB [(none)]> select count(*) from sakila.rental;
+----------+
| count(*) |
+----------+
|  4061508 |
+----------+
1 row in set (1.97 sec)

Could you tell me whether this is expected, and whether https://bugs.launchpad.net/mydumper/+bug/806698 is related to this? Thanks a lot.

Add complete-insert feature

Why is there no equivalent of mysqldump's --complete-insert feature? It is necessary when moving data between databases with different structures. I have seen a patch provided early in 2014, but it looks like it was never merged.

Option in file and not on CLI ?

Hi,

I've found this excellent tool to backup my database.

However, to make a dump, I have to specify the password on the command line.

Is there any configuration file that mydumper could read?

Regards,

How can I record myloader log?

@maxbube
When I use myloader to load the dumps, I add the job to a crontab, but I can't record the logs. Can you help me?
I recommend adding a parameter like mydumper's -L to record myloader's logs.

Docker?

Would a Docker installation be within the scope of this project? I'd be happy to contribute code, and being able to spin up a Docker container without having to do a native install may make the project more accessible to use. Thanks for your work on this project!

Feature Request: Optimal Index Creation

Percona has an interesting trick for improving SQL dump restores. It removes the secondary keys from the table schema, restores the backup, and then does an ALTER TABLE ADD KEY for each of the secondary keys.

This yields restore-performance and key-compactness improvements:

  • Tables load more quickly because they do not need to keep secondary keys up to date while the primary key data is being written
  • Secondary keys are created in a very optimal/compact way because all of the column data is available at the time of key block creation.

It would be nice if mydumper would create two schema files per table, one for the primary table schema, and one for the secondary keys.

Then myloader could process the table loading and apply the secondary key schema changes after all primary key rows are restored.
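
A sketch of the two-phase idea (hypothetical database, table and key names):

mysql test <<'SQL'
-- phase 1: create the table with only the primary key
CREATE TABLE t (id INT UNSIGNED NOT NULL PRIMARY KEY, a INT, b INT);
-- ... all rows are restored here ...
-- phase 2: add the secondary keys once the data is in place
ALTER TABLE t ADD KEY idx_a (a), ADD KEY idx_b (b);
SQL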

mydumper does not escape characters

mydumper creates an SQL file in which data containing special symbols is not escaped. The restore therefore fails with something similar to:

** (myloader:5758): CRITICAL **: Error restoring dbname.tablename from file dbname.tablename.sql: You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near 'g1200" where P_NAME = "zdbMYSQL_DTEST.user"","20120802111908787","20120802111' at line 2

mysqldump creates a proper dump of this table:
..."zdbMYSQL_DTEST.user"',...

Need to manually symlink the libraries

Hi,

For mydumper to compile I needed to manually symlink the libraries:
[root@dbtst1 mydumper-0.9.1]# history|grep ln
648 ln -s /usr/lib64/libpthread.so.0 /usr/lib64/libpthread.so
658 ln -s /usr/lib64/libm.so.6 /usr/lib64/libm.so
667 ln -s /usr/lib64/libdl.so.2 /usr/lib64/libdl.so

The other mysql libraries were detected.

I don't know if this is a mydumper bug or a MariaDB packaging bug, so I am reporting it in both places. I tried 0.9.1 and the master branch from GitHub.

I am running MariaDB 10.1.21 from the community edition repo:
[root@dbtst1 mydumper-0.9.1]# rpm -qa|grep -i mariadb
MariaDB-common-10.1.21-1.el7.centos.x86_64
MariaDB-shared-10.1.21-1.el7.centos.x86_64
MariaDB-devel-10.1.21-1.el7.centos.x86_64
MariaDB-compat-10.1.21-1.el7.centos.x86_64
MariaDB-client-10.1.21-1.el7.centos.x86_64

On this box, I reinstalled various versions of various libraries (rhel mariadb, mariadb 10.1.9, percona 5.6, percona 5.7) a couple of times.

I hope this can be fixed; as said, I don't know which project is to blame.

Thanks!

bug with table name

MariaDB [load_test]> show tables;
+---------------------+
| Tables_in_load_test |
+---------------------+
| /MAR/B              |
| MAR/B               |
| MARA                |
| MARC                |
| VBAK                |
| VBUP                |
| ZMM_PRICE_SENT      |
| article             |
| article2            |
| article3            |
| asset               |
| datamodel           |
| entity              |
| jsampall            |
| jsample             |
| offre               |
+---------------------+
16 rows in set (0.00 sec)

I know it's a bit tricky :), but MySQL allows this kind of table name.

** (mydumper:21690): CRITICAL **: Error: DB: load_test TABLE: /MAR/B Could not create output file export-20170310-085025/load_test./MAR/B.sql (2)

** (mydumper:21690): CRITICAL **: Error: DB: load_test TABLE: MAR/B Could not create output file export-20170310-085025/load_test.MAR/B.sql (2)

** (mydumper:21690): CRITICAL **: Error: DB: load_test Could not create output file export-20170310-085025/load_test./MAR/B-schema.sql (2)

** (mydumper:21690): CRITICAL **: Error: DB: load_test Could not create output file export-20170310-085025/load_test.MAR/B-schema.sql (2)

cmake err

$ cmake .
-- Using mysql-config: /data/mysql/base/bin/mysql_config
-- Found MySQL: /data/mysql/base/include, /data/mysql/base/lib/libmysqlclient.so;/usr/lib/x86_64-linux-gnu/libpthread.so;/usr/lib/x86_64-linux-gnu/libm.so;/usr/lib/x86_64-linux-gnu/librt.so;MYSQL_LIBRARIES_atomic-NOTFOUND;/usr/lib/x86_64-linux-gnu/libdl.so

CMake Warning at docs/CMakeLists.txt:9 (message):
Unable to find Sphinx documentation generator


-- MYSQL_CONFIG = /data/mysql/base/bin/mysql_config
-- CMAKE_INSTALL_PREFIX = /usr/local
-- BUILD_DOCS = ON
-- WITH_BINLOG = OFF
-- RUN_CPPCHECK = OFF
-- Change a value with: cmake -D<Variable>=<Value>


--
CMake Error: The following variables are used in this project, but they are set to NOTFOUND.
Please set them or make sure they are set and tested correctly in the CMake files:
MYSQL_LIBRARIES_atomic
linked by target "mydumper" in directory /data/mydumper
linked by target "myloader" in directory /data/mydumper

-- Configuring incomplete, errors occurred!
See also "/data/mydumper/CMakeFiles/CMakeOutput.log".

why??

compress and rsyncable

When using the --compress option, is it possible to create the gz files with gzip's --rsyncable option? When I run mysqldump, I always pipe it to gzip --rsyncable, like:

mysqldump ... | gzip --rsyncable > backup.sql.gz

Thanks for your help.
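
A possible workaround (assuming a mydumper version recent enough to have the --exec feature described earlier on this page):

mydumper --exec "/usr/bin/gzip --rsyncable FILENAME"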

Names with dot are incorrectly restored by myloader

Hi!

Just found a problem when trying to dump/restore a DB host with databases containing dot in their names.

Let's say I have a database called 'db1.project1', containing a table 'table'. mydumper will properly dump everything, resulting in a file named 'db1.project1.table-schema.sql'.

But when myloader tries to restore such a file, it creates a database named 'db1', not 'db1.project1'.

I think problem is here:
https://github.com/maxbube/mydumper/blob/master/myloader.c#L355

SSL connection Error

I am using Ubuntu 16.04 (clean install), running 0.9.1 from apt-get, and I also tried compiling 0.9.2 from here. I get this error:

** (mydumper:13381): CRITICAL **: Error connecting to database: SSL connection error: unknown error number

If I use the mysql or mysqldump client, I have no issues, and can use --ssl=1 or --ssl=0 without a problem. I don't see any other information to help me track this down.

Thank you.

segmentation error / segfault during usage

open("/usr/lib/x86_64-linux-gnu/charset.alias", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/lib/x86_64-linux-gnu/gconv/gconv-modules.cache", O_RDONLY) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=26258, ...}) = 0
mmap(NULL, 26258, PROT_READ, MAP_SHARED, 3, 0) = 0x7fc8973da000
close(3)                                = 0
futex(0x7fc8958d98f8, FUTEX_WAKE_PRIVATE, 2147483647) = 0
--- SIGSEGV {si_signo=SIGSEGV, si_code=SEGV_MAPERR, si_addr=0} ---
+++ killed by SIGSEGV +++
Segmentation fault
[1738571.693428] mydumper[13284]: segfault at 0 ip 00007f096f1ee81d sp 00007fff65cb6000 error 6 in libglib-2.0.so.0.4200.1[7f096f19b000+10c000]
$ mydumper --version
mydumper 0.9.1, built against MySQL 5.7.17
mydumper \
  --database foobar \
  --outputdir ./ \
  --tz-utc \
  --host 10.0.0.1 \
  --user root \
  --password foobar \
  --port 3306 \
  --threads 4 \
  --compress-protocol

I can't specify multiple database

I have db1, db2 and db3.
mydumper -o /tmp/backup/ -B db1 -B db2 -B db3
The -B option does not work with several databases. Can I dump several databases rather than all of them?
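
-B takes a single database, but the --regex functionality described earlier on this page can express a set of databases; a sketch:

mydumper -o /tmp/backup/ --regex '^(db1\.|db2\.|db3\.)'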

mydumper consumes 100% CPU if disk is full

Heyo - recently stumbled across mydumper and it looks great. Thanks for putting it out there.

Just ran into an issue:

I was running a mydumper job and the disk ran out of space. Instead of exiting or emitting error messages, the process hung around at 100% CPU forever (the /proc/*/stack file indicated it was waiting on futex_wait_queue_me; perhaps all the threads are spinning or something).

For posterity here's the command being run:

./mydumper/mydumper --version
mydumper 0.9.2, built against MySQL 5.5.54

./mydump/mydumper \
    --triggers --events --routines \
    --no-locks \
    --long-query-guard 31536000 \
    --compress  --compress-protocol \
    --host "$DB_HOST" \
    --user "$DB_USER" \
    --password "$DB_PASS" \
    --rows 500000 \
    --build-empty-files \
    --threads 32 \
    --outputdir "$OUTPUT_DIR" \
    --verbose 3 \
    > "$OUTPUT_DIR".output.log

This was compiled from source commit f79c3e2a using Amazon Linux AMI release 2014.09.

Not a huge issue but it did take some time to figure out that the disk was full and that was probably the cause.

Hope this helps.

Table list and regex excluding each other?

Hi,

When executing this command, no dump is done:
mydumper -c -B dbname -o /tmp/dbname -T table1,table2,table3 --regex "^dbname.table4"

By removing regex or tables list option, dump is done correctly.
Mydumper version is mydumper 0.9.1, built against MySQL 5.5.44-MariaDB

In mydumper help page, tables list option says:
-T, --tables-list Comma delimited table list to dump (does not exclude regex option)

So, my question is simple.
Can we use both options, including some tables via the list and others via the regex, or are the two options mutually exclusive?
Is it a bug?

Regards.

output the resulting SQL to stdout

If possible (maybe at the user-defined price of reduced parallelism?), this would be more "pipe-friendly", facilitating ssh and, more generally, remote use.

mydumper doesn't compile unmodified on Ubuntu 16.04

The default generated makefile doesn't compile on Ubuntu 16.04. It complains of:

root@mysql2:/usr/local/src/mydumper# make
[ 16%] Linking C executable mydumper
/usr/bin/ld: CMakeFiles/mydumper.dir/mydumper.c.o: undefined reference to symbol 'ceilf@@GLIBC_2.2.5'
//lib/x86_64-linux-gnu/libm.so.6: error adding symbols: DSO missing from command line
collect2: error: ld returned 1 exit status
CMakeFiles/mydumper.dir/build.make:151: recipe for target 'mydumper' failed
make[2]: *** [mydumper] Error 1
CMakeFiles/Makefile2:67: recipe for target 'CMakeFiles/mydumper.dir/all' failed
make[1]: *** [CMakeFiles/mydumper.dir/all] Error 2
Makefile:127: recipe for target 'all' failed
make: *** [all] Error 2

Modifying CMakeLists.txt and adding -lm to the target_link_libraries() lines works:

target_link_libraries(mydumper ${MYSQL_LIBRARIES} ${GLIB2_LIBRARIES} ${GTHREAD2_LIBRARIES} ${PCRE_PCRE_LIBRARY} ${ZLIB_LIBRARIES} -lm)

I'm not up to speed enough with cmake to do this change properly and submit a pull request, but hopefully this is enough for someone to do that legwork.

Thanks!

Double free or corruption

It seems that this command

mydumper -u user -p xxxxxxx -h localhost -B BD_NAME -G -e -E -R -D -v 3 --logfile=dump.log --use-savepoints

with mydumper version mydumper/xenial,now 0.9.1-1build1 amd64, generated the following error on Ubuntu 16.04 (kernel 4.4.0-64-generic):

*** Error in `mydumper': double free or corruption (out): 0x00007f6a4c002160 ***
======= Backtrace: =========
/lib/x86_64-linux-gnu/libc.so.6(+0x777e5)[0x7f6a52df17e5]
/lib/x86_64-linux-gnu/libc.so.6(+0x7fe0a)[0x7f6a52df9e0a]
/lib/x86_64-linux-gnu/libc.so.6(cfree+0x4c)[0x7f6a52dfd98c]
mydumper(exec_thread+0x45)[0x40a975]
/lib/x86_64-linux-gnu/libglib-2.0.so.0(+0x70bb5)[0x7f6a53625bb5]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x76ba)[0x7f6a53df06ba]
/lib/x86_64-linux-gnu/libc.so.6(clone+0x6d)[0x7f6a52e8082d]
======= Memory map: ========
00400000-00411000 r-xp 00000000 fd:01 60794                              /usr/bin/mydumper
00610000-00611000 r--p 00010000 fd:01 60794                              /usr/bin/mydumper
00611000-00612000 rw-p 00011000 fd:01 60794                              /usr/bin/mydumper
01b5d000-01b8f000 rw-p 00000000 00:00 0                                  [heap]
7f6a38000000-7f6a38160000 rw-p 00000000 00:00 0
7f6a38160000-7f6a3c000000 ---p 00000000 00:00 0
7f6a3c000000-7f6a3c10f000 rw-p 00000000 00:00 0
7f6a3c10f000-7f6a40000000 ---p 00000000 00:00 0
7f6a437ff000-7f6a43800000 ---p 00000000 00:00 0
7f6a43800000-7f6a44000000 rw-p 00000000 00:00 0
7f6a44000000-7f6a4410f000 rw-p 00000000 00:00 0
7f6a4410f000-7f6a48000000 ---p 00000000 00:00 0
7f6a48000000-7f6a484ad000 rw-p 00000000 00:00 0
7f6a484ad000-7f6a4c000000 ---p 00000000 00:00 0
7f6a4c000000-7f6a4c029000 rw-p 00000000 00:00 0
7f6a4c029000-7f6a50000000 ---p 00000000 00:00 0
7f6a503c8000-7f6a503c9000 ---p 00000000 00:00 0
7f6a503c9000-7f6a50bc9000 rw-p 00000000 00:00 0
7f6a50bc9000-7f6a50bca000 ---p 00000000 00:00 0
7f6a50bca000-7f6a513ca000 rw-p 00000000 00:00 0
7f6a513ca000-7f6a513cb000 ---p 00000000 00:00 0
7f6a513cb000-7f6a51bcb000 rw-p 00000000 00:00 0
7f6a51bcb000-7f6a51bd6000 r-xp 00000000 fd:01 23031                      /lib/x86_64-linux-gnu/libnss_files-2.23.so
7f6a51bd6000-7f6a51dd5000 ---p 0000b000 fd:01 23031                      /lib/x86_64-linux-gnu/libnss_files-2.23.so
7f6a51dd5000-7f6a51dd6000 r--p 0000a000 fd:01 23031                      /lib/x86_64-linux-gnu/libnss_files-2.23.so
7f6a51dd6000-7f6a51dd7000 rw-p 0000b000 fd:01 23031                      /lib/x86_64-linux-gnu/libnss_files-2.23.so
7f6a51dd7000-7f6a51ddd000 rw-p 00000000 00:00 0
7f6a51ddd000-7f6a51dde000 ---p 00000000 00:00 0
7f6a51dde000-7f6a525de000 rw-p 00000000 00:00 0
7f6a525de000-7f6a525f4000 r-xp 00000000 fd:01 2021                       /lib/x86_64-linux-gnu/libgcc_s.so.1
7f6a525f4000-7f6a527f3000 ---p 00016000 fd:01 2021                       /lib/x86_64-linux-gnu/libgcc_s.so.1
7f6a527f3000-7f6a527f4000 rw-p 00015000 fd:01 2021                       /lib/x86_64-linux-gnu/libgcc_s.so.1
7f6a527f4000-7f6a52966000 r-xp 00000000 fd:01 8030                       /usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.21
7f6a52966000-7f6a52b66000 ---p 00172000 fd:01 8030                       /usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.21
7f6a52b66000-7f6a52b70000 r--p 00172000 fd:01 8030                       /usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.21
7f6a52b70000-7f6a52b72000 rw-p 0017c000 fd:01 8030                       /usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.21
7f6a52b72000-7f6a52b76000 rw-p 00000000 00:00 0
7f6a52b76000-7f6a52b79000 r-xp 00000000 fd:01 23019                      /lib/x86_64-linux-gnu/libdl-2.23.so
7f6a52b79000-7f6a52d78000 ---p 00003000 fd:01 23019                      /lib/x86_64-linux-gnu/libdl-2.23.so
7f6a52d78000-7f6a52d79000 r--p 00002000 fd:01 23019                      /lib/x86_64-linux-gnu/libdl-2.23.so
7f6a52d79000-7f6a52d7a000 rw-p 00003000 fd:01 23019                      /lib/x86_64-linux-gnu/libdl-2.23.so
7f6a52d7a000-7f6a52f39000 r-xp 00000000 fd:01 23020                      /lib/x86_64-linux-gnu/libc-2.23.so
7f6a52f39000-7f6a53139000 ---p 001bf000 fd:01 23020                      /lib/x86_64-linux-gnu/libc-2.23.so
7f6a53139000-7f6a5313d000 r--p 001bf000 fd:01 23020                      /lib/x86_64-linux-gnu/libc-2.23.so
7f6a5313d000-7f6a5313f000 rw-p 001c3000 fd:01 23020                      /lib/x86_64-linux-gnu/libc-2.23.so
7f6a5313f000-7f6a53143000 rw-p 00000000 00:00 0
7f6a53143000-7f6a531b1000 r-xp 00000000 fd:01 2116                       /lib/x86_64-linux-gnu/libpcre.so.3.13.2
7f6a531b1000-7f6a533b1000 ---p 0006e000 fd:01 2116                       /lib/x86_64-linux-gnu/libpcre.so.3.13.2
7f6a533b1000-7f6a533b2000 r--p 0006e000 fd:01 2116                       /lib/x86_64-linux-gnu/libpcre.so.3.13.2
7f6a533b2000-7f6a533b3000 rw-p 0006f000 fd:01 2116                       /lib/x86_64-linux-gnu/libpcre.so.3.13.2
7f6a533b3000-7f6a533b4000 r-xp 00000000 fd:01 4500                       /usr/lib/x86_64-linux-gnu/libgthread-2.0.so.0.4800.2
7f6a533b4000-7f6a535b3000 ---p 00001000 fd:01 4500                       /usr/lib/x86_64-linux-gnu/libgthread-2.0.so.0.4800.2
7f6a535b3000-7f6a535b4000 r--p 00000000 fd:01 4500                       /usr/lib/x86_64-linux-gnu/libgthread-2.0.so.0.4800.2
7f6a535b4000-7f6a535b5000 rw-p 00001000 fd:01 4500                       /usr/lib/x86_64-linux-gnu/libgthread-2.0.so.0.4800.2
7f6a535b5000-7f6a536c4000 r-xp 00000000 fd:01 4496                       /lib/x86_64-linux-gnu/libglib-2.0.so.0.4800.2
7f6a536c4000-7f6a538c3000 ---p 0010f000 fd:01 4496                       /lib/x86_64-linux-gnu/libglib-2.0.so.0.4800.2
7f6a538c3000-7f6a538c4000 r--p 0010e000 fd:01 4496                       /lib/x86_64-linux-gnu/libglib-2.0.so.0.4800.2
7f6a538c4000-7f6a538c5000 rw-p 0010f000 fd:01 4496                       /lib/x86_64-linux-gnu/libglib-2.0.so.0.4800.2
7f6a538c5000-7f6a538c6000 rw-p 00000000 00:00 0
7f6a538c6000-7f6a539ce000 r-xp 00000000 fd:01 23025                      /lib/x86_64-linux-gnu/libm-2.23.so
7f6a539ce000-7f6a53bcd000 ---p 00108000 fd:01 23025                      /lib/x86_64-linux-gnu/libm-2.23.so
7f6a53bcd000-7f6a53bce000 r--p 00107000 fd:01 23025                      /lib/x86_64-linux-gnu/libm-2.23.so
7f6a53bce000-7f6a53bcf000 rw-p 00108000 fd:01 23025                      /lib/x86_64-linux-gnu/libm-2.23.so
7f6a53bcf000-7f6a53be8000 r-xp 00000000 fd:01 2142                       /lib/x86_64-linux-gnu/libz.so.1.2.8
7f6a53be8000-7f6a53de7000 ---p 00019000 fd:01 2142                       /lib/x86_64-linux-gnu/libz.so.1.2.8
7f6a53de7000-7f6a53de8000 r--p 00018000 fd:01 2142                       /lib/x86_64-linux-gnu/libz.so.1.2.8
7f6a53de8000-7f6a53de9000 rw-p 00019000 fd:01 2142                       /lib/x86_64-linux-gnu/libz.so.1.2.8
7f6a53de9000-7f6a53e01000 r-xp 00000000 fd:01 23026                      /lib/x86_64-linux-gnu/libpthread-2.23.so
7f6a53e01000-7f6a54000000 ---p 00018000 fd:01 23026                      /lib/x86_64-linux-gnu/libpthread-2.23.so
7f6a54000000-7f6a54001000 r--p 00017000 fd:01 23026                      /lib/x86_64-linux-gnu/libpthread-2.23.so
7f6a54001000-7f6a54002000 rw-p 00018000 fd:01 23026                      /lib/x86_64-linux-gnu/libpthread-2.23.so
7f6a54002000-7f6a54006000 rw-p 00000000 00:00 0
7f6a54006000-7f6a54399000 r-xp 00000000 fd:01 28838                      /usr/lib/x86_64-linux-gnu/libmysqlclient.so.20.3.4
7f6a54399000-7f6a54598000 ---p 00393000 fd:01 28838                      /usr/lib/x86_64-linux-gnu/libmysqlclient.so.20.3.4
7f6a54598000-7f6a5459e000 r--p 00392000 fd:01 28838                      /usr/lib/x86_64-linux-gnu/libmysqlclient.so.20.3.4
7f6a5459e000-7f6a54611000 rw-p 00398000 fd:01 28838                      /usr/lib/x86_64-linux-gnu/libmysqlclient.so.20.3.4
7f6a54611000-7f6a54616000 rw-p 00000000 00:00 0
7f6a54616000-7f6a5463c000 r-xp 00000000 fd:01 23009                      /lib/x86_64-linux-gnu/ld-2.23.so
7f6a54829000-7f6a54831000 rw-p 00000000 00:00 0
7f6a54831000-7f6a54832000 rw-p 00000000 00:00 0
7f6a54832000-7f6a54839000 r--s 00000000 fd:01 22702                      /usr/lib/x86_64-linux-gnu/gconv/gconv-modules.cache
7f6a54839000-7f6a5483b000 rw-p 00000000 00:00 0
7f6a5483b000-7f6a5483c000 r--p 00025000 fd:01 23009                      /lib/x86_64-linux-gnu/ld-2.23.so
7f6a5483c000-7f6a5483d000 rw-p 00026000 fd:01 23009                      /lib/x86_64-linux-gnu/ld-2.23.so
7f6a5483d000-7f6a5483e000 rw-p 00000000 00:00 0
7ffcfc43d000-7ffcfc45e000 rw-p 00000000 00:00 0                          [stack]
7ffcfc4db000-7ffcfc4dd000 r--p 00000000 00:00 0                          [vvar]
7ffcfc4dd000-7ffcfc4df000 r-xp 00000000 00:00 0                          [vdso]
ffffffffff600000-ffffffffff601000 r-xp 00000000 00:00 0                  [vsyscall]   

dependencies issues - unable to make with Percona-server-5.7

Dear Colleagues,

I'm trying to build/install with Percona-server-5.7:

Preconditions:
CentOS Linux release 7.4.1708 (Core)
Percona-server-5.7

STR:
yum install glib2-devel mysql-devel zlib-devel pcre-devel openssl-devel
then build.

Issue:
The system tries to install Percona-Server-devel-56-5.6.37-rel82.2.el7.x86_64:
Transaction check error:
file /usr/bin/mysql_config from install of Percona-Server-devel-56-5.6.37-rel82.2.el7.x86_64 conflicts with file from package Percona-Server-client-57-5.7.19-17.1.el7.x86_64

Question/Issue:
How do I install/build mydumper with Percona Server 5.7?

I've also tried the RPM from https://twindb.com/mydumper-rpm-for-centosrhel/ - same result.

Thank you for any help you can provide

Error compiling with WITH_BINLOG=ON

mydumper.c: In function ‘binlog_thread’:
mydumper.c:1028:17: error: ‘thrconn’ undeclared (first use in this function)
 mysql_options(thrconn,MYSQL_READ_DEFAULT_FILE,defaults_file);
                 ^
mydumper.c:1028:17: note: each undeclared identifier is reported only once for each function it appears in

sql views are dumped as mysql tables

I am not sure if this is a bug, or if it can be avoided by passing a CLI parameter.
I also recall the same behavior from mysqldump.

For each view, a fake CREATE TABLE statement is generated in the schema file.
This prevents the actual view definition from being executed.

Thank you for your assistance.

-- edit --
Please remove/ignore; this is the same behavior as mysqldump and shouldn't cause issues.

add option to order rows by PK

It would be useful to have mydumper implement an option similar to mysqldump's. Per the mysqldump manual:

--order-by-primary

Dump each table's rows sorted by its primary key, or by its first unique index, if such an index exists. This is useful when dumping a MyISAM table to be loaded into an InnoDB table, but makes the dump operation take considerably longer.

Error while dumping from remote host (RDS instance)

Hey,

I am trying to take a dump of a DB inside an RDS instance with mydumper, but I am getting the error below. It's not picking up the hostname but using the instance's private IP.

Command:
mydumper --database=root --host=my-rds-host --user=root --port=3306 --password=PASS --outputdir=/home/ubuntu/mydump --rows=50000 --threads=6 --compress --build-empty-files --compress-protocol

Error:
** (mydumper:20329): CRITICAL **: Error connecting to database: Access denied for user 'root'@'172.31.24.205' (using password: YES)

Don't disclose password

Can we have an option for --ask-password rather than passing the password in with the command, or better yet can --login-path be integrated?

mysql_config_editor set --login-path=instance_13001 --host=localhost --user=root --port=3306 --password
Enter password: <Password is prompted to be inserted in a more secure way>

With the mysql_config_editor example we could then call mydumper like this

mydumper --login-path instance_13001 --database mydb -T mytable --threads 40 --rows 50000

That would be useful when running the process as a daemon or through a cronjob

If using the --ask-password option it would resemble this:

mydumper --user myuser --ask-password --database mydb -T mytable --threads 40 --rows 50000
Enter password: <Password is prompted to be inserted in a more secure way>

This would be more useful in a manual instance.

Then, if --ask-password or --login-path is used, the password won't show up in the shell history or in the running process list.
