facebook / mysql-5.6

Facebook's branch of the Oracle MySQL database. This includes MyRocks.

Home Page: http://myrocks.io

License: Other

Shell 0.60% CMake 0.99% PHP 0.10% C 15.44% C++ 77.19% Perl 0.59% Objective-C 0.77% Pascal 0.15% Python 0.06% Makefile 1.12% Java 1.76% HTML 0.50% Awk 0.01% JavaScript 0.22% CSS 0.01% Assembly 0.01% Batchfile 0.01% Yacc 0.33% Lex 0.01% M4 0.13%

mysql-5.6's Introduction

Copyright (c) 2000, 2023, Oracle and/or its affiliates.

This is a release of MySQL, an SQL database server.

License information can be found in the LICENSE file.

In test packages where this file is renamed README-test, the license
file is renamed LICENSE-test.

This distribution may include materials developed by third parties.
For license and attribution notices for these materials,
please refer to the LICENSE file.

For further information on MySQL or additional documentation, visit
  http://dev.mysql.com/doc/

For additional downloads and the source of MySQL, visit
  http://dev.mysql.com/downloads/

MySQL is brought to you by the MySQL team at Oracle.

mysql-5.6's People

Contributors

alfranio, arnabray21, bjornmu, bkandasa, blaudden, dahlerlend, frazerclement, gkodinov, glebshchepa, gurusami, harinvadodaria, jdduncan, jhauglid, kahatlen, kdjakevin, lkotula, lkshminarayanan, ltangvald, marcalff, nacarvalho, nryeng, phulakun, roylyseng, thayumanavar77, thirunarayanan, trosten, vaintroub, vasild, weigon, zmur

mysql-5.6's Issues

Fix test cases which fail with RocksDB SE

Issue by maykov
Wednesday Feb 25, 2015 at 03:38 GMT
Originally opened as MySQLOnRocksDB#39


Here is the list of tests which fail:
(tests prefixed with # are fixed)

funcs_1.is_columns_mysql : This test fails on RocksDB SE
funcs_1.is_tables_mysql : This test fails on RocksDB SE
innodb.innodb_bug59641 : This test fails on RocksDB SE
innodb.innodb-index-online-fk : This test fails on RocksDB SE
innodb.innodb-system-table-view : This test fails on RocksDB SE
innodb.innodb-tablespace : This test fails on RocksDB SE
innodb_stress.innodb_stress_blob_zipdebug_zlib : This test fails on RocksDB SE
innodb_stress.innodb_stress_mix : This test fails on RocksDB SE
innodb_zip.innodb_16k : This test fails on RocksDB SE
main.bootstrap : This test fails on RocksDB SE
main.connect : This test fails on RocksDB SE
main.hll : This test fails on RocksDB SE
main.innodb_report_age_of_evicted_pages : This test fails on RocksDB SE

main.innodb_snapshot_nobinlog : This test fails on RocksDB SE

main.innodb_snapshot_noinnodb : This test fails on RocksDB SE

main.myisam-blob : This test fails on RocksDB SE

main.mysqlbinlog_gtid : This test fails on RocksDB SE

main.mysqlcheck : This test fails on RocksDB SE

main.mysqld--help-notwin-profiling : This test fails on RocksDB SE
main.mysqld--help-notwin : This test fails on RocksDB SE

main.mysql_embedded : This test fails on RocksDB SE

main.openssl_1 : This test fails on RocksDB SE
main.plugin_auth_qa_1 : This test fails on RocksDB SE
main.plugin_auth_sha256_server_default_tls : This test fails on RocksDB SE
main.plugin_auth_sha256_tls : This test fails on RocksDB SE
main.rocksdb : This test fails on RocksDB SE
main.ssl_8k_key : This test fails on RocksDB SE
main.ssl_cipher : This test fails on RocksDB SE
main.ssl_compress : This test fails on RocksDB SE
main.ssl_connections_count : This test fails on RocksDB SE
main.ssl_connect : This test fails on RocksDB SE
main.ssl : This test fails on RocksDB SE
main.temp_table_cleanup : This test fails on RocksDB SE

main.warnings : This test fails on RocksDB SE

perfschema.aggregate : This test fails on RocksDB SE
perfschema.hostcache_ipv4_auth_plugin : This test fails on RocksDB SE
perfschema.hostcache_ipv6_auth_plugin : This test fails on RocksDB SE
perfschema.no_threads : This test fails on RocksDB SE
perfschema.pfs_upgrade_event : This test fails on RocksDB SE
perfschema.pfs_upgrade_func : This test fails on RocksDB SE
perfschema.pfs_upgrade_proc : This test fails on RocksDB SE
perfschema.pfs_upgrade_table : This test fails on RocksDB SE
perfschema.pfs_upgrade_view : This test fails on RocksDB SE
perfschema.start_server_disable_idle : This test fails on RocksDB SE
perfschema.start_server_disable_stages : This test fails on RocksDB SE
perfschema.start_server_disable_statements : This test fails on RocksDB SE
perfschema.start_server_disable_waits : This test fails on RocksDB SE
perfschema.start_server_innodb : This test fails on RocksDB SE
perfschema.start_server_no_account : This test fails on RocksDB SE
perfschema.start_server_no_cond_class : This test fails on RocksDB SE
perfschema.start_server_no_cond_inst : This test fails on RocksDB SE
perfschema.start_server_no_file_class : This test fails on RocksDB SE
perfschema.start_server_no_file_inst : This test fails on RocksDB SE
perfschema.start_server_no_host : This test fails on RocksDB SE
perfschema.start_server_no_mutex_class : This test fails on RocksDB SE
perfschema.start_server_no_mutex_inst : This test fails on RocksDB SE
perfschema.start_server_no_rwlock_class : This test fails on RocksDB SE
perfschema.start_server_no_rwlock_inst : This test fails on RocksDB SE
perfschema.start_server_no_setup_actors : This test fails on RocksDB SE
perfschema.start_server_no_setup_objects : This test fails on RocksDB SE
perfschema.start_server_no_socket_class : This test fails on RocksDB SE
perfschema.start_server_no_socket_inst : This test fails on RocksDB SE
perfschema.start_server_no_stage_class : This test fails on RocksDB SE
perfschema.start_server_no_stages_history_long : This test fails on RocksDB SE
perfschema.start_server_no_stages_history : This test fails on RocksDB SE
perfschema.start_server_no_statement_class : This test fails on RocksDB SE
perfschema.start_server_no_statements_history_long : This test fails on RocksDB SE
perfschema.start_server_no_statements_history : This test fails on RocksDB SE
perfschema.start_server_no_table_hdl : This test fails on RocksDB SE
perfschema.start_server_no_table_inst : This test fails on RocksDB SE
perfschema.start_server_nothing : This test fails on RocksDB SE
perfschema.start_server_no_thread_class : This test fails on RocksDB SE
perfschema.start_server_no_thread_inst : This test fails on RocksDB SE
perfschema.start_server_no_user : This test fails on RocksDB SE
perfschema.start_server_no_waits_history_long : This test fails on RocksDB SE
perfschema.start_server_no_waits_history : This test fails on RocksDB SE
perfschema.start_server_off : This test fails on RocksDB SE
perfschema.start_server_on : This test fails on RocksDB SE
rpl.rpl_alter_repository : This test fails on RocksDB SE
rpl.rpl_change_master_crash_safe : This test fails on RocksDB SE
rpl.rpl_dynamic_ssl : This test fails on RocksDB SE
rpl.rpl_gtid_crash_safe : This test fails on RocksDB SE
rpl.rpl_heartbeat_ssl : This test fails on RocksDB SE
rpl.rpl_innodb_bug68220 : This test fails on RocksDB SE
rpl.rpl_master_connection : This test fails on RocksDB SE
rpl.rpl_row_crash_safe : This test fails on RocksDB SE
rpl.rpl_ssl1 : This test fails on RocksDB SE
rpl.rpl_ssl : This test fails on RocksDB SE
rpl.rpl_stm_mixed_mts_rec_crash_safe_small : This test fails on RocksDB SE
sys_vars.all_vars : This test fails on RocksDB SE

[CLOSED] Support index-only scans for DATETIME, TIMESTAMP, and DOUBLE

Issue by spetrunia
Monday Feb 02, 2015 at 18:39 GMT
Originally opened as MySQLOnRocksDB#26


Currently, index-only scans are not supported for DATETIME, TIMESTAMP, and DOUBLE columns.

Testcase:

create table t31 (pk int auto_increment primary key, key1 double, key(key1)) engine=rocksdb;
insert into t31 values (),(),(),(),(),(),(),();
explain select key1 from t31 where key1=1.234;

create table t32 (pk int auto_increment primary key, key1 datetime, key(key1))engine=rocksdb;
insert into t32 values (),(),(),(),(),(),(),();
explain select key1 from t32 where key1='2015-01-01 00:11:12';

create table t33 (pk int auto_increment primary key, key1 timestamp, key(key1))engine=rocksdb;
insert into t33 values (),(),(),(),(),(),(),();
explain select key1 from t33 where key1='2015-01-01 00:11:12';

This task is about adding support for them.
DATETIME/TIMESTAMP use Field_temporal_with_date_and_timef::make_sort_key(), which just does a memcpy().
DOUBLE uses change_double_for_sort(); we will need to write a reverse function.

Support index-only scans for type DOUBLE

Issue by spetrunia
Friday Apr 24, 2015 at 00:50 GMT
Originally opened as MySQLOnRocksDB#56


(branching this off from issue #26)

Currently, index-only scans are not supported for column type DOUBLE.
Testcase:

create table t31 (pk int auto_increment primary key, key1 double, key(key1)) engine=rocksdb;
insert into t31 values (),(),(),(),(),(),(),();
explain select key1 from t31 where key1=1.234;

It is actually possible to restore a DOUBLE value from its mem-comparable form and thus support index-only scans.
See filesort.cc: void change_double_for_sort(double nr, uchar *to) for the code that needs to be inverted; a sketch of the inverse follows.
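
A minimal sketch of the usual sign-flip scheme for mem-comparable doubles and its inverse. This is illustrative only; the actual filesort.cc code works byte by byte and differs in details, and the function names here are assumptions.

  #include <cstdint>
  #include <cstring>

  // Forward transform: make memcmp() on the 8 stored bytes sort numerically.
  void double_to_sortable(double nr, unsigned char *to) {
    uint64_t bits;
    std::memcpy(&bits, &nr, sizeof(bits));
    if (bits & (1ULL << 63))
      bits = ~bits;               // negative: invert everything
    else
      bits |= (1ULL << 63);       // non-negative: just set the sign bit
    for (int i = 0; i < 8; i++)   // store big-endian
      to[i] = static_cast<unsigned char>(bits >> (56 - 8 * i));
  }

  // Inverse transform: the piece that would enable index-only scans.
  double sortable_to_double(const unsigned char *from) {
    uint64_t bits = 0;
    for (int i = 0; i < 8; i++)
      bits = (bits << 8) | from[i];
    if (bits & (1ULL << 63))
      bits &= ~(1ULL << 63);      // was non-negative: clear the sign bit back
    else
      bits = ~bits;               // was negative: invert everything back
    double nr;
    std::memcpy(&nr, &bits, sizeof(nr));
    return nr;
  }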

Add more information to SHOW ENGINE ROCKSDB TRANSACTION STATUS

Issue by jkedgar
Friday Sep 04, 2015 at 17:29 GMT
Originally opened as MySQLOnRocksDB#106


Mark Callaghan requested the following pieces of information to be available through SHOW ENGINE ROCKSDB TRANSACTION STATUS:

  • number of row locks per transaction
  • amount of memory used to buffer row changes prior to commit per transaction
  • number of uncommitted row changes per transaction

These may depend on new features in RocksDB.

How to solve the error when compiling

Issue by MrDimension
Thursday May 28, 2015 at 03:00 GMT
Originally opened as MySQLOnRocksDB#73


The error occurred during the "make" step; the error message is below:
// ==========================================================
/u01/weidu.lww/myrocks/vio/viosslfactories.c: In function ‘new_VioSSLFd’:
/u01/weidu.lww/myrocks/vio/viosslfactories.c:266:3: warning: implicit declaration of function ‘ERR_clear_error’ [-Wimplicit-function-declaration]
ERR_clear_error();
^
/u01/weidu.lww/myrocks/vio/viosslfactories.c:307:5: error: unknown type name ‘EC_KEY’
EC_KEY *ecdh = EC_KEY_new_by_curve_name(NID_X9_62_prime256v1);
^
/u01/weidu.lww/myrocks/vio/viosslfactories.c:307:5: warning: implicit declaration of function ‘EC_KEY_new_by_curve_name’ [-Wimplicit-function-declaration]
/u01/weidu.lww/myrocks/vio/viosslfactories.c:307:45: error: ‘NID_X9_62_prime256v1’ undeclared (first use in this function)
EC_KEY *ecdh = EC_KEY_new_by_curve_name(NID_X9_62_prime256v1);
^
/u01/weidu.lww/myrocks/vio/viosslfactories.c:307:45: note: each undeclared identifier is reported only once for each function it appears in
/u01/weidu.lww/myrocks/vio/viosslfactories.c:309:7: warning: implicit declaration of function ‘SSL_CTX_set_tmp_ecdh’ [-Wimplicit-function-declaration]
SSL_CTX_set_tmp_ecdh(ssl_fd->ssl_context, ecdh);
^
/u01/weidu.lww/myrocks/vio/viosslfactories.c:310:7: warning: implicit declaration of function ‘EC_KEY_free’ [-Wimplicit-function-declaration]
EC_KEY_free(ecdh);
^
make[2]: *** [vio/CMakeFiles/vio.dir/viosslfactories.c.o] Error 1
make[1]: *** [vio/CMakeFiles/vio.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
make: *** [all] Error 2
// ==========================================================
I found that the code in the vio directory is a little different from the official version of MySQL 5.6; when I compile the official MySQL 5.6, this error does not occur. After a lot of trying, I suspect the cause is related to OpenSSL. Has anyone met the same problem? What can I do to solve it?
(ps: my gcc version is gcc-4.9.2)
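
For what it's worth, EC_KEY, EC_KEY_new_by_curve_name(), SSL_CTX_set_tmp_ecdh(), and NID_X9_62_prime256v1 all come from OpenSSL's EC headers, so errors like this usually point at an OpenSSL installation that is too old or was built without EC support. Below is a sketch of the usual guard (set_tmp_ecdh is a hypothetical helper), offered as an assumption about the cause rather than a confirmed fix:

  #include <openssl/ssl.h>
  #ifndef OPENSSL_NO_ECDH
  #include <openssl/ec.h>
  #include <openssl/objects.h>

  /* Only set up the ephemeral ECDH key when the installed OpenSSL provides EC. */
  static void set_tmp_ecdh(SSL_CTX *ssl_context) {
    EC_KEY *ecdh = EC_KEY_new_by_curve_name(NID_X9_62_prime256v1);
    if (ecdh != NULL) {
      SSL_CTX_set_tmp_ecdh(ssl_context, ecdh);
      EC_KEY_free(ecdh);
    }
  }
  #endif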

failed to compile from source code

Hi,
I got an error while trying to compile from source code. After commenting out the conflicting code, everything builds fine ...

Below is the error message:
../libmysqld.a(zutil.c.o):(.data.rel.ro.local+0x0): multiple definition of `z_errmsg'
../libmysqld.a(zutil.c.o):(.data.rel.ro.local+0x0): first defined here
../libmysqld.a(adler32.c.o): In function `adler32_combine_':
/u01/project/offical/mysql-5.6-webscalesql-5.6.16-47/storage/innobase/zlib_embedded/adler32.c:141: multiple definition of `adler32_combine'
../libmysqld.a(adler32.c.o):/u01/project/offical/mysql-5.6-webscalesql-5.6.16-47/zlib/adler32.c:138: first defined here
../libmysqld.a(crc32.c.o): In function `crc32_combine':
/u01/project/offical/mysql-5.6-webscalesql-5.6.16-47/storage/innobase/zlib_embedded/crc32.c:432: multiple definition of `crc32_combine'
../libmysqld.a(crc32.c.o):/u01/project/offical/mysql-5.6-webscalesql-5.6.16-47/zlib/crc32.c:381: first defined here
collect2: error: ld returned 1 exit status
make[2]: *** [libmysqld/examples/mysql_embedded] Error 1

I cannot compile mysql-5.6-webscalesql-5.6.24.97

cmake version 2.8.12
I installed devtoolset-1.1
gcc version 4.7.2-5
glibc 2.12-1
OS: Red Hat Enterprise Linux Server release 6.7

cmake command
cmake -DCMAKE_INSTALL_PREFIX=/db/mysql-5.6-webscalesql-5.6.24.97 \
  -DSYSCONFDIR=/etc \
  -DMYSQL_TCP_PORT=3306 \
  -DDEFAULT_CHARSET=utf8 \
  -DENABLED_LOCAL_INFILE=1 \
  -DWITH_EXTRA_CHARSETS=all \
  -DDEFAULT_COLLATION=utf8_general_ci \
  -DMYSQL_UNIX_ADDR=/tmp/mysql.sock \
  -DMYSQL_DATADIR=/data/mysql \
  -DWITH_SSL=system \
  -DENABLE_DOWNLOADS=1

cmake output

..
..
-- Performing Test HAVE_LLVM_LIBCPP - Failed
..
..
..
-- Check size of int8 - failed
-- Check size of int16 - failed
-- Check size of uint8 - failed
-- Check size of uint16 - failed
-- Check size of int32 - failed
-- Check size of uint32 - failed
-- Check size of int64 - failed
-- Check size of uint64 - failed
-- Check size of bool - failed
..
-- Performing Test TIME_T_UNSIGNED - Failed
..
-- Performing Test HAVE_TIMESPEC_TS_SEC - Failed
..
-- Performing Test HAVE_SOLARIS_STYLE_GETHOST - Failed
..
..
-- Configuring done
-- Generating done
-- Build files have been written to: /db/mysql-5.6-webscalesql-5.6.24.97

crash in innobase_update_index_stats

On trunk/5.6.12 I keep hitting this crash when running random queries involving INDEX_STATISTICS.

(gdb) bt
#0  in pthread_kill () from /lib64/libpthread.so.0
#1  in handle_fatal_signal (sig=11) at ./trunk/sql/signal_handler.cc:248
#2  <signal handler called>
#3  in innobase_update_index_stats (table_stats=0x7f53740766a0) at ./trunk/storage/innobase/handler/ha_innodb.cc:4051
#4  in get_index_stats_handlerton at ./trunk/sql/handler.cc:858
#5  in plugin_foreach_with_mask at ./trunk/sql/sql_plugin.cc:2094
#6  in ha_get_index_stats at ./trunk/sql/handler.cc:866
#7  in fill_index_stats  at ./trunk/sql/table_stats.cc:681
#8  in do_fill_table at ./trunk/sql/sql_show.cc:7195
#9  in get_schema_tables_result at ./trunk/sql/sql_show.cc:7296
#10 in JOIN::prepare_result at ./trunk/sql/sql_select.cc:844
#11 in JOIN::exec at ./trunk/sql/sql_executor.cc:116
#12 in mysql_execute_select  at ./trunk/sql/sql_select.cc:1121
#13 in mysql_select at ./trunk/sql/sql_select.cc:1242
#14 in handle_select at ./trunk/sql/sql_select.cc:125
#15 in execute_sqlcom_select at ./trunk/sql/sql_parse.cc:5534
#16 in mysql_execute_command at ./trunk/sql/sql_parse.cc:2969
#17 in mysql_parse at ./trunk/sql/sql_parse.cc:6694
#18 in dispatch_command at ./trunk/sql/sql_parse.cc:1402
#19 in do_command at ./trunk/sql/sql_parse.cc:1047
#20 in do_handle_one_connection  at ./trunk/sql/sql_connect.cc:1001
#21 in handle_one_connection at ./trunk/sql/sql_connect.cc:917
#22 in pfs_spawn_thread at ./trunk/storage/perfschema/pfs.cc:1855
#23 in start_thread from /lib64/libpthread.so.0
#24 in clone from /lib64/libc.so.6
(gdb) p mysql_parse::thd->query_string
$1 = {
  string = {
    str = 0x7f5374006c40 "select  \t ROUTINE_BODY from\n\t`information_schema`.`INNODB_SYS_FOREIGN` as `INNODB_SYS_FOREIGN` \n\t right outer join `information_schema`.`INDEX_STATISTICS` as `INDEX_STATISTICS` \non 2\n\n \t\n\t natural left outer join `information_schema`.`ROUTINES` as `ROUTINES` \n \t\n\t inner join `test`.`t0004` as `t0004`  \non ( 1  )\n  \n \ngroup by \n\tROUTINES.DEFINER desc",
    length = 351
  },
  cs = 0x1309180 <my_charset_latin1>
}
(gdb)

I have many core files but no exact testcase yet. Rerunning queries didn't crash.
More info later.

I want to compile mysql for facebook without rocksdb

Hello,

I want to compile the Facebook MySQL branch without RocksDB.
Which option should I pass to cmake?
For example, -DWITH_ROCKSDB_STORAGE_ENGINE=0?

Also, I want to test the document type for JSON data.
So I think compiling the source without RocksDB should not be a problem.
Is that right?

Thx.

Problem in building on CentOS 6.4

I am trying to build https://github.com/facebook/mysql-5.6 on CentOS 6.4 and the build is failing (I am using devtools-2).

Scanning dependencies of target merge_large_tests-t
[ 82%] Building CXX object unittest/gunit/CMakeFiles/merge_large_tests-t.dir/merge_large_tests.cc.o
In file included from /home/kartik/build_facebook_mysql_5.6/unittest/gunit/merge_large_tests.cc:13:0:
/home/kartik/mysql-5.6-webscalesql-5.6.16-47/unittest/gunit/log_throttle-t.cc: In member function 'virtual void log_throttle_unittest::LogThrottleTest_SlowLogBasic_Test::TestBody()':
/home/kartik/mysql-5.6-webscalesql-5.6.16-47/unittest/gunit/log_throttle-t.cc:84:78: error: invalid conversion from 'bool (*)(THD*, const char*, uint) {aka bool (*)(THD*, const char*, unsigned int)}' to 'bool (*)(THD*, const char*, uint, system_status_var*) {aka bool (*)(THD*, const char*, unsigned int, system_status_var*)}' [-fpermissive]
In file included from /home/kartik/mysql-5.6-webscalesql-5.6.16-47/unittest/gunit/fake_table.h:19:0,
from /home/kartik/mysql-5.6-webscalesql-5.6.16-47/unittest/gunit/mock_field_timestamp.h:19,
from /home/kartik/mysql-5.6-webscalesql-5.6.16-47/unittest/gunit/copy_info-t.cc:22,
from /home/kartik/build_facebook_mysql_5.6/unittest/gunit/merge_large_tests.cc:2:
/home/kartik/mysql-5.6-webscalesql-5.6.16-47/sql/sql_class.h:1469:3: error: initializing argument 4 of 'Slow_log_throttle::Slow_log_throttle(ulong*, mysql_mutex_t*, ulong, bool (*)(THD*, const char*, uint, system_status_var*), const char*)' [-fpermissive]
In file included from /home/kartik/build_facebook_mysql_5.6/unittest/gunit/merge_large_tests.cc:13:0:
/home/kartik/mysql-5.6-webscalesql-5.6.16-47/unittest/gunit/log_throttle-t.cc: In member function 'virtual void log_throttle_unittest::LogThrottleTest_SlowLogThresholdChange_Test::TestBody()':
/home/kartik/mysql-5.6-webscalesql-5.6.16-47/unittest/gunit/log_throttle-t.cc:122:78: error: invalid conversion from 'bool (*)(THD*, const char*, uint) {aka bool (*)(THD*, const char*, unsigned int)}' to 'bool (*)(THD*, const char*, uint, system_status_var*) {aka bool (*)(THD*, const char*, unsigned int, system_status_var*)}' [-fpermissive]
In file included from /home/kartik/mysql-5.6-webscalesql-5.6.16-47/unittest/gunit/fake_table.h:19:0,
from /home/kartik/mysql-5.6-webscalesql-5.6.16-47/unittest/gunit/mock_field_timestamp.h:19,
from /home/kartik/mysql-5.6-webscalesql-5.6.16-47/unittest/gunit/copy_info-t.cc:22,
from /home/kartik/build_facebook_mysql_5.6/unittest/gunit/merge_large_tests.cc:2:
/home/kartik/mysql-5.6-webscalesql-5.6.16-47/sql/sql_class.h:1469:3: error: initializing argument 4 of 'Slow_log_throttle::Slow_log_throttle(ulong*, mysql_mutex_t*, ulong, bool (*)(THD*, const char*, uint, system_status_var*), const char*)' [-fpermissive]
In file included from /home/kartik/build_facebook_mysql_5.6/unittest/gunit/merge_large_tests.cc:13:0:
/home/kartik/mysql-5.6-webscalesql-5.6.16-47/unittest/gunit/log_throttle-t.cc: In member function 'virtual void log_throttle_unittest::LogThrottleTest_SlowLogSuppressCount_Test::TestBody()':
/home/kartik/mysql-5.6-webscalesql-5.6.16-47/unittest/gunit/log_throttle-t.cc:154:78: error: invalid conversion from 'bool (*)(THD*, const char*, uint) {aka bool (*)(THD*, const char*, unsigned int)}' to 'bool (*)(THD*, const char*, uint, system_status_var*) {aka bool (*)(THD*, const char*, unsigned int, system_status_var*)}' [-fpermissive]
In file included from /home/kartik/mysql-5.6-webscalesql-5.6.16-47/unittest/gunit/fake_table.h:19:0,
from /home/kartik/mysql-5.6-webscalesql-5.6.16-47/unittest/gunit/mock_field_timestamp.h:19,
from /home/kartik/mysql-5.6-webscalesql-5.6.16-47/unittest/gunit/copy_info-t.cc:22,
from /home/kartik/build_facebook_mysql_5.6/unittest/gunit/merge_large_tests.cc:2:
/home/kartik/mysql-5.6-webscalesql-5.6.16-47/sql/sql_class.h:1469:3: error: initializing argument 4 of 'Slow_log_throttle::Slow_log_throttle(ulong*, mysql_mutex_t*, ulong, bool (*)(THD*, const char*, uint, system_status_var*), const char*)' [-fpermissive]
At global scope:
cc1plus: warning: unrecognized command line option "-Wno-null-dereference" [enabled by default]
make[2]: *** [unittest/gunit/CMakeFiles/merge_large_tests-t.dir/merge_large_tests.cc.o] Error 1
make[1]: *** [unittest/gunit/CMakeFiles/merge_large_tests-t.dir/all] Error 2
make: *** [all] Error 2

Am I missing something? Please let me know if there is a mailing list.
I basically need a non-blocking MySQL client (C bindings).

Thanks for your patience!

slowdown when using rtti switch

Issue by maykov
Tuesday Jul 28, 2015 at 22:03 GMT
Originally opened as MySQLOnRocksDB#94


We experienced a 5% slowdown when we started compiling without the -fno-rtti switch.

Oracle enabled rtti in 5.6.4: http://dev.mysql.com/worklog/task/?id=5825
commit: mysql/mysql-server@a5ee727

They didn't start using RTTI until 5.7, though.
Excerpt:
<<<A separate test of removing the -fno-exceptions and the -fno-rtti flags
shows that there is no significant difference in execution time between
having and not having these flags.>>>

[CLOSED] change SHOW ENGINE ROCKSDB STATUS

Issue by jonahcohen
Wednesday Jan 07, 2015 at 22:57 GMT
Originally opened as MySQLOnRocksDB#4


From @mdcallag:

For background see http://dev.mysql.com/doc/refman/5.6/en/show-engine.html

We need to change what is in SHOW ENGINE ROCKSDB STATUS. Right now it lists live SST files, which is a huge list with leveled compaction. For now I would prefer it to list the compaction stats output. That probably needs to use one of:
db->GetProperty("rocksdb.stats", ...
db->GetProperty("rocksdb.cfstats", ...

*************************** 1. row ***************************
Type: ROCKSDB
Name: live_files
Status: cf=default name=/4908814.sst size=97853952
cf=default name=/4908812.sst size=97879865
cf=default name=/4908807.sst size=97833748
cf=default name=/4905498.sst size=1865749
cf=default name=/4905500.sst size=2670668
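
For reference, a minimal sketch of fetching the suggested property (a sketch under the assumption of a rocksdb::DB* handle named db; "rocksdb.cfstats" works the same way, and get_compaction_stats is a hypothetical helper):

  #include <rocksdb/db.h>
  #include <string>

  // Returns the human-readable compaction/DB stats string that the status
  // command would print instead of the live-file list above.
  std::string get_compaction_stats(rocksdb::DB *db) {
    std::string stats;
    if (!db->GetProperty("rocksdb.stats", &stats))
      stats = "(rocksdb.stats not available)";
    return stats;
  }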

MIN_TIMER_WAIT field is inconsistent for wait/synch/cond/rocksdb/cond_stop

Issue by maykov
Monday Jul 20, 2015 at 21:59 GMT
Originally opened as MySQLOnRocksDB#92


To repro this, you need to build and run with perf schema enabled.

truncate table performance_schema.events_waits_summary_by_instance;
truncate table performance_schema.events_waits_summary_global_by_event_name;
select * from performance_schema.events_waits_summary_global_by_event_name where event_name like '%rocksdb%';
select * from performance_schema.events_waits_summary_by_instance where event_name like '%rocksdb%';

You will see all zeroes, as expected.

Repeat the SELECT portion of the above statements a few seconds later. Values for wait/synch/cond/rocksdb/cond_stop will be much higher. However, the MIN_TIMER_WAIT field in the _by_event_name table will stay at zero forever, while the _by_instance table will reflect the actual minimum wait time.

This causes the perfschema.aggregate test to break.

A few more questions: why are the values for cond_stop so huge (e.g. 1105186494971650)? Are they in CPU ticks?
Why do all other rocksdb counters stay at zero?

FbsonJsonParser and FbsonToJson ignore JSON string escape sequences

FbsonJsonParser does not decode JSON string escape sequences, with the exception of \ and ". The JSON standard specifies the following sequences that should be unescaped when parsing:

  • \"
  • \\
  • \/
  • \b
  • \f
  • \n
  • \r
  • \t
  • \uXXXX, where XXXX is a hexadecimal Unicode code point.

In addition, FbsonToJson does not escape the characters that must be escaped when serializing:

  • "
  • \
  • Control characters from U+0000 to U+001F.

Here is an example that does not parse correctly:

{"name": "Hello, \ud83c!"}

Release Row Locks when necessary, Transaction API variant

Issue by spetrunia
Tuesday Sep 01, 2015 at 23:02 GMT
Originally opened as MySQLOnRocksDB#105


MyRocks needs to release row locks in some cases:

  1. When a statement is aborted, all the locks it has taken should be released.
  2. SQL may call handler::unlock_row(). This should release the lock(s) that were taken for the last row that was read.

All locks are recursive: a lock may be acquired multiple times. When MyRocks wants to release the locks taken by a statement, it only means undoing the locking actions done by this statement. All the locking done before the last statement must remain.

Releasing locks for failed statement

(Current implementation in MyRocks was done in issue #57).

When a statement inside a transaction fails, MyRocks will make these calls:

   txn->SetSavePoint();  // Called when a statement starts

   ... statement actions like txn->Put(), txn->Delete(), txn->GetForUpdate()

   txn->RollbackToSavePoint();  // Called if the statement failed.

As far as I understood Antony @agiardullo's suggestion, it was:

Make txn->RollbackToSavePoint() also undo all locking actions done since the last txn->SetSavePoint() call.

This will work.

Release of the last acquired lock

This is used to release the lock that was obtained when reading the last row. From the MyRocks point of view, it would be sufficient if this TransactionDBImpl function were exposed in the TransactionDB class:

  void UnLock(TransactionImpl* txn, uint32_t cfh_id, const std::string& key);

MyRocks always knows which Column Family was used, and the key is saved in ha_rocksdb::last_rowkey.

However, the current implementation of TransactionDBImpl::UnLock is not sufficient. As far as I understand, it is not recursive: one can call TryLock() multiple times, and a single UnLock() call will fully release the lock. MyRocks needs the last UnLock() call to only undo the effect of the last TryLock() call.

make stat computations sane

Issue by maykov
Wednesday Aug 19, 2015 at 17:01 GMT
Originally opened as MySQLOnRocksDB#97


Right now, there is a delay of 1 hour between stats computations when using the default options. This makes MyRocks fail (run slowly) on standard tests such as Wisconsin. We need to fix this.

For example: update the stats on table X (including the memstore) whenever the table is accessed, if all of the table's indexes are smaller than Y bytes.

SAVEPOINT support

Issue by yoshinorim
Friday Aug 14, 2015 at 15:03 GMT
Originally opened as MySQLOnRocksDB#96


SAVEPOINT is needed for https://bugs.mysql.com/bug.php?id=71017. For bug 71017, SAVEPOINT for read-only transactions is good enough. We need to define the behavior if there are any updates; options are:

  • Supporting real SAVEPOINT (rolling back to the savepoint if executing ROLLBACK TO SAVEPOINT).
  • Returning errors if there is any modification after or at executing SAVEPOINT.

MyRocks data size is greater than InnoDB

Issue by BohuTANG
Thursday Jun 04, 2015 at 11:20 GMT
Originally opened as MySQLOnRocksDB#80


From our benchmarks under the same datasets for MyRocks/InnoDB/TokuDB, data sizes are:

MyRocks: 43 GB (the ./rocksdb dir)
InnoDB:  33 GB (without compression)
TokuDB:  15 GB (zlib compression with a compression ratio of 2, so the raw data is about 30 GB)

MyRocks uses all default configuration; the 'show engine rocksdb status' output is as follows:

mysql> show engine rocksdb status\G;
*************************** 1. row ***************************
  Type: DBSTATS
  Name: rocksdb
Status: 
** DB Stats **
Uptime(secs): 79985.6 total, 1704.4 interval
Cumulative writes: 54K writes, 280M keys, 54K batches, 1.0 writes per batch, ingest: 27.78 GB, 0.36 MB/s
Cumulative WAL: 54K writes, 54K syncs, 1.00 writes per sync, written: 27.78 GB, 0.36 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 0 writes, 0 keys, 0 batches, 0.0 writes per batch, ingest: 0.00 MB, 0.00 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 MB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

*************************** 2. row ***************************
  Type: CF_COMPACTION
  Name: __system__
Status: 
** Compaction Stats [__system__] **
Level    Files   Size(MB) Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) Comp(cnt) Avg(sec) Stall(cnt)  KeyIn KeyDrop
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0          0   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      3.3         0       222    0.002          0       0      0
  L1      1/0          0   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.9     12.8      6.1         0        56    0.004          0    110K   110K
 Sum      1/0          0   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.9      4.2      4.2         1       278    0.002          0    110K   110K
 Int      0/0          0   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0         0         0    0.000          0       0      0
Flush(GB): cumulative 0.002, interval 0.000
Stalls(count): 0 level0_slowdown, 0 level0_numfiles, 0 memtable_compaction, 0 leveln_slowdown_soft, 0 leveln_slowdown_hard

*************************** 3. row ***************************
  Type: CF_COMPACTION
  Name: default
Status: 
** Compaction Stats [default] **
Level    Files   Size(MB) Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) Comp(cnt) Avg(sec) Stall(cnt)  KeyIn KeyDrop
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0          1   0.5      0.0     0.0      0.0      27.7     27.7       0.0   0.0      0.0     62.3       456      7841    0.058          0       0      0
  L1      8/0          8   0.8     30.8    27.8      3.0      30.7     27.7       0.0   1.1     41.8     41.7       753      3239    0.233          0    280M      0
  L2     68/0         99   1.0      0.0     0.0      0.0       0.0      0.0      27.7   0.0      0.0      0.0         0         0    0.000          0       0      0
  L3    543/0        998   1.0      0.0     0.0      0.0       0.0      0.0      27.7   0.0      0.0      0.0         0         0    0.000          0       0      0
  L4   5078/0       9998   1.0      4.7     3.2      1.5       4.5      3.1      24.5   1.4     30.1     29.3       158       786    0.202          0     57M   669K
  L5  15910/0      31843   0.3     45.9    24.3     21.5      45.3     23.7       3.3   1.9     19.4     19.1      2427      4600    0.528          0    252M  1024K
 Sum  21609/0      42947   0.0     81.3    55.3     26.0     108.2     82.2      83.1   3.9     21.9     29.2      3794     16466    0.230          0    589M  1693K
 Int      0/0          0   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0         0         0    0.000          0       0      0
Flush(GB): cumulative 27.749, interval 0.000
Stalls(count): 0 level0_slowdown, 0 level0_numfiles, 0 memtable_compaction, 0 leveln_slowdown_soft, 0 leveln_slowdown_hard

3 rows in set (0.00 sec)

and

mysql> select * from ROCKSDB_CF_OPTIONS where value like '%snappy%'\G;
*************************** 1. row ***************************
    CF_NAME: __system__
OPTION_TYPE: COMPRESSION_TYPE
      VALUE: kSnappyCompression
*************************** 2. row ***************************
    CF_NAME: default
OPTION_TYPE: COMPRESSION_TYPE
      VALUE: kSnappyCompression
2 rows in set (0.00 sec)

ERROR: 
No query specified

Defragmentation for partitioned InnoDB table

Hi Facebook,
really, thanks for your awesome features.

I am really interested in the defragmentation of InnoDB tables.
For a huge table, though, we can't split the defragmentation job.
It can be split at the index level, but if the PRIMARY KEY itself is huge, we have to run defragmentation all day.

So I think the defragmentation job could be split at the partition level if the table is partitioned.
But the current Facebook version of MySQL doesn't support this for partitioned tables.

I think it can be implemented easily (just a guess) and I want to try.
You could also implement it easily (you are more expert than me, at least ^^), but you didn't.
I assume there is a good reason you didn't implement it. Could you please share it?

Thanks again.

Change locking system into using RocksDB's pessimistic transactions

Issue by spetrunia
Wednesday Jun 17, 2015 at 21:02 GMT
Originally opened as MySQLOnRocksDB#86


RocksDB's pessimistic transaction system handles locking and also takes care of storing not-yet-committed changes made by the transaction. That is, it has two counterparts in MyRocks:

  1. Row locking module (rdb_locks.h: class LockTable)
  2. Table of transaction's changes (rdb_rowmods.h: class Row_table).

If we just replace #1, there will be data duplication (Row_table will have the same data as WriteBatchWithIndex).

Using the API to get SQL semantics

At start, we call
transaction->SetSnapshot()

this gives us:

+  // If SetSnapshot() is used, then any attempt to read/write a key in this
+  // transaction will fail if either another transaction has this key locked
+  // OR if this key has been written by someone else since the most recent
+  // time SetSnapshot() was called on this transaction.

Then reading, modifying, and writing a key can be done with a simple:

rocksdb_trx->Get() 
... modify the record as needed
rocksdb_trx->Put()
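
Spelled out a bit more, a sketch of that read-modify-write flow against the pessimistic transaction API (the function name update_key and the txn_db handle are assumptions; error handling is minimal):

  #include <rocksdb/utilities/transaction_db.h>
  #include <string>

  void update_key(rocksdb::TransactionDB *txn_db, const std::string &key) {
    rocksdb::WriteOptions write_options;
    rocksdb::ReadOptions read_options;

    rocksdb::Transaction *txn = txn_db->BeginTransaction(write_options);
    txn->SetSnapshot();  // conflict-check against the state as of this point

    std::string value;
    rocksdb::Status s = txn->Get(read_options, key, &value);
    if (s.ok() || s.IsNotFound()) {
      // ... modify the record as needed ...
      s = txn->Put(key, value);  // fails if the key is locked elsewhere or was
                                 // written by someone else since SetSnapshot()
      if (s.ok())
        s = txn->Commit();
    }
    if (!s.ok())
      txn->Rollback();
    delete txn;
  }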

Misc notes

  • Modifying the same row multiple times will work, it seems.

Open issues

  • The Transaction API doesn't allow creating iterators. How does one create an Iterator that looks at a snapshot that agrees with the transaction? Pass write_options.snapshot into Transaction::BeginTransaction()?
  • Statement rollback. If a statement within a transaction fails, it is rolled back: all its changes are undone and its locks should be released, while the transaction's earlier changes/locks should remain. There seems to be no way to achieve this in the API?
  • SELECT ... LOCK IN SHARE MODE. There seems to be no way to achieve shared read locks in the API.

Intermittent failure in test rocksdb.rocksdb

Alexey suggested that I assign this issue to you. We're seeing occasional failures of this test in our CI testing.

rocksdb.rocksdb w9 [ fail ]
Test ended at 2015-09-16 21:23:21

CURRENT_TEST: rocksdb.rocksdb
--- /data/users/jenkins/workspace/github-mysql-nightly/BUILD_TYPE/ASan/CLIENT_MODE/Async/PAGE_SIZE/32/TEST_SET/MixedOtherBig/label/mysql/mysql/mysql-test/suite/rocksdb/r/rocksdb.result 2015-09-17 06:20:07.937508115 +0300
+++ /data/users/jenkins/workspace/github-mysql-nightly/BUILD_TYPE/ASan/CLIENT_MODE/Async/PAGE_SIZE/32/TEST_SET/MixedOtherBig/label/mysql/mysql/_build-5.6-ASan/mysql-test/var/9/log/rocksdb.reject 2015-09-17 07:23:20.339521073 +0300
@@ -1731,7 +1731,7 @@

The following must return true (before the fix, the difference was 70):

select if((@var2 - @var1) < 30, 1, @var2-@var1);
if((@var2 - @var1) < 30, 1, @var2-@var1)
-1
+93
drop table t0,t1;

Issue #33: SELECT ... FROM rocksdb_table ORDER BY primary_key uses sorting

mysqltest: Result content mismatch

DML statements over reverse-ordered CFs are very slow after #86.

Issue by spetrunia
Friday Sep 04, 2015 at 20:43 GMT
Originally opened as MySQLOnRocksDB#107


Finally figured out why some DELETE queries got very slow (about 100x slower) after the fix for #86.

create table t4 (
  id int, value int, value2 varchar(200), 
  primary key (id) comment 'rev:cf_i3', 
  index(value) comment 'rev:cf_i3'
) engine=rocksdb;

Consider a query:

delete from t4 where id <= 3000;

EXPLAIN is:

+----+-------------+-------+-------+---------------+---------+---------+-------+------+-------------+
| id | select_type | table | type  | possible_keys | key     | key_len | ref   | rows | Extra       |
+----+-------------+-------+-------+---------------+---------+---------+-------+------+-------------+
|  1 | SIMPLE      | t4    | range | PRIMARY       | PRIMARY | 4       | const |    1 | Using where |
+----+-------------+-------+-------+---------------+---------+---------+-------+------+-------------+

MySQL will use the following algorithm

  h->index_init(PRIMARY);
  for (res= h->index_first(); res != EOF && table.id <= 3000; res= h->index_next())
  {
    h->delete_row(); // deletes the row we've just read
  }

The table uses reverse column families, so this translates into these RocksDB
calls:

  trx= db->BeginTransaction();
  iter= trx->NewIterator();
  iter->Seek(index_number);
  while ()
  {
    if (!iter->Valid() || !key_value_in_table(iter->key()))
    {
      // No more rows in the range
      break;
    }

    rowkey= iter->key();
    trx->Delete(rowkey);  // (*)

    iter->Prev();  // (**)
  }

Note the lines (*) and (**).

include/rocksdb/utilities/transaction.h has this comment:

  // The returned iterator is only valid until Commit(), Rollback(), or
  // RollbackToSavePoint() is called.
  // NOTE: Transaction::Put/Merge/Delete will currently invalidate this iterator
  // until
  // the following issue is fixed:
  // https://github.com/facebook/rocksdb/issues/616
  virtual Iterator* GetIterator(const ReadOptions& read_options) = 0;

I assume it refers to this comment in issue #616:

  it is not safe to mutate the WriteBatchWithIndex while iterating through 
  the iterator generated by NewIteratorWithBase()

So I implemented 'class Stabilized_iterator', which wraps the iterator returned
by GetIterator(), but keeps itself valid across Put/Merge/Delete calls.

It does so by

  • remembering the key it is pointing at
  • calling backend_iter->Seek(last_key) if a Put or Delete operation happens (see the sketch below).

This works, but in the above scenario it is very slow.
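
A rough sketch of what such a wrapper looks like (an assumed shape, not the actual MyRocks class):

  #include <rocksdb/iterator.h>
  #include <string>

  // Remembers the current key and re-seeks the underlying iterator after any
  // Put/Merge/Delete on the transaction invalidates it.
  class Stabilized_iterator {
   public:
    explicit Stabilized_iterator(rocksdb::Iterator *it) : iter_(it) {}

    void Seek(const rocksdb::Slice &target) { iter_->Seek(target); remember(); }
    void Next() { iter_->Next(); remember(); }
    void Prev() { iter_->Prev(); remember(); }
    bool Valid() const { return iter_->Valid(); }
    rocksdb::Slice key() const { return iter_->key(); }
    rocksdb::Slice value() const { return iter_->value(); }

    // Must be called after every Put/Merge/Delete that invalidates iter_.
    void on_write() { iter_->Seek(last_key_); }

   private:
    void remember() { if (iter_->Valid()) last_key_ = iter_->key().ToString(); }

    rocksdb::Iterator *iter_;
    std::string last_key_;
  };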

Here's why. The table is in reverse-ordered CF, so it stores the data in this physical order:

   TABLE 
  row10K
  ...
  row03
  row02
  row01
  row00
  another-table-row

However, DELETE works in the logical order. First it deletes row00, then row01, etc. Eventually, Transaction's WriteBatchWithIndex has:

   kDeletedRecord  row03
   kDeletedRecord  row02
   kDeletedRecord  row01
   kDeletedRecord  row00

We read row04. We call "trx->Delete(row04)", and the WriteBatchWithIndex now is:

   kDeletedRecord  row04
   kDeletedRecord  row03
   kDeletedRecord  row02
   kDeletedRecord  row01
   kDeletedRecord  row00

Then, we call iter->Prev() (line (**)). Stabilized_iterator notes that its underlying iterator has been invalidated. In order to restore it, it calls

  backend_iter->Seek(row04).

This operation finds row04 in the table, but it also sees {kDeletedRecord, row04} in the WriteBatchWithIndex. It advances both of its underlying iterators, until it reaches another-table-row.

Then, Stabilized_iterator calls backend_iter->Prev(). In this call, the iterator walks back through the pairs of row00, row01, ... row04, until it finds row05 in the base table.

This works, but if one deletes N rows it takes O(N^2) operations.

Implement Read Free Replication

Issue by yoshinorim
Wednesday Jan 14, 2015 at 01:48 GMT
Originally opened as MySQLOnRocksDB#21


This is about implementing a read-free slave, similar to what TokuDB recently implemented.

The idea is that one can process RBR events without making Get() calls or scans: the events carry sufficient information to issue Put/Delete calls directly.

RocksDB SE will support RBR only, so the RBR restriction is not a problem.
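
As a rough illustration (the RowImage struct and helper functions below are hypothetical, not the MyRocks apply code), an RBR event already carries the packed key and value, so the slave can apply it blindly:

  #include <rocksdb/db.h>
  #include <string>

  // Hypothetical decoded row image taken from an RBR event.
  struct RowImage {
    std::string pk;     // packed primary key
    std::string value;  // packed non-key columns
  };

  // Read-free apply: no Get() call or scan, just a blind Put/Delete.
  void apply_write_rows(rocksdb::DB *db, const RowImage &after) {
    db->Put(rocksdb::WriteOptions(), after.pk, after.value);
  }

  void apply_delete_rows(rocksdb::DB *db, const RowImage &before) {
    db->Delete(rocksdb::WriteOptions(), before.pk);
  }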

support binlog + rocksdb group commit and XA

Issue by jonahcohen
Wednesday Jan 07, 2015 at 23:01 GMT
Originally opened as MySQLOnRocksDB#8


From @mdcallag:

This requires more discussion but many of us are in favor of it.

It might be time to use the binlog as the source of truth to avoid the complexity and inefficiency of keeping RocksDB and the binlog synchronized via internal XA. There are two modes for this. The first mode is durable in which case fsync is done after writing the binlog and RocksDB WAL. The other mode is non-durable in which case fsync might only be done once per second and we rely on lossless semisync to recover. Binlog as source of truth might have been discussed on a MariaDB mail list many years ago - https://lists.launchpad.net/maria-developers/msg01998.html

Some details are at http://yoshinorimatsunobu.blogspot.com/2014/04/semi-synchronous-replication-at-facebook.html

The new protocol will be:

  1. write binlog
  2. optionally sync binlog
  3. optionally wait for semisync ack
  4. commit rocksdb - this also persists the GTID within RocksDB for the most recent commit, this also makes changes from the transaction visible to others
  5. optionally sync rocksdb WAL

When lossless semisync is used we skip steps 2 and 4. When lossless semisync is not used we do step 2 and skip 3. Step 4 is optional. Recovery in this case is done by:

  1. query RocksDB to determine GTID of last commit it has
  2. extract/replay transactions from binlog >= GTID from previous step

When running in non-durable mode, one of the following is true after a crash, where the relation describes which one has more commits:

  1. rocksdb > binlog
  2. binlog > rocksdb
  3. rocksdb == binlog
    If you know which state the server is in, then you can reach state 3. If in state 1 then append events to the binlog without running them on innodb. If in state 2 then replay events to innodb without recording to binlog. If in state 3 then do nothing. Both RocksDB and the binlog can tell us the last GTID they contain and we can compare that with the binlog archived via lossless semisync to determine the state.

failed to make facebook mysql-5.6

Hi,
I am trying to run cmake . && make, but unfortunately it fails.

error message:
/u01/project/mysql-5.6/sql/item_func.h:31: error: ‘std::isfinite’ has not been declared

Looking forward to your reply.

thanks.
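
For what it's worth, std::isfinite is declared in <cmath> and needs a C++11-capable toolchain. A minimal stand-alone check, offered as an assumption about the cause rather than a confirmed fix:

  #include <cmath>
  #include <cstdio>

  // If this small program does not compile with your toolchain, item_func.h
  // will fail the same way: std::isfinite is missing from namespace std.
  int main() {
    double d = 1.0 / 3.0;
    std::printf("finite: %d\n", std::isfinite(d) ? 1 : 0);
    return 0;
  }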

Needs more space efficient row checksums

Issue by yoshinorim
Monday Jul 27, 2015 at 14:49 GMT
Originally opened as MySQLOnRocksDB#93


Here are the database sizes after loading LinkBench data with 1.5B max ids.
Without row checksum: 570959236
With row checksum enabled: 705439184

A 23.6% space increase is too much. This is because the current row checksum adds 9 bytes (a 1-byte flag plus a CRC32 of the key and a CRC32 of the value) to the value of each index entry.

How about optimizations like the ones below?

  • Adding a new checksum format that uses CRC8 instead of CRC32. This reduces the per-row overhead from 9 bytes to 3 bytes.
  • Adding a rocksdb_checksum_row_pct global variable to control what percentage of rows gets checksummed. For example, setting it to 10 enables checksums for only 10% of rows, reducing the checksum space overhead by 90%. This approach may miss some corruptions, but in practice many more than 10 rows will be affected if there is any corruption bug, so sooner or later the corruption would be detected.
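
For illustration, a sketch of the current 9-byte layout described above (a flag byte plus CRC32 of the key and CRC32 of the value appended to the stored value); this mirrors the description in this issue, not the exact MyRocks encoding:

  #include <zlib.h>
  #include <cstdint>
  #include <string>

  // Append the per-row checksum: 1 flag byte + CRC32(key) + CRC32(value) = 9 bytes.
  // The proposed CRC8 variant would shrink this overhead from 9 to 3 bytes.
  std::string append_row_checksum(const std::string &key, const std::string &value) {
    uint32_t key_crc = crc32(0L, reinterpret_cast<const Bytef *>(key.data()), key.size());
    uint32_t val_crc = crc32(0L, reinterpret_cast<const Bytef *>(value.data()), value.size());
    std::string out = value;
    out.push_back(1);  // "checksum present" flag
    out.append(reinterpret_cast<const char *>(&key_crc), sizeof(key_crc));
    out.append(reinterpret_cast<const char *>(&val_crc), sizeof(val_crc));
    return out;        // 9 bytes larger than the original value
  }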

innochecksum fails on compressed tables with key_block_size=16

Running innochecksum on a table compressed with key_block_size=16 results in:

InnoDB offline file checksum utility.
Table is uncompressed
Page size is 16384
Fail; page 0 invalid (fails log sequence number check)

This is a regression relative to the same patch in the Facebook 5.1 branch and is caused by 5bd4269 assuming that a table is compressed iff its physical page size differs from its logical page size.

I'll submit a PR shortly.

I want to use document path.

I complied "webscalesql-5.6.24.97"
And I changed system variables

mysql> show variables like '%son%';
+-------------------------+-------+
| Variable_name | Value |
+-------------------------+-------+
| end_markers_in_json | ON |
| use_fbson_input_format | ON |
| use_fbson_output_format | ON |
+-------------------------+-------+
3 rows in set (0.00 sec)

mysql> show variables like '%document%';
+---------------------+-------+
| Variable_name | Value |
+---------------------+-------+
| allow_document_type | ON |
+---------------------+-------+
1 row in set (0.00 sec)

then,

mysql> show create table tt;
+-------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Table | Create Table |
+-------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| tt | CREATE TABLE tt (
id int(11) NOT NULL,
doc document NOT NULL,
PRIMARY KEY (id),
UNIQUE KEY id_doc (id,doc.address.zipcode AS INT)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 |
+-------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)

mysql> insert into tt values (100, '{"name":"Tom","age":30,"married":false,"address":{"houseNumber":1001,"streetName":"main","zipcode":"98761","state":"CA"},"cars":["F150","Honda"],"memo":null}');
Query OK, 1 row affected (0.00 sec)

mysql> select id, doc.name from tt where doc.address.zipcode like '98761';
Empty set (0.00 sec)

mysql> select * from tt;
+-----+--------------------------------------------------------------------------------------------------------------------------------------------------------------+
| id | doc |
+-----+--------------------------------------------------------------------------------------------------------------------------------------------------------------+
| 100 | {"name":"Tom","age":30,"married":false,"address":{"houseNumber":1001,"streetName":"main","zipcode":"98761","state":"CA"},"cars":["F150","Honda"],"memo":null} |
+-----+--------------------------------------------------------------------------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)

mysql>

I cannot find the data using a document path.

What should I do?

Online/inplace DDL operations

Issue by spetrunia
Tuesday Jun 30, 2015 at 17:57 GMT
Originally opened as MySQLOnRocksDB#87


This task is about adding support for online (and/or in-place) DDL operations for MyRocks.

What can be supported

  • Some DDL changes have no effect on how the data is stored and so can be trivially supported:
      - Default value changes
      - N in char(N)
      - possibly something else
  • Some changes may require a data format change:
      - Should we try to support adding/removing non-key fields?
  • Some changes allow for an in-place operation:
      - The most important are ADD INDEX/DROP INDEX

How to support it

SQL layer will make the following calls:

h->check_if_supported_inplace_alter() // = HA_ALTER_INPLACE...
h->prepare_inplace_alter_table()
...
h->commit_inplace_alter_table()

The first is to inquire whether the storage engine supports an in-place operation for the given ALTER TABLE statement; the latter two actually make the change. A sketch of the first call follows.
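
To make the first call concrete, here is a sketch of how a storage engine typically answers it (a hypothetical ha_example handler, assuming the usual sql/handler.h environment; the enum values and Alter_inplace_info flags come from the 5.6 handler API, but this is not the MyRocks implementation):

  enum_alter_inplace_result
  ha_example::check_if_supported_inplace_alter(TABLE *altered_table,
                                               Alter_inplace_info *ha_alter_info) {
    // Only claim in-place support for the operations we can really handle.
    const Alter_inplace_info::HA_ALTER_FLAGS supported =
        Alter_inplace_info::ADD_INDEX | Alter_inplace_info::DROP_INDEX;

    if (ha_alter_info->handler_flags & ~supported)
      return HA_ALTER_INPLACE_NOT_SUPPORTED;  // fall back to copying the table

    return HA_ALTER_INPLACE_SHARED_LOCK;      // build/drop the index in place
  }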

Support index-only scans for collations other than _bin

Issue by spetrunia
Friday Jan 30, 2015 at 12:54 GMT
Originally opened as MySQLOnRocksDB#25


Currently, index-only scans are supported for

  • numeric columns
  • varchar columns with BINARY, latin1_bin, utf8_bin collation.

For other collations (e.g. case-insensitive _ci collations), index-only scans are not supported. The reason is that it is not possible to restore the original column value from its mem-comparable key. For example, in latin1_general_ci, 'foo', 'Foo', and 'FOO' all have the mem-comparable form 'FOO'.

A possible solution could work like this:

  1. In addition to the value->mem_comparable_form function, develop a value->(mem_comparable_form, restore_data) function. This is easy for some charsets.
  2. Store restore_data in the value part of RocksDB's key-value pair. We used to store information about the VARCHAR field length there, but now the value part is unused. A sketch of this two-part encoding follows.
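
A sketch of that two-part encoding for a simple single-byte, case-insensitive charset (illustrative only, with hypothetical function names: mem_comparable is the upper-cased string and restore_data is a bitmap of which bytes were lower case; real collations need more state than this):

  #include <cctype>
  #include <string>
  #include <utility>

  std::pair<std::string, std::string> make_comparable(const std::string &v) {
    std::string mem_comparable;
    std::string restore_data((v.size() + 7) / 8, 0);
    for (size_t i = 0; i < v.size(); ++i) {
      unsigned char c = static_cast<unsigned char>(v[i]);
      mem_comparable += static_cast<char>(std::toupper(c));
      if (std::islower(c))
        restore_data[i / 8] |= static_cast<char>(1 << (i % 8));  // "was lower case"
    }
    return std::make_pair(mem_comparable, restore_data);
  }

  std::string restore_value(const std::string &mem_comparable,
                            const std::string &restore_data) {
    std::string v = mem_comparable;
    for (size_t i = 0; i < v.size(); ++i)
      if (restore_data[i / 8] & (1 << (i % 8)))
        v[i] = static_cast<char>(std::tolower(static_cast<unsigned char>(v[i])));
    return v;
  }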

See also:


Diffs:
https://reviews.facebook.net/D58269
https://reviews.facebook.net/D58503
https://reviews.facebook.net/D58875
