Comments (12)

hermanlee avatar hermanlee commented on April 28, 2024

Comment by mdcallag
Thursday Jun 04, 2015 at 11:22 GMT


Excellent, can you also send me the values of any rocksdb config options set in my.cnf? Will take me a few hours to respond.

from mysql-5.6.

Comment by BohuTANG
Thursday Jun 04, 2015 at 11:52 GMT


There is no rocksdb configuration in my.cnf, so everything is at the defaults.

Comment by mdcallag
Thursday Jun 04, 2015 at 13:28 GMT


Can you tell me what is in your RocksDB LOG file (named "LOG") under "Compression algorithms supported"? Mine shows:

2015/06/04-04:33:23.366528 7faff86d38c0 Compression algorithms supported:
2015/06/04-04:33:23.366530 7faff86d38c0 Snappy supported: 1
2015/06/04-04:33:23.366531 7faff86d38c0 Zlib supported: 1
2015/06/04-04:33:23.366540 7faff86d38c0 Bzip supported: 1
2015/06/04-04:33:23.366542 7faff86d38c0 LZ4 supported: 1
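If the block is hard to spot in a busy LOG, grep can pull it out directly. A minimal sketch: the excerpt below is written to a temp file purely for illustration; on a real server you would point grep at <datadir>/.rocksdb/LOG instead.

```shell
# Write a hypothetical LOG excerpt to a temp file (illustration only).
LOG_FILE=$(mktemp)
cat > "$LOG_FILE" <<'EOF'
2015/06/04-04:33:23.366528 7faff86d38c0 Compression algorithms supported:
2015/06/04-04:33:23.366530 7faff86d38c0 Snappy supported: 1
2015/06/04-04:33:23.366531 7faff86d38c0 Zlib supported: 1
2015/06/04-04:33:23.366540 7faff86d38c0 Bzip supported: 1
2015/06/04-04:33:23.366542 7faff86d38c0 LZ4 supported: 1
EOF

# Print the header line plus the four algorithm lines that follow it.
grep -A4 "Compression algorithms supported" "$LOG_FILE"
```

A value of 0 for an algorithm means that library was not linked in when RocksDB was built.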

Comment by yoshinorim
Thursday Jun 04, 2015 at 16:25 GMT


Could you try the following my.cnf settings and share results?
This makes MyRocks use zlib level 2 compression for most levels (compression_per_level and compression_opts) and place files across levels more efficiently (level_compaction_dynamic_level_bytes). By default the RocksDB block size is 4KB; increasing it to 16KB will reduce space usage somewhat.

rocksdb_block_size=16384
rocksdb_max_total_wal_size=4096000000
rocksdb_block_cache_size=12G
rocksdb_default_cf_options=write_buffer_size=128m;target_file_size_base=32m;max_bytes_for_level_base=512m;level0_file_num_compaction_trigger=4;level0_slowdown_writes_trigger=10;level0_stop_writes_trigger=15;max_write_buffer_number=4;compression_per_level=kNoCompression:kNoCompression:kSnappyCompression:kZlibCompression:kZlibCompression:kZlibCompression:kZlibCompression;compression_opts=-14:2:0;block_based_table_factory={cache_index_and_filter_blocks=1;filter_policy=bloomfilter:10:false;whole_key_filtering=0;};prefix_extractor=capped:20;level_compaction_dynamic_level_bytes=true;optimize_filters_for_hits=true

Comment by mdcallag
Thursday Jun 04, 2015 at 17:12 GMT


Yoshi - before we tune, we need to confirm that compression was enabled in his RocksDB build. Then we can tune. MyRocks has a lousy default RocksDB configuration, and this issue can be kept open to fix that. On my test server the defaults are:

Started an instance locally with default my.cnf:
Options.max_open_files: 5000
Options.max_background_compactions: 1
Options.max_background_flushes: 1
--> max_open_files should be larger, max_background_compactions and max_background_flushes should be >= 4 for many systems

Compression algorithms supported:
Snappy supported: 1
Zlib supported: 1
Bzip supported: 1
LZ4 supported: 1

cache_index_and_filter_blocks: 1
index_type: 0
hash_index_allow_collision: 1
checksum: 1
no_block_cache: 0
block_cache: 0x7faff4c88078
block_cache_size: 8388608
block_cache_compressed: (nil)
block_size: 4096
block_size_deviation: 10
block_restart_interval: 16
filter_policy: nullptr
format_version: 2
--> block_size should be larger: many blocks will be compressed, and a 4KB block that compresses to much less than 4KB can waste IO when the file system page size is 4KB

Options.write_buffer_size: 4194304
Options.max_write_buffer_number: 2
Options.compression: Snappy
Options.num_levels: 7
--> should use a larger value for write_buffer_size, default is 4M, maybe 64M

Options.min_write_buffer_number_to_merge: 1
--> probably OK

Options.level0_file_num_compaction_trigger: 4
Options.level0_slowdown_writes_trigger: 20
Options.level0_stop_writes_trigger: 24
--> probably OK

Options.target_file_size_base: 2097152
Options.max_bytes_for_level_base: 10485760
--> ugh, maybe 32MB for target_file_size_base and 512MB for max_bytes_for_level_base. Default here means that sizeof(L0) is 10M

Options.level_compaction_dynamic_level_bytes: 0
--> we want this to be 1

Options.soft_rate_limit: 0.00
Options.hard_rate_limit: 0.00
--> want these to be set, maybe 2.5 for soft and 3.0 for hard
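Pulling the suggestions above together, a hedged my.cnf sketch might look like the following. All values are illustrative, and the rocksdb_* variable names are assumptions based on how MyRocks exposes RocksDB options elsewhere in this thread:

```ini
# Illustrative values only; tune for your own hardware and workload.
rocksdb_max_open_files=32768          # default of 5000 is low
rocksdb_max_background_compactions=4  # >= 4 for many systems
rocksdb_max_background_flushes=4
rocksdb_block_size=16384              # 4KB default wastes IO with compression
rocksdb_default_cf_options=write_buffer_size=64m;target_file_size_base=32m;max_bytes_for_level_base=512m;level_compaction_dynamic_level_bytes=true;soft_rate_limit=2.5;hard_rate_limit=3.0
```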

Comment by BohuTANG
Friday Jun 05, 2015 at 09:31 GMT


This issue was due to *.sst files not being properly cleaned up on DROP DATABASE.
I cleaned the .rocksdb dir, re-installed the database, and ran the same benchmark with yoshinorim's configuration; it looks OK to me now:
datasize: 19GB (snappy)

one 33MB sst dump:

$./sst_dump --show_properties --file=../../myrocks_mysql/data/.rocksdb/002120.sst
from [] to []
Process ../../myrocks_mysql/data/.rocksdb/002120.sst
Sst file format: block-based
Table Properties:
------------------------------
  # data blocks: 3840
  # entries: 310960
  raw key size: 4975360
  raw average key size: 16.000000
  raw value size: 57838560
  raw average value size: 186.000000
  data block size: 33559362
  index block size: 126490
  filter block size: 0
  (estimated) table size: 33685852
  filter policy name: rocksdb.BuiltinBloomFilter
  # deleted keys: 0

(4975360 + 57838560) / 33559362 ≈ 1.87X ratio
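The arithmetic can be reproduced directly from the sst_dump numbers above (raw key bytes plus raw value bytes, divided by the on-disk data block bytes):

```shell
# Compression ratio = (raw key size + raw value size) / data block size,
# using the table properties from the sst_dump output above.
awk 'BEGIN { printf "compression ratio: %.2fx\n", (4975360 + 57838560) / 33559362 }'
# prints: compression ratio: 1.87x
```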

Another question: how can I find the mapping between a table and its *.sst files?

Comment by mdcallag
Friday Jun 05, 2015 at 13:46 GMT


Space is eventually reclaimed as compaction runs. If one table accounts for
the majority of space then that will be easy to notice. Yoshi might know
whether there is a way to force manual compaction after running DROP TABLE.

There is also a way to use different column families for different tables
which can make this easier to manage and monitor. I don't know whether we
have documented that yet but I will share some details soon.

We are working, or have worked, on ways to gather per-table metrics when
many tables are in the same column family. It is a hard problem, but
required when most tables are in one column family. I will ask internally
about the status of that.

Comment by yoshinorim
Friday Jun 05, 2015 at 15:05 GMT


We have not started implementing the mapping between tables and *.sst files yet. I'll file another task to track this.

Comment by hermanlee
Friday Jun 05, 2015 at 16:00 GMT


We can also dump some of the rocksdb configuration options through the information schema, rather than having to look through the rocksdb LOG file:

select * from information_schema.rocksdb_cf_options;

The db options are mostly available through:

show global variables like 'rocksdb%';

Comment by yoshinorim
Friday Jun 05, 2015 at 17:54 GMT


@BohuTANG : BTW, you can get the size of each MyRocks table via the usual MySQL commands (SHOW TABLE STATUS or SELECT FROM information_schema.tables). Use these commands to compare compression ratios between tables. MyRocks recalculates these statistics every 600 seconds, which can be configured via the rocksdb_stats_dump_period_sec global variable. Note that SHOW TABLE STATUS / I_S does not include the size in the Memstore (we are working on including the Memstore size, not only the *.sst size).
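A sketch of the kind of query this enables, using standard information_schema columns; the engine filter is an assumption:

```sql
-- Per-table sizes as MyRocks reports them (sst-based; excludes Memstore).
SELECT table_schema,
       table_name,
       data_length,
       index_length
FROM information_schema.tables
WHERE engine = 'ROCKSDB'
ORDER BY data_length DESC;
```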

Comment by maykov
Friday Jun 05, 2015 at 19:16 GMT


BohuTANG, optimize table t1; will run manual compaction for the table. However, if you have already dropped the table, I can't think of an easy way to trigger compaction. One thing you can do is stop mysql and then use the ldb tool to run compaction. If space is more important than deletion speed, maybe you can do truncate table t1; optimize table t1; drop table t1; after https://reviews.facebook.net/D39579 is pushed.
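The suggested sequence, as a sketch (t1 is a placeholder table name, and the truncate-then-drop variant assumes the D39579 change has been pushed):

```sql
TRUNCATE TABLE t1;   -- delete the rows first
OPTIMIZE TABLE t1;   -- runs manual compaction for the table in MyRocks
DROP TABLE t1;       -- drops an already-compacted, mostly-empty table
```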

There is no correspondence between .sst files and tables or databases. The data is spread across sst files in insertion order and then intermixed through the compaction process.

Yoshi, I have this task: MySQLOnRocksDB#55, to expose what is stored in each sst file through the information schema.

yoshinorim avatar yoshinorim commented on April 28, 2024

We're aware of an issue where DROP TABLE does not reclaim space correctly. We're working on a fix, and issue #60 is tracking the problem. Closing this issue; updates will be posted at #60.
