topling / toplingdb
ToplingDB is a cloud native LSM Key-Value Store with a searchable compression algorithm and distributed compaction.
License: GNU General Public License v2.0
Remove redundant field.
557b64f fix FileSystem wrapper classes
4923b82 RandomAccessFileReader ...: Add exchange(FSRandomAccessFile*)
018271e Fix RandomAccessFile delegation methods
Setting level0_file_num_compaction_trigger=-1 should disable intra-L0 compaction, but it triggers an infinite write stop.
To reproduce: set write_buffer_size=1G and target_file_size_base=1M, then write data to the DB.
RocksDB schedules intra-L0 compaction in some cases. ToplingDB now treats level0_file_num_compaction_trigger=-1 as a flag to disable intra-L0 compactions, but this value also triggers the infinite write stop.
MergingIterator is a performance-critical class; one of its CPU hotspots is the InternalComparator and UserComparator, both of which are called through virtual functions.
Since the rocksdb Comparator has a Name(), this PR inlines the bytewise comparator code by checking the comparator Name.
This PR is based on PR facebook/rocksdb#9035
9f20ffc merging_iterator.cc: fix FORCE_INLINE
02458c4 merging_iterator.cc: add override
6bb244c merging_iterator.cc: ignore forceinline fail
7314205 merging_iterator.cc: format code
b91733d MergingIterator inline bytewise comparator
Hi everyone~
Is there a roadmap for ToplingDB now? I'm interested in ToplingDB and want to see what I can do. Thanks~
Use D3.js to show the graph, driven by a static json/yaml definition.
Showing an updated sideplugin definition is more complex: #35
Now DBIter::Next supports lazy load of the value, but DBIter::Prev does not support lazy load when the first visible entry is of kValueType.
DBIter::Prev needs to call the underlying iter->Prev to reach the position of the first visible kValueType entry; this requires backing up iter->value(), which prevents lazy load.
ToplingZipTable can load the value by ValueID, so we can back up the ValueID instead of the value content to realize lazy load. If zero copy is applicable, lazy load is not needed.
VecAutoSortTable needs to compute the max/min key for building the SST.
SstFileWriter also needs the max/min key for FileMetaData.
These two computations are redundant, so we remove the computation in SstFileWriter and fetch the result after TableBuilder::Finish(), at which point VecAutoSortTable returns the computed result.
1e4fbdc leipeng 2022-10-04 12:59:37 +0800 Add TableBuilder::GetBoundaryUserKey() for sst file writer
and relevant commits in topling-rocks:
For Key and Value, the same column position of different rows may use RLE compression, and Rank-Select can map RLE runs to the compressed form.
This PR resolves facebook/rocksdb#10591:
A DB Iterator is a heavy object and should be reused if possible; when a DB Iterator is reused, the underlying SST iterators should also be reused, so we have added an SST iter cache to LevelIterator.
The previously failing unit tests all pass now.
With the env var CACHE_SST_FILE_ITER set to 1, ReadOptions::cache_sst_file_iter is set to true by default, so all unit tests exercise iterator reuse.
Two unit tests still failed because the SST file iterator is cached; I skip the corresponding ASSERTs when cache_sst_file_iter is set to 1.
d043644 cache_sst_file_iter: fix iter leak for cache_sst_file_iter = false
e86b8d7 cache_sst_file_iter: change relavent code being similar to upstream
e80fe1b cache_sst_file_iter: bugfix for pinned_iters_mgr_
96a20be LevelIterator: Add ReadOptions::cache_sst_file_iter
Two years ago, I created PR facebook/rocksdb#7081, which failed on old MSVC and was thus rejected by rocksdb.
Now rocksdb has upgraded to C++17, so PR facebook/rocksdb#7081 will not fail in CI, and we can continuously migrate the existing enum/string conversion code to this enum reflection.
Below is a brief introduction to enum reflection (copied from PR facebook/rocksdb#7081):
For example:
ROCKSDB_ENUM_PLAIN(CompactionStyle, char,
kCompactionStyleLevel = 0x0,
kCompactionStyleUniversal = 0x1,
kCompactionStyleFIFO = 0x2,
kCompactionStyleNone = 0x3 // comma(,) can not be present here
);
assert(enum_name(kCompactionStyleUniversal) == "kCompactionStyleUniversal");
assert(enum_name(CompactionStyle(100)).size() == 0);
CompactionStyle cs = kCompactionStyleLevel;
assert(enum_value("kCompactionStyleUniversal", &cs) && cs == kCompactionStyleUniversal);
assert(!enum_value("bad", &cs) && cs == kCompactionStyleUniversal); // cs is not changed
// plain old enum defined in a namespace(not in a class/struct)
ROCKSDB_ENUM_PLAIN(EnumType, IntRep, e1 = 1, e2 = 2);
// this generates:
enum EnumType : IntRep { e1 = 1, e2 = 2 };
// enum reflection supporting code ...
// ...
// the supporting code makes the template functions enum_name and
// enum_value work for this EnumType
// The other three macros are similar, with some differences:
// enum class defined in a namespace(not in a class/struct)
ROCKSDB_ENUM_CLASS(EnumType, IntRep, Enum values ...);
// plain old enum defined in a class/struct(not in a namespace)
ROCKSDB_ENUM_PLAIN_INCLASS(EnumType, IntRep, Enum values ...);
// enum class defined in a class/struct (not in a a namespace)
ROCKSDB_ENUM_CLASS_INCLASS(EnumType, IntRep, Enum values ...);
FindFileInRange is defined in version_set.cc; ForwardIterator should use it to reduce code explosion.
Topling SSTs do not need to compare the userkey to check for a user key switch, because Topling SSTs are two-dimensional: they know the userkey boundary naturally.
When I compiled my demo, I found that you include port/likely.h, and at the same time I found that the dependency on port/likely.h has been removed from rocksdb. This dependency means another search path has to be added, as in issue facebook/rocksdb#2008. Is this behavior expected?
demo CMakeLists.txt:
......
find_path(ROCKSDB_INCLUDE_DIR rocksdb/db.h PATHS)
include_directories(${ROCKSDB_INCLUDE_DIR})
add_executable(toplingdb_forbid_l0_compact script.cc)
target_link_libraries(rocksdb_test rocksdb lz4 -lpthread -lz -lsnappy -lbz2 -lzstd -ldl)
After including rocksdb/db.h, the compilation produces:
In file included from /workspaces/toplingdb/script/main.cc:1:
In file included from /usr/local/include/rocksdb/db.h:21:
In file included from /usr/local/include/rocksdb/listener.h:15:
In file included from /usr/local/include/rocksdb/advanced_options.h:13:
In file included from /usr/local/include/rocksdb/cache.h:30:
In file included from /usr/local/include/rocksdb/compression_type.h:9:
In file included from /usr/local/include/rocksdb/enum_reflection.h:4:
/usr/local/include/rocksdb/preproc.h:473:10: fatal error: 'port/likely.h' file not found
#include "port/likely.h"
^~~~~~~~~~~~~~~
1 error generated.
make[2]: *** [CMakeFiles/rocksdb_test.dir/build.make:76: CMakeFiles/rocksdb_test.dir/main.cc.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:83: CMakeFiles/rocksdb_test.dir/all] Error 2
make: *** [Makefile:91: all] Error 2
Include rocksdb/db.h (implement a simple Put/Get logic) and build your project with cmake .. && make -j20.
Virtual function calls to the comparator are very frequent and thus a hot spot.
In most use cases, the default BytewiseComparator or ReverseBytewiseComparator is used.
This PR provides the basic support for our later PRs for FindFileInRange and MergingIterator:
devirtualize such virtual function calls for BytewiseComparator or ReverseBytewiseComparator
add a prefix cache to omit most memcmp calls and indirect memory accesses to keys
Performance of FindFileInRange was improved 20x+, and MergingIterator was improved 3x+.
See PR facebook/rocksdb#10646: FindFileInRange devirtualization and prefix cache.
The MergingIterator PR depends on PR facebook/rocksdb#9035, so we will create it later.
331715c leipeng 2022-09-19 19:26:36 +0800 Add Comparator::opt_cmp_type()
9327631 leipeng 2022-07-21 15:06:55 +0800 Merge branch 'sideplugin-7.04.0-415200d7' into sideplugin-7.06.0-a0c63083
27a169f leipeng 2022-06-20 21:40:41 +0800 IsBytewiseComparator: optimize add cmp type to Comparator
f033dac leipeng 2022-06-16 22:56:59 +0800 Add IsReverseBytewiseComparator()
5c62088 leipeng 2022-06-09 18:37:24 +0800 Merge branch 'sideplugin-7.01.0-a5e51305' into sideplugin-7.03.0-f85b31a2
b65e06f leipeng 2022-03-30 11:21:17 +0800 Merge branch 'sideplugin-6.28.0-677d2b4a' into sideplugin-7.01.0-a5e51305
de07870 leipeng 2021-12-31 14:35:01 +0800 Merge branch 'sideplugin-6.26.0-28bab0ef' into sideplugin-6.28.0-677d2b4a
8287e70 leipeng 2021-12-11 12:53:05 +0800 Move IsBytewiseComparator ... from topling-rocks to toplingdb repo
This PR speeds up GetApproximateSizes by ~15%; in the flame graph, the GetApproximateSizes time percentage is reduced from 7.01% to 5.92%.
# StartTrace
curl -d '{"file": "trace.txt", "filter": "kTraceFilterNone"}' "http://somehost:port/db/mydb?cmd=StartTrace"
# EndTrace
curl -d '{}' "http://somehost:port/db/mydb?cmd=EndTrace"
| Start | End |
|---|---|
| StartTrace | EndTrace |
| StartIOTrace | EndIOTrace |
| StartBlockCacheTrace | EndBlockCacheTrace |
Here Omit L0 Flush means: definitely reduce IO, memory, and CPU. The value content should not be stored in the MemTable; instead, store the value offset (in the WAL log) and size in the MemTable, so the WAL also needs to be mmap'ed. The complexity is:
We have realized the feature Convert MemTable to L0 SST; this feature needs MemTableRep to implement a new method ConvertToSST, and CSPPMemTab now realizes this feature by writing data to a file mmap.
The issue is: to be reliable, writing data to a file mmap does not reduce IO; it just spreads the IO pressure evenly over the lifetime of the MemTable.
In the best case, we set CSPPMemTab.sync_sst_file=false and let the operating system perform the sync appropriately; then, when the file is deleted after an L0->L1 compaction while the corresponding page caches have not yet been written back to the device, the write-back can be omitted.
The current code holds maxHeap_ in a unique_ptr; while maxHeap_ is in use, minHeap_ is still a valid object that consumes memory.
This PR uses a C++ union for minHeap_ and maxHeap_, which reduces memory usage.
Since MergingIterator is a very low-level component, all existing tests passing ensures correctness.
4ffd72b use union for minHeap_ and maxHeap_
4c277ab MergingIterator: rearrange fields to reduce paddings (#9024)
port/port_posix.cc:31:10: fatal error: terark/util/fast_getcpu.hpp: No such file or directory
#include <terark/util/fast_getcpu.hpp>
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~
compilation terminated.
The MADV_COLD flag for madvise seems unavailable in Linux kernel version 5.4.0-120-generic. Which kernel version is appropriate? Or are there any other dependencies?
Note: Please use Issues only for bug reports. For questions, discussions, feature requests, etc., post to the dev group: https://groups.google.com/forum/#!forum/rocksdb or https://www.facebook.com/groups/rocksdb.dev
One of the RocksDB secondary instances cannot get the correct data after 16 hours.
Expected behavior
Start one primary and 3 secondary instances on k8s. Every instance should get the same result for the same query.
Actual behavior
After 16 hours, the primary and 2 secondary instances return 1000 records for a user; one instance returns 500 records.
Steps to reproduce the behavior
Start one primary and 3 secondary instances on k8s.
Secondary instances invoke tryCatchUpWithPrimary every 3 seconds.
Everything works fine at first; the items can be read correctly on the 3 secondary instances.
After one day, one secondary instance becomes abnormal and cannot get the whole data compared to the other instances; no exception is thrown by tryCatchUpWithPrimary. Example: the other 3 instances (primary and the other 2 secondaries) get 1000 records, but this instance gets only 500 records, and it also cannot see the newest items inserted by the primary.
First seen on Java RocksdbJni 8.8.1; after downgrading to 6.29.4, the behavior is the same.
env:
Java + Spring Boot
aws k8s/aws efs
Rocksdb version: 6.29.4
Java RocksdbJni: 6.29.4.1
Do you have any suggestions for this issue?
I filed this issue with RocksDB, and I'm also raising it in the Chinese community: has anyone encountered a similar situation?
The output file meta may be used later.
We have an in-house branch of rocksdb that adds a compaction.output.file.raw.size histogram; we use meta.raw_key_size + meta.raw_value_size as the histogram value. We then saw that the histogram was always zero, which was caused by this bug.
2e93acd CompactionJob::FinishCompactionOutputFile: sync FileMeta with TableProperties
Forward MultiGet to overload with single cf.
Relevant commit: fe59c6a
ThreadLocalPtr::Reset(ptr) should Unref the old pointer under the suitable conditions (nullptr != oldptr && ptr != oldptr).
b7885ca thread_local.cc: Reset: use exchange instead of load+store
81a3e8c thread_local.cc: Reset: clean old ptr when old is not (null or newptr)
ca27ceb thread_local.cc: Reset: clean old ptr when old is not null
Show the runtime dynamic objects relation graph.
In a multi-process setup, when a thread forks a child process while another thread holds the mutex inside __tz_convert, the newly forked process will deadlock on a call to localtime_r, which calls __tz_convert.
MergeContext uses unnecessary indirect pointers and unnecessary std::string objects.
This PR is just an optimization; there are no semantic changes.
Compaction needs a CompactionFilter, which may use DB::Get for metadata (as in pika/todis/kvrocks). In distributed compaction, the compact_worker has no DB object and thus cannot support such compactions.
The ToplingZipTable builder uses two-pass scanning: it saves decompressed kv data into temp files, and in the second pass it reads the data back from the temp files. Thus we can run the first pass on the DB side (local compaction) and run the second pass in the compaction worker to compress the data -- compression consumes 80+% of the CPU time for ToplingZipTable.
This PR resolves facebook/rocksdb#10487 & facebook/rocksdb#10536, user code needs to call Refresh() periodically.
b4352e4 arena_wrapped_db_iter.cc: ArenaWrappedDBIter::Refresh(snap): Update read_options_.snapshot
6bed77f Add Iterator::RefreshKeepSnapshot()
628f6ce Add Iterator::Refresh(snapshot)
8b353cf leipeng 2022-07-30 23:06:18 +0800 autovector.h: perf improve & exception-safe fix
58d069c leipeng 2022-06-25 13:31:18 +0800 autovector: optimize front() and back()
f08745c leipeng 2022-06-24 17:15:50 +0800 autovector.h: fix a typo destory -> destroy
cfc7f1a leipeng 2022-06-23 16:27:27 +0800 autovector: add missing std::move
a4ab12e leipeng 2022-06-23 16:23:50 +0800 autovector: optimize copy-cons & move-cons
8216629 leipeng 2022-06-22 22:26:16 +0800 autovector.h: pick fixes from pull request to rocksdb
58d43b3 leipeng 2022-06-22 21:20:56 +0800 MemTable::Get: mark as attribute flatten
505b5b2 leipeng 2022-06-22 21:10:51 +0800 autovector: performance improves
7ae3109 leipeng 2022-06-22 19:39:20 +0800 autovector.h: add cons with initial size
c0aad3c leipeng 2021-09-27 18:07:20 +0800 Add autovector::reserve()
In MyTopling, bulk load is relaxed to allow unsorted input; this is implemented using Topling's VecAutoSortTable.
Using VecAutoSortTable, we avoid MyRocks's MergeTree, which is very slow.
1e4fbdc leipeng 2022-10-04 12:59:37 +0800 Add TableBuilder::GetBoundaryUserKey() for sst file writer
6a85877 leipeng 2022-10-04 11:59:03 +0800 sst_file_writer.cc: auto sort assert(internal_comparator.IsBytewise())
db3cb9e leipeng 2022-09-20 11:29:31 +0800 db_iter.cc,sst_file_writer.cc,write_batch_with_index_internal.cc: ROCKSDB_FLATTEN, final, UNLIKELY
4775f2b leipeng 2022-09-19 19:27:13 +0800 sst_file_writer.cc: for TOPLINGDB_WITH_TIMESTAMP
592f75b leipeng 2022-09-19 14:01:19 +0800 sst_file_writer: AddImpl: use alloca & SetInternalKey(char* buf, ...)
9327631 leipeng 2022-07-21 15:06:55 +0800 Merge branch 'sideplugin-7.04.0-415200d7' into sideplugin-7.06.0-a0c63083
e949439 leipeng 2022-06-19 17:16:15 +0800 Add fixed_value_len, details --
aa3e822 leipeng 2022-06-17 17:21:08 +0800 SstFileWriter: adapt AutoSort TableFactory - use EstimatedFileSize
92198bb leipeng 2022-06-17 17:17:15 +0800 SstFileWriter: adapt AutoSort TableFactory
4c3f449 rockeet 2016-09-18 13:30:43 +0800 Add TableBuilderOptions::level and relevant changes (#1335)
Which coroutine lib and io_uring runtime are you using now?
Would you like to try our lib?
https://developer.aliyun.com/article/1093864
This PR is based on facebook/rocksdb#10645.
If comparator is BytewiseComparator or ReverseBytewiseComparator:
devirtualize the comparator: specialize the impl to call memcmp directly
add a prefix cache: narrow the search range with the prefix cache, then finish with the comparator
By using the key prefix cache, FindFileInRange gained a 10x+ speedup.
MergingIterator can also benefit from the key prefix cache.
Relevant commit: 8011db2 merging_iterator.cc: add key_prefix cache
Is there a Python API available?
Branch: all
OS: Ubuntu 20.04
The git pulls of the sub-repositories should use the https:// schema, not the ssh@ schema, when I run make with the Makefile.
In the toplingdb Makefile, git uses the ssh@ schema to pull repositories. On a machine without an RSA key we get the error:
Please make sure you have the correct access rights
and the repository exists.
+ cd sideplugin
+ git clone [email protected]:topling/cspp-memtable
Cloning into 'cspp-memtable'...
[email protected]: Permission denied (publickey).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
+ cd sideplugin
+ git clone [email protected]:topling/cspp-wbwi
Cloning into 'cspp-wbwi'...
[email protected]: Permission denied (publickey).
fatal: Could not read from remote repository.
After resolving the dependencies:
cd toplingdb/
make -j 20
CSPPMemTable does not need to compare the userkey to check for a switch to a different userkey, because CSPPMemTable is realized in two dimensions: it knows the userkey boundary naturally.
Relevant commits:
9980087 leipeng 2023-04-24 21:21:17 +0800 memtable.cc: SaveValue: omit load ucmp if possible - fix comment
0541a65 leipeng 2023-04-24 21:05:34 +0800 Add MemTableRep::NeedsUserKeyCompareInGet() and relavant changes
We have now removed the common prefix len in FixedLenKeyIndex.
In some cases, there are many common bytes in the middle of keys. For example, in MyTopling, secondary keys have the primary key at the end; a common prefix of the primary keys then forms the middle bytes of the secondary keys, and these bytes can be removed.
CREATE TABLE a (
id BIGINT NOT NULL PRIMARY KEY AUTO_INCREMENT,
ts DATETIME,
INDEX(ts)
);
The encoded key of secondary index ts has the form ts(DATETIME) id(BIGINT), in which id has many leading zero bytes.
This feature request is picked from facebook/rocksdb#10487.
The current Iterator::Refresh() does not support snapshots; we have no way to refresh an iterator to a specified snapshot and instead must create a new iterator, but creating a new iterator is heavy.
By refreshing the iterator to a specified snapshot, we avoid creating a new iterator and thus improve performance.
The struct MultiGetColumnFamilyData was defined in db_impl.h, and the function MultiCFSnapshot has a param iter_deref_func which can be optimized out.
This PR moves struct MultiGetColumnFamilyData into an anonymous namespace in db_impl.cc and deletes the param iter_deref_func; this change both improves performance and greatly simplifies the code.
This PR also hoists read_options.timestamp out of the loop.
656481c MultiGet: simplify and improve MultiCFSnapshot
Remove unnecessary check.
5f63712 PointLockManager::UnLockKey: use move assign, because txn id maybe string in the future
0e5c327 PointLockManager::UnLockKey: use assign intead of swap
9d8106d PointLockManager::UnLockKey: use swap instead of check
As a cloud native DB, distributed compaction needs fee charging; this involves some code changes.
In branch sideplugin-7.09.0-2022-10-27-5fef34fd, MergingIterator uses upstream RocksDB's version, because MergingIterator was greatly changed in 2022-09 to speed up DeleteRange, so it is hard to merge with ToplingDB's devirtualization and prefix cache.
Prior to devirtualization and prefix cache, ToplingDB used a union to store minHeap and maxHeap; this is also an improvement over upstream RocksDB.
This issue is a task to accommodate ToplingDB's MergingIterator: the minHeap and maxHeap union.
Now the Transaction DB uses BaseDeltaIterator, but it needs to compare the base_iter key and delta_iter key on each Next()/Prev(), which wastes CPU.
We can add the delta iter to the (heap of the) underlying MergingIterator of DBIter, thus improving performance -- the delta iter is unlikely to reach the heap top.
This feature request is picked from facebook/rocksdb#10888
RocksDB now has TableReader::ApproximateKeyAnchors for sampling key boundaries for sub-compaction.
It would be better to expose ApproximateKeyAnchors through DB for applications. For example, in MyRocks, DDL operations such as create index could use this function to partition the input data for processing with multiple threads. (InnoDB has innodb_ddl_threads for this purpose.)
I have filed a feature request for MyRocks about this feature: facebook/mysql-5.6#1245
echo 'Libs: -L${libdir} -Wl,-rpath -Wl,'$ORIGIN' -lrocksdb' >> rocksdb.pc [12/586]
echo 'Libs.private: -lterark-zbs-r -lterark-fsa-r -lterark-core-r ' >> rocksdb.pc
echo 'Cflags: -I${includedir} -march=haswell -isystem third-party/gtest-1.8.1/fused-src' >> rocksdb.pc
echo 'Requires: ' >> rocksdb.pc
install -d /root/git/topling/lib
install -d /root/git/topling/lib/pkgconfig
for header_dir in ` "include/rocksdb" -type d`; do \
install -d //usr/local/$header_dir; \
done
/usr/bin/bash: line 1: include/rocksdb: Is a directory
for header in ` "include/rocksdb" -type f -name *.h`; do \
install -C -m 644 $header //usr/local/$header; \
done
/usr/bin/bash: line 1: include/rocksdb: Is a directory
for header in ; do \
install -d //usr/local/include/rocksdb/`dirname $header`; \
install -C -m 644 $header //usr/local/include/rocksdb/$header; \
done
install -d //usr/local/include/topling
install -C -m 644 sideplugin/rockside/src/topling/json.h //usr/local/include/topling
install -C -m 644 sideplugin/rockside/src/topling/json_fwd.h //usr/local/include/topling
install -C -m 644 sideplugin/rockside/src/topling/builtin_table_factory.h //usr/local/include/topling
install -C -m 644 sideplugin/rockside/src/topling/side_plugin_repo.h //usr/local/include/topling
install -C -m 644 sideplugin/rockside/src/topling/side_plugin_factory.h //usr/local/include/topling
install -d //usr/local/include/terark
install -d //usr/local/include/terark/io
install -d //usr/local/include/terark/succinct
install -d //usr/local/include/terark/thread
install -d //usr/local/include/terark/util
install -d //usr/local/include/terark/fsa
install -d //usr/local/include/terark/fsa/ppi
install -d //usr/local/include/terark/zbs
install -C -m 644 sideplugin/topling-zip/src/terark/*.hpp //usr/local/include/terark
install -C -m 644 sideplugin/topling-zip/src/terark/io/*.hpp //usr/local/include/terark/io
install -C -m 644 sideplugin/topling-zip/src/terark/succinct/*.hpp //usr/local/include/terark/succinct
install -C -m 644 sideplugin/topling-zip/src/terark/thread/*.hpp //usr/local/include/terark/thread
install -C -m 644 sideplugin/topling-zip/src/terark/util/*.hpp //usr/local/include/terark/util
install -C -m 644 sideplugin/topling-zip/src/terark/fsa/*.hpp //usr/local/include/terark/fsa
install -C -m 644 sideplugin/topling-zip/src/terark/fsa/*.inl //usr/local/include/terark/fsa
install -C -m 644 sideplugin/topling-zip/src/terark/fsa/ppi/*.hpp //usr/local/include/terark/fsa/ppi
install -C -m 644 sideplugin/topling-zip/src/terark/zbs/*.hpp //usr/local/include/terark/zbs
cp -ar sideplugin/topling-zip/boost-include/boost //usr/local/include
Linking ... build/Linux-x86_64-g++-11.3-bmi2-1/rls/dcompact_worker.exe
g++ -Wl,-unresolved-symbols=ignore-in-shared-libs -o build/Linux-x86_64-g++-11.3-bmi2-1/rls/dcompact_worker.exe build/Linux-x86_64-g++-11.3-bmi2-1/rls/dcompact_worker.o -L../../../.. -lrock
sdb -L../../../topling-zip/build/Linux-x86_64-g++-11.3-bmi2-1/lib_shared -lterark-{zbs,fsa,core}-g++-11.3-r -lrt -lpthread
/usr/bin/ld: cannot find -lrocksdb: No such file or directory
/usr/bin/ld: cannot find -lterark-zbs-g++-11.3-r: No such file or directory
/usr/bin/ld: cannot find -lterark-fsa-g++-11.3-r: No such file or directory
/usr/bin/ld: cannot find -lterark-core-g++-11.3-r: No such file or directory
collect2: error: ld returned 1 exit status
make[1]: *** [exe-common.mk:325: build/Linux-x86_64-g++-11.3-bmi2-1/rls/dcompact_worker.exe] Error 1
make[1]: Leaving directory '/root/git/topling/toplingdb/sideplugin/topling-dcompact/tools/dcompact'