yugabyte / yugabyte-db

YugabyteDB - the cloud native distributed SQL database for mission-critical applications.

Home Page: https://www.yugabyte.com

License: Other

Python 2.25% CMake 0.31% Shell 0.82% PHP 0.01% Perl 0.82% C++ 33.10% Java 19.09% C 36.29% Yacc 0.81% Ruby 0.28% CSS 0.01% HTML 0.04% JavaScript 1.69% Emacs Lisp 0.01% Makefile 0.33% M4 0.09% PLpgSQL 3.81% Lex 0.22% XSLT 0.05% Assembly 0.01%
distributed-database database cpp high-performance cloud-native scale-out sql multi-region multi-cloud kubernetes distributed-sql distributed-sql-database

yugabyte-db's Introduction

YugabyteDB



What is YugabyteDB?

YugabyteDB is a high-performance, cloud-native, distributed SQL database that aims to support all PostgreSQL features. It is best suited for cloud-native OLTP (i.e., real-time, business-critical) applications that need absolute data correctness and require at least one of the following: scalability, high tolerance to failures, or globally-distributed deployments.

Core Features

  • Powerful RDBMS capabilities Yugabyte SQL (YSQL for short) reuses the query layer of PostgreSQL (similar to Amazon Aurora PostgreSQL), thereby supporting most of its features (data types, queries, expressions, operators and functions, stored procedures, triggers, extensions, etc.). Here is a detailed list of features currently supported by YSQL. (Example sketches follow this list.)

  • Distributed transactions The transaction design is based on the Google Spanner architecture. Strong consistency of writes is achieved by using Raft consensus for replication and cluster-wide distributed ACID transactions using hybrid logical clocks. Snapshot, serializable and read committed isolation levels are supported. Reads (queries) have strong consistency by default, but can be tuned dynamically to read from followers and read-replicas.

  • Continuous availability YugabyteDB is extremely resilient to common outages with native failover and repair. YugabyteDB can be configured to tolerate disk, node, zone, region, and cloud failures automatically. For a typical deployment where a YugabyteDB cluster is deployed in one region across multiple zones on a public cloud, the RPO is 0 (meaning no data is lost on failure) and the RTO is 3 seconds (meaning the data being served by the failed node is available in 3 seconds).

  • Horizontal scalability Scaling a YugabyteDB cluster to achieve more IOPS or data storage is as simple as adding nodes to the cluster.

  • Geo-distributed, multi-cloud YugabyteDB can be deployed in public clouds and natively inside Kubernetes. It supports deployments that span three or more fault domains, such as multi-zone, multi-region, and multi-cloud deployments. It also supports xCluster asynchronous replication with unidirectional master-slave and bidirectional multi-master configurations that can be leveraged in two-region deployments. To serve (stale) data with low latencies, read replicas are also a supported feature.

  • Multi API design The query layer of YugabyteDB is built to be extensible. Currently, YugabyteDB supports two distributed SQL APIs: Yugabyte SQL (YSQL), a fully relational API that re-uses the query layer of PostgreSQL, and Yugabyte Cloud QL (YCQL), a semi-relational, SQL-like API with document and indexing support, with roots in Apache Cassandra QL.

  • 100% open source YugabyteDB is fully open-source under the Apache 2.0 license. The open-source version has powerful enterprise features such as distributed backups, encryption of data-at-rest, in-flight TLS encryption, change data capture, read replicas, and more.
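To make the "Powerful RDBMS capabilities" and "Distributed transactions" bullets above concrete, here is a minimal, illustrative YSQL sketch. It uses only standard PostgreSQL syntax of the kind the list claims support for; the follower-read session settings at the end are an assumption based on YugabyteDB's documented tuning knobs and may vary by release.

-- Illustrative YSQL: standard PostgreSQL constructs running against YugabyteDB.
CREATE TABLE accounts (
    id      bigint PRIMARY KEY,
    owner   text NOT NULL,
    balance numeric(12, 2) NOT NULL CHECK (balance >= 0),
    details jsonb
);
CREATE INDEX accounts_owner_idx ON accounts (owner);

-- A distributed ACID transaction with serializable isolation.
BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
UPDATE accounts SET balance = balance - 100 WHERE owner = 'alice';
UPDATE accounts SET balance = balance + 100 WHERE owner = 'bob';
COMMIT;

-- Assumption: follower reads are enabled per session via settings like these;
-- verify the names and defaults against the docs for your release.
SET default_transaction_read_only = true;
SET yb_read_from_followers = true;
SELECT owner, balance FROM accounts WHERE owner = 'alice';

And a matching sketch for the "Multi API design" bullet: the same cluster also speaks YCQL, a Cassandra-compatible API. The JSONB column type is a YCQL extension; the rest is plain CQL. Again, this is illustrative only.

-- Illustrative YCQL: semi-relational, Cassandra-compatible API on the same cluster.
CREATE KEYSPACE IF NOT EXISTS app;
CREATE TABLE app.user_events (
    user_id  uuid,
    event_ts timestamp,
    payload  jsonb,   -- YCQL extends CQL with a JSONB column type
    PRIMARY KEY ((user_id), event_ts)
) WITH CLUSTERING ORDER BY (event_ts DESC);
INSERT INTO app.user_events (user_id, event_ts, payload)
VALUES (28f5c441-15a2-4b34-9302-4f9f4cdd1ef2, '2023-05-01', '{"type": "login"}');
SELECT * FROM app.user_events WHERE user_id = 28f5c441-15a2-4b34-9302-4f9f4cdd1ef2;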

Read more about YugabyteDB in our FAQ.

Get Started

Cannot find what you are looking for? Have a question? Please post your questions or comments on our Community Slack or Forum.

Build Apps

YugabyteDB supports many languages and client drivers, including Java, Go, NodeJS, Python, and more. For a complete list, including examples, see Drivers and ORMs.

What's being worked on?

This section was last updated in May 2023.

Current roadmap

Here is a list of some of the key features being worked on for upcoming releases (the YugabyteDB v2.17 preview release and the v2.16 stable release were both released in January 2023).

Feature | Status | Release Target | Progress | Comments
--- | --- | --- | --- | ---
Automatic tablet splitting enabled by default | PROGRESS | v2.18 | Track | Enables changing the number of tablets (which are splits of data) at runtime.
Upgrade to PostgreSQL v15 | PROGRESS | v2.21 | Track | For the latest features, new PostgreSQL extensions, performance, and community fixes.
Database live migration using YugabyteDB Voyager | PROGRESS | | Track | Database live migration using YugabyteDB Voyager.
Support wait-on-conflict concurrency control | PROGRESS | v2.19 | Track | Support wait-on-conflict concurrency control.
Support for transactions in async xCluster replication | PROGRESS | v2.19 | Track | Apply transactions atomically on the target cluster.
YSQL: table statistics and cost-based optimizer (CBO) | PROGRESS | v2.21 | Track | Improve YSQL query performance.
YSQL: feature support for ALTER TABLE | PROGRESS | v2.21 | Track | Support for various ALTER TABLE variants.
Support for GiST indexes | PLANNING | | Track | Support for GiST (Generalized Search Tree) based indexes.
Connection management | PROGRESS | | Track | Server-side connection management.

Recently released features

Feature | Status | Release Target | Docs / Enhancements | Comments
--- | --- | --- | --- | ---
Faster bulk data loading in YugabyteDB | DONE | v2.15 | Track | Faster bulk data loading in YugabyteDB.
Change data capture | DONE | v2.13 | | Change data capture (CDC) allows multiple downstream apps and services to consume the continuous stream of changes to Yugabyte databases.
Support for materialized views | DONE | v2.13 | Docs | A materialized view is a pre-computed data set derived from a query specification and stored for later use.
Geo-partitioning support for the transaction status table | DONE | v2.13 | Docs | Instead of central, remote transaction execution metadata, the transaction status table is now optimized for access from different regions. Because the transaction metadata is also geo-partitioned, this eliminates the round trip to remote regions when updating transaction statuses.
Transparently restart transactions | DONE | v2.13 | | Decrease the incidence of transaction restart errors seen in various scenarios.
Row-level geo-partitioning | DONE | v2.13 | Docs | Row-level geo-partitioning allows fine-grained control over pinning data in a user table (at a per-row level) to geographic locations, allowing data residency to be managed at the table-row level.
YSQL: support GIN indexes | DONE | v2.11 | Docs | Support for generalized inverted indexes for container data types like jsonb, tsvector, and array (see the sketch after this table).
YSQL: collation support | DONE | v2.11 | Docs | Allows specifying the sort order and character classification behavior of data per column, or even per operation, according to language- and country-specific rules.
YSQL: savepoint support | DONE | v2.11 | Docs | Useful for implementing complex error recovery in multi-statement transactions.
xCluster replication management through Platform | DONE | v2.11 | Docs |
Spring Data YugabyteDB module | DONE | v2.9 | Track | Bridges the gap for learning distributed SQL concepts with the familiarity and ease of the Spring Data APIs.
Support Liquibase, Flyway, ORM schema migrations | DONE | v2.9 | Docs |
Support ALTER TABLE add primary key | DONE | v2.9 | Track |
YCQL: LDAP support | DONE | v2.8 | Docs | Support LDAP authentication in the YCQL API.
Platform alerting and notification | DONE | v2.8 | Docs | User-defined alert policies notify you in real time when a performance metric rises above or falls below a threshold you set.
Platform API | DONE | v2.8 | Docs | Securely deploy YugabyteDB clusters using infrastructure-as-code.
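For the YSQL rows above (materialized views, GIN indexes, collation, savepoints), here is a small illustrative sketch. Everything is standard PostgreSQL syntax except the ybgin access method, which is assumed to be YugabyteDB's GIN implementation; the exact set of supported operator classes and collations depends on the release and build.

-- Materialized view (v2.13 row above).
CREATE TABLE docs (id bigint PRIMARY KEY, owner text, tags text[]);
CREATE MATERIALIZED VIEW docs_per_owner AS
    SELECT owner, count(*) AS n FROM docs GROUP BY owner;
REFRESH MATERIALIZED VIEW docs_per_owner;

-- GIN index on a container type (v2.11 row above). "ybgin" is assumed to be
-- YugabyteDB's GIN access method; vanilla PostgreSQL would use "gin" here.
CREATE INDEX docs_tags_idx ON docs USING ybgin (tags);
SELECT id FROM docs WHERE tags @> ARRAY['urgent'];

-- Per-column collation and a savepoint inside a transaction (v2.11 rows above).
-- COLLATE "C" is always available; language-specific (e.g. ICU) collations
-- depend on how the server was built.
CREATE TABLE people (id bigint PRIMARY KEY, name text COLLATE "C");
BEGIN;
INSERT INTO people VALUES (1, 'Ada');
SAVEPOINT before_second_insert;
INSERT INTO people VALUES (2, 'Grace');
ROLLBACK TO SAVEPOINT before_second_insert;
COMMIT;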

Architecture

YugabyteDB Architecture

Review detailed architecture in our Docs.

Need Help?

Contribute

As an open-source project with a strong focus on the user community, we welcome contributions as GitHub pull requests. See our Contributor Guides to get going. Discussions and RFCs for features happen on the design discussions section of our Forum.

License

Source code in this repository is variously licensed under the Apache License 2.0 and the Polyform Free Trial License 1.0.0. A copy of each license can be found in the licenses directory.

The build produces two sets of binaries:

  • The entire database, with all its features (including the enterprise ones), is licensed under the Apache License 2.0.
  • The binaries that contain -managed in the artifact name and help run a managed service are licensed under the Polyform Free Trial License 1.0.0.

By default, the build options generate only the Apache License 2.0 binaries.

Read More

yugabyte-db's People

Contributors

aishwarya24, amitanandaiyer, anmalysh-yb, bbaddepudi, bmatican, d-uspenskiy, ddhodge, ddorian, haikarthikssk, hari90, hectorgcr, isignal, jaki, jethro-m, lizayugabyte, mbautin, mchiddy, nkhogen, olegloginov, rajmaddy89, ramkumarvs, rkarthik007, robertpang, spolitov, stevebang, tedyu, ttyusupov, vars-07, vipul-yb, wesleyw


yugabyte-db's Issues

Replication factor defaults to 3

Trying to create a keyspace with replication factor=1 seems to change the value to 3:
cqlsh> CREATE KEYSPACE myapp WITH replication = {'class': 'SimpleStrategy', 'replication_factor': '1'} AND durable_writes = false;

Then Describe gives:
cqlsh> describe myapp; CREATE KEYSPACE myapp WITH replication = {'class': 'SimpleStrategy', 'replication_factor': '3'} AND durable_writes = true;

Other CQL commands also seem to default to 3, e.g. on a cluster with rf=1, creating a table gives the error:
Not enough live tablet servers to create table with the requested replication factor 3. 1 tablet servers are alive.

program terminates with sig 6 in yb::tablet::MvccManager::DoGetSafeTime

"$debugdir:$datadir/auto-load:/usr/bin/mono-gdb.py".

warning: Unable to find libthread_db matching inferior's thread library, thread debugging will not be available.
Core was generated by `/home/yugabyte/tserver/bin/../lib/unwrapped/yb-tserver --flagfile /home/yugabyt'.
Program terminated with signal 6, Aborted.
#0  0x00007fefd273a198 in raise () from /home/yugabyte/tserver/bin/../lib/linuxbrew/libc.so.6
(gdb) bt
#0  0x00007fefd273a198 in raise () from /home/yugabyte/tserver/bin/../lib/linuxbrew/libc.so.6
#1  0x00007fefd273b5ea in abort () from /home/yugabyte/tserver/bin/../lib/linuxbrew/libc.so.6
#2  0x00007fefd67b1afd in yb::(anonymous namespace)::DumpStackTraceAndExit () at ../../../../../src/yb/util/logging.cc:159
#3  0x00007fefd577e04d in google::LogMessage::Fail () at src/logging.cc:1478
#4  0x00007fefd577ff7d in google::LogMessage::SendToLog (this=<optimized out>) at src/logging.cc:1432
#5  0x00007fefd577dbaa in google::LogMessage::Flush (this=this@entry=0x7feac7b974e0) at src/logging.cc:1301
#6  0x00007fefd5780a3f in google::LogMessageFatal::~LogMessageFatal (this=0x7feac7b974e0, __in_chrg=<optimized out>) at src/logging.cc:2013
#7  0x00007fefdc3b4c3c in yb::tablet::MvccManager::DoGetSafeTime (this=this@entry=0xb7f2980, min_allowed=..., deadline=..., max_allowed=...,
    lock=lock@entry=0x7feac7b975a0) at ../../../../../src/yb/tablet/mvcc.cc:196
#8  0x00007fefdc3b5c7f in yb::tablet::MvccManager::UpdatePropagatedSafeTime (this=0xb7f2980, max_allowed=...)
    at ../../../../../src/yb/tablet/mvcc.cc:130
#9  0x00007fefdc37d570 in operator() (__closure=0x56032230) at ../../../../../src/yb/tablet/tablet_peer.cc:236
#10 std::_Function_handler<void(), yb::tablet::TabletPeer::InitTabletPeer(const std::shared_ptr<yb::tablet::enterprise::Tablet>&, const std::shared_future<std::shared_ptr<yb::client::YBClient> >&, const scoped_refptr<yb::server::Clock>&, const std::shared_ptr<yb::rpc::Messenger>&, const scoped_refptr<yb::log::Log>&, const scoped_refptr<yb::MetricEntity>&, yb::ThreadPool*, yb::ThreadPool*)::<lambda()> >::_M_invoke(const std::_Any_data &) (__functor=...) at /home/centos/.linuxbrew-yb-build/Cellar/gcc/5.3.0/include/c++/5.3.0/functional:1871
#11 0x00007fefdae4700d in operator() (this=0x56032230) at /home/centos/.linuxbrew-yb-build/Cellar/gcc/5.3.0/include/c++/5.3.0/functional:2271
#12 yb::consensus::RaftConsensus::UpdateMajorityReplicated (this=0x56032000, majority_replicated_data=..., committed_index=0x7feac7b977c0)
    at ../../../../../src/yb/consensus/raft_consensus.cc:1006
#13 0x00007fefdae29c3e in yb::consensus::PeerMessageQueue::NotifyObserversOfMajorityReplOpChangeTask (this=0x108e4900,
    majority_replicated_data=...) at ../../../../../src/yb/consensus/consensus_queue.cc:1017
#14 0x00007fefd6818334 in operator() (this=<optimized out>)
    at /home/centos/.linuxbrew-yb-build/Cellar/gcc/5.3.0/include/c++/5.3.0/functional:2271
#15 Run (this=<optimized out>) at ../../../../../src/yb/util/threadpool.cc:68
#16 yb::ThreadPool::DispatchThread (this=0x1fc4400, permanent=false) at ../../../../../src/yb/util/threadpool.cc:615
#17 0x00007fefd6814b46 in operator() (this=0x2ddfeb88) at /home/centos/.linuxbrew-yb-build/Cellar/gcc/5.3.0/include/c++/5.3.0/functional:2271
#18 yb::Thread::SuperviseThread (arg=<optimized out>) at ../../../../../src/yb/util/thread.cc:602
#19 0x00007fefd2aad314 in start_thread () from /home/yugabyte/tserver/bin/../lib/linuxbrew/libpthread.so.0
#20 0x00007fefd27eebed in clone () from /home/yugabyte/tserver/bin/../lib/linuxbrew/libc.so.6
(gdb)

Preferred client ?

Seems like you guys will support Redis + Cassandra + PostgreSQL.
While this is nice, do you have in mind a single client library to handle all the features of the db (not talking about a separate one per API)?

Simplest case:
While a pg client can support ~all features (since you throw everything into a query), it can't (automatically) remove the extra hop the way a cql client does by keeping a hash-token map on the client and thus talking to the primary/secondary directly.

Makes sense?

encountered an intermittent signal 6 during rolling restart

Core was generated by `/home/yugabyte/tserver/bin/../lib/unwrapped/yb-tserver --flagfile /home/yugabyt'.
Program terminated with signal 6, Aborted.
#0  0x00007efc461f3067 in __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:54
54      ../sysdeps/unix/sysv/linux/raise.c: No such file or directory.
(gdb) where
#0  0x00007efc461f3067 in __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:54
#1  0x00007efc461f446a in __GI_abort () at abort.c:89
#2  0x00007efc461ec1b6 in __assert_fail_base (fmt=0x7efc46325118 "%s%s%s:%u: %s%sAssertion `%s' failed.\n%n",
    assertion=assertion@entry=0x7efc4b7b5030 "(\"libev: internal timer heap corruption\", ANHE_w (timers [active]) == (WT)w)", file=file@entry=0x7efc4b7b48e6 "ev.c", line=line@entry=3776,
    function=function@entry=0x7efc4b7b5390 <__PRETTY_FUNCTION__.6920> "ev_timer_stop") at assert.c:92
#3  0x00007efc461ec262 in __GI___assert_fail (assertion=0x7efc4b7b5030 "(\"libev: internal timer heap corruption\", ANHE_w (timers [active]) == (WT)w)", file=0x7efc4b7b48e6 "ev.c", line=3776,
    function=0x7efc4b7b5390 <__PRETTY_FUNCTION__.6920> "ev_timer_stop") at assert.c:101
#4  0x00007efc4b7b0e86 in ev_timer_stop (loop=0x21ba400, w=0x2cdb6d520) at ev.c:3776
#5  0x00007efc4bc24f3a in stop (this=0x2cdb6d520) at /n/jenkins/thirdparty/yugabyte-thirdparty-2018-01-09T08_37_15/thirdparty/installed/common/include/ev++.h:638
#6  ~timer (this=0x2cdb6d520, __in_chrg=<optimized out>) at /n/jenkins/thirdparty/yugabyte-thirdparty-2018-01-09T08_37_15/thirdparty/installed/common/include/ev++.h:638
#7  yb::rpc::Connection::~Connection (this=0x2cdb6d410, __in_chrg=<optimized out>) at ../../../../../src/yb/rpc/connection.cc:89
#8  0x00007efc4bc2ad62 in _M_release (this=0x2cdb6d400) at /n/jenkins/linuxbrew/linuxbrew_2018-01-09T08_28_02/Cellar/gcc/5.5.0/include/c++/5.5.0/bits/shared_ptr_base.h:150
#9  ~__shared_count (this=0x427c6080, __in_chrg=<optimized out>) at /n/jenkins/linuxbrew/linuxbrew_2018-01-09T08_28_02/Cellar/gcc/5.5.0/include/c++/5.5.0/bits/shared_ptr_base.h:659
#10 ~__shared_ptr (this=0x427c6078, __in_chrg=<optimized out>) at /n/jenkins/linuxbrew/linuxbrew_2018-01-09T08_28_02/Cellar/gcc/5.5.0/include/c++/5.5.0/bits/shared_ptr_base.h:925
#11 ~shared_ptr (this=0x427c6078, __in_chrg=<optimized out>) at /n/jenkins/linuxbrew/linuxbrew_2018-01-09T08_28_02/Cellar/gcc/5.5.0/include/c++/5.5.0/bits/shared_ptr.h:93
#12 yb::rpc::InboundCall::~InboundCall (this=0x427c6010, __in_chrg=<optimized out>) at ../../../../../src/yb/rpc/inbound_call.cc:90
#13 0x00007efc51eec3e6 in ~QueueableInboundCall (this=0x427c6010, __in_chrg=<optimized out>) at ../../../../../../../src/yb/rpc/rpc_with_queue.h:29
#14 yb::redisserver::RedisInboundCall::~RedisInboundCall (this=0x427c6010, __in_chrg=<optimized out>) at ../../../../../../../src/yb/yql/redis/redisserver/redis_rpc.cc:130
#15 0x00000000004113a6 in std::_Sp_counted_base<(__gnu_cxx::_Lock_policy)2>::_M_release (this=0x427c6000)
    at /n/jenkins/linuxbrew/linuxbrew_2018-01-09T08_28_02/Cellar/gcc/5.5.0/include/c++/5.5.0/bits/shared_ptr_base.h:150
#16 0x00007efc51efe523 in ~__shared_count (this=0x138a3588, __in_chrg=<optimized out>) at /n/jenkins/linuxbrew/linuxbrew_2018-01-09T08_28_02/Cellar/gcc/5.5.0/include/c++/5.5.0/bits/shared_ptr_base.h:659
#17 ~__shared_ptr (this=0x138a3580, __in_chrg=<optimized out>) at /n/jenkins/linuxbrew/linuxbrew_2018-01-09T08_28_02/Cellar/gcc/5.5.0/include/c++/5.5.0/bits/shared_ptr_base.h:925
#18 ~shared_ptr (this=0x138a3580, __in_chrg=<optimized out>) at /n/jenkins/linuxbrew/linuxbrew_2018-01-09T08_28_02/Cellar/gcc/5.5.0/include/c++/5.5.0/bits/shared_ptr.h:93
#19 ~BatchContext (this=0x138a3560, __in_chrg=<optimized out>) at ../../../../../../../src/yb/yql/redis/redisserver/redis_service.cc:531
#20 DeleteInternal (x=0x138a3560) at ../../../../../../../src/yb/gutil/ref_counted.h:166
#21 Destruct (x=0x138a3560) at ../../../../../../../src/yb/gutil/ref_counted.h:129
#22 yb::RefCountedThreadSafe<yb::redisserver::(anonymous namespace)::BatchContext, yb::DefaultRefCountedThreadSafeTraits<yb::redisserver::(anonymous namespace)::BatchContext> >::Release (
    this=this@entry=0x138a3560) at ../../../../../../../src/yb/gutil/ref_counted.h:157
#23 0x00007efc51f03964 in ~scoped_refptr (this=<synthetic pointer>, __in_chrg=<optimized out>) at ../../../../../../../src/yb/gutil/ref_counted.h:280
#24 operator() (status=..., this=<optimized out>) at ../../../../../../../src/yb/yql/redis/redisserver/redis_service.cc:379
#25 boost::detail::function::void_function_obj_invoker1<yb::redisserver::(anonymous namespace)::Block::BlockCallback, void, yb::Status const&>::invoke (function_obj_ptr=..., a0=...)
    at /n/jenkins/linuxbrew/linuxbrew_2018-01-09T08_28_02/include/boost/function/function_template.hpp:159
#26 0x00007efc4f903796 in operator() (a0=..., this=0x33094d638) at /n/jenkins/linuxbrew/linuxbrew_2018-01-09T08_28_02/include/boost/function/function_template.hpp:760
#27 yb::client::internal::Batcher::CheckForFinishedFlush (this=0x33094d600) at ../../../../../src/yb/client/batcher.cc:192
#28 0x00007efc4f8fa642 in yb::client::internal::AsyncRpc::Finished (this=0xd054790, status=...) at ../../../../../src/yb/client/async_rpc.cc:136
#29 0x00007efc4bc376bf in operator() (this=0x2f541b910) at /n/jenkins/linuxbrew/linuxbrew_2018-01-09T08_28_02/Cellar/gcc/5.5.0/include/c++/5.5.0/functional:2267
#30 yb::rpc::OutboundCall::CallCallback (this=this@entry=0x2f541b890) at ../../../../../src/yb/rpc/outbound_call.cc:327
#31 0x00007efc4bc386b5 in yb::rpc::OutboundCall::SetResponse(yb::rpc::CallResponse&&) (this=this@entry=0x2f541b890,
    resp=resp@entry=<unknown type in /home/yugabyte/tserver/bin/../lib/yb/libyrpc.so, CU 0x2aaf2f, DIE 0x349f1d>) at ../../../../../src/yb/rpc/outbound_call.cc:366
#32 0x00007efc4bc2788c in yb::rpc::Connection::HandleCallResponse (this=0x5bc26a10, call_data=...) at ../../../../../src/yb/rpc/connection.cc:409
#33 0x00007efc4bc6b50e in yb::rpc::YBConnectionContext::HandleCall (this=this@entry=0x56b3e870, connection=..., call_data=...) at ../../../../../src/yb/rpc/yb_rpc.cc:166
#34 0x00007efc4bc6b83e in yb::rpc::YBConnectionContext::ProcessCalls (this=0x56b3e870, connection=..., slice=..., consumed=0x7efc0e8b06b0) at ../../../../../src/yb/rpc/yb_rpc.cc:142
#35 0x00007efc4bc2261c in yb::rpc::Connection::TryProcessCalls (this=this@entry=0x5bc26a10) at ../../../../../src/yb/rpc/connection.cc:378
#36 0x00007efc4bc22906 in yb::rpc::Connection::ReadHandler (this=this@entry=0x5bc26a10) at ../../../../../src/yb/rpc/connection.cc:330
#37 0x00007efc4bc23b6d in yb::rpc::Connection::Handler (this=0x5bc26a10, watcher=..., revents=1) at ../../../../../src/yb/rpc/connection.cc:286
#38 0x00007efc4b7af717 in ev_invoke_pending (loop=0x371ff80) at ev.c:3155
#39 0x00007efc4b7b061e in ev_run (loop=0x371ff80, flags=0) at ev.c:3555
#40 0x00007efc4bc449a9 in run (flags=0, this=0x21eb4e0) at /n/jenkins/thirdparty/yugabyte-thirdparty-2018-01-09T08_37_15/thirdparty/installed/common/include/ev++.h:211
#41 yb::rpc::Reactor::RunThread (this=0x21eb4a0) at ../../../../../src/yb/rpc/reactor.cc:424
#42 0x00007efc4a3005c6 in operator() (this=0x2e341c8) at /n/jenkins/linuxbrew/linuxbrew_2018-01-09T08_28_02/Cellar/gcc/5.5.0/include/c++/5.5.0/functional:2267
#43 yb::Thread::SuperviseThread (arg=<optimized out>) at ../../../../../src/yb/util/thread.cc:602
#44 0x00007efc46564694 in start_thread (arg=0x7efc0e8b1700) at pthread_create.c:333
#45 0x00007efc462a63cd in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:109

Unable to connect to docker cluster in tutorial

As per the documentation, I should be able to use localhost:7000 and localhost:9000 to connect to the admin UI, but this doesn't seem to work. From the startup commands I do not see where any ports are "exposed".

docker run --name yb-master-n1 --privileged --net yb-net --detach yugabytedb/yugabyte:latest /home/yugabyte/bin/yb-master --fs_data_dirs=/mnt/disk0,/mnt/disk1 --master_addresses=yb-master-n1:7100,yb-master-n2:7100,yb-master-n3:7100 --rpc_bind_addresses=yb-master-n1:7100
Adding node yb-master-n1
docker run --name yb-master-n2 --privileged --net yb-net --detach yugabytedb/yugabyte:latest /home/yugabyte/bin/yb-master --fs_data_dirs=/mnt/disk0,/mnt/disk1 --master_addresses=yb-master-n1:7100,yb-master-n2:7100,yb-master-n3:7100 --rpc_bind_addresses=yb-master-n2:7100
Adding node yb-master-n2
docker run --name yb-master-n3 --privileged --net yb-net --detach yugabytedb/yugabyte:latest /home/yugabyte/bin/yb-master --fs_data_dirs=/mnt/disk0,/mnt/disk1 --master_addresses=yb-master-n1:7100,yb-master-n2:7100,yb-master-n3:7100 --rpc_bind_addresses=yb-master-n3:7100
Adding node yb-master-n3
docker run --name yb-tserver-n1 --privileged --net yb-net --detach yugabytedb/yugabyte:latest /home/yugabyte/bin/yb-tserver --fs_data_dirs=/mnt/disk0,/mnt/disk1 --tserver_master_addrs=yb-master-n1:7100,yb-master-n2:7100,yb-master-n3:7100 --rpc_bind_addresses=yb-tserver-n1:9100
Adding node yb-tserver-n1
docker run --name yb-tserver-n2 --privileged --net yb-net --detach yugabytedb/yugabyte:latest /home/yugabyte/bin/yb-tserver --fs_data_dirs=/mnt/disk0,/mnt/disk1 --tserver_master_addrs=yb-master-n1:7100,yb-master-n2:7100,yb-master-n3:7100 --rpc_bind_addresses=yb-tserver-n2:9100
Adding node yb-tserver-n2
docker run --name yb-tserver-n3 --privileged --net yb-net --detach yugabytedb/yugabyte:latest /home/yugabyte/bin/yb-tserver --fs_data_dirs=/mnt/disk0,/mnt/disk1 --tserver_master_addrs=yb-master-n1:7100,yb-master-n2:7100,yb-master-n3:7100 --rpc_bind_addresses=yb-tserver-n3:9100
Adding node yb-tserver-n3

In addition I'm trying to start a C* application in docker connected to the same network and I'm not able to connect to any of the nodes. Do you by chance have the cassandra ports only listening on localhost?

Observing high contention in yb::server::HybridClock::Now()

Grabbed the following from http://{hostname}:9000/pprof/contention
In the same workload as #47, seeing a high number of CPU cycles spent in HybridClock::Now().
Looks like there might be a software bottleneck that could be optimized.

-----------

3515904	323 @ 00007f57a6271e60 00007f57a984612d 00007f57abe24671 00007f57abe25df5 00007f57abdc8a3f 00007f57ac0e0df0 00007f57ab166dc7 00007f57a7bf1d67 00007f57a7bf41b9 00007f57a6284b46 00007f57a251d314 00007f57a225ebed 0000000000000000
    @     0x7f57a6271e5f  SubmitSpinLockProfileData (yb/util/spinlock_profiling.cc:238)
    @     0x7f57a984612c  base::SpinLock::Unlock() (yb/gutil/spinlock.h:120)
    @     0x7f57a984612c  yb::simple_spinlock::unlock() (yb/util/locks.h:65)
    @     0x7f57a984612c  std::lock_guard<yb::simple_spinlock>::~lock_guard() (/home/centos/.linuxbrew-yb-build/Cellar/gcc/5.3.0/include/c++/5.3.0/mutex:383)
    @     0x7f57a984612c  yb::server::HybridClock::Now() (yb/server/hybrid_clock.cc:218)
    @     0x7f57abe24670  operator() (yb/tablet/mvcc.cc:173)
    @     0x7f57abe24670  wait<yb::tablet::MvccManager::DoGetSafeTime(yb::HybridTime, yb::MonoTime, yb::HybridTime, std::unique_lock<std::mutex>*) const::<lambda()> > (/home/centos/.linuxbrew-yb-build/Cellar/gcc/5.3.0/include/c++/5.3.0/condition_variable:97)
    @     0x7f57abe24670  yb::tablet::MvccManager::DoGetSafeTime(yb::HybridTime, yb::MonoTime, yb::HybridTime, std::unique_lock<std::mutex>*) const (yb/tablet/mvcc.cc:190)
    @     0x7f57abe25df4  yb::tablet::MvccManager::SafeTime(yb::HybridTime, yb::MonoTime, yb::HybridTime) const (yb/tablet/mvcc.cc:161)
    @     0x7f57abdc8a3e  yb::tablet::Tablet::DoGetSafeTime(yb::StronglyTypedBool<yb::tablet::RequireLease_Tag>, yb::HybridTime, yb::MonoTime) const (yb/tablet/tablet.cc:1209)
    @     0x7f57ac0e0def  yb::tablet::AbstractTablet::SafeTime(yb::StronglyTypedBool<yb::tablet::RequireLease_Tag>, yb::HybridTime, yb::MonoTime) const (yb/tablet/abstract_tablet.h:72)
    @     0x7f57ac0e0def  yb::tserver::TabletServiceImpl::Read(yb::tserver::ReadRequestPB const*, yb::tserver::ReadResponsePB*, yb::rpc::RpcContext) (yb/tserver/tablet_service.cc:813)
    @     0x7f57ab166dc6  yb::tserver::TabletServerServiceIf::Handle(std::shared_ptr<yb::rpc::InboundCall>) (yb/tserver/tserver_service.service.cc:135)
    @     0x7f57a7bf1d66  yb::rpc::ServicePoolImpl::Handle(std::shared_ptr<yb::rpc::InboundCall>) (yb/rpc/service_pool.cc:191)
    @     0x7f57a7bf1d66  Run (yb/rpc/service_pool.cc:206)
    @     0x7f57a7bf1d66  Run (yb/rpc/tasks_pool.h:70)
    @     0x7f57a7bf41b8  Execute (yb/rpc/thread_pool.cc:98)
    @     0x7f57a6284b45  std::function<void ()>::operator()() const (/home/centos/.linuxbrew-yb-build/Cellar/gcc/5.3.0/include/c++/5.3.0/functional:2271)
    @     0x7f57a6284b45  yb::Thread::SuperviseThread(void*) (yb/util/thread.cc:602)

Full output here:
contention.txt

Support local indexes

Implement local indexes.

Motivation

Provide better performance for queries with WHERE conditions that fully specify hash keys.

Requirements

A local secondary index is a distributed index. It has the following characteristics:

  • It is split and co-located with the data in the individual partitions
  • Index update is a local transaction (not distributed across nodes) and is very efficient.
  • Index lookup is a local operation (not a distributed txn) and is also very efficient
  • Designed for use as an index of the data within a partition (i.e., a local index). Thus, queries are expected to specify the entire hash partition key(s) to take advantage of this index.
  • Queries using the index alone (without specifying the hash partition keys) become a full-table (cluster) scan

Initial plan is to implement this as:

  • Single indexed column only.
  • Can index on keys/elements of a set/list/map

Syntax

Auto-inferred when the index is created using the same hash partition keys as the table.

CREATE INDEX [IF NOT EXISTS] index_name
    ON [keyspace_name.]table_name ( (index_hash_column, ...), index_cluster_column, ... )
    [WITH CLUSTERING ORDER BY (index_cluster_column { ASC | DESC }, ...)]
    [COVERING (covered_column, ...)]
i((hash_columns (from original)), 

Note: we might not want/need to have covering columns, but I guess it can't hurt (I am currently not planning on using them in any way, though it shouldn't be too hard to make use of them: just have the semantic analyzer understand when the index is enough to answer the query).
Note: as the local index already uses the hash keys from the original table as hash keys, we might want not to allow any additional hash keys, but allowing it should not break anything.

Allow additional (a concrete sketch follows this list):

  • t(h1, h2, r1, r2, c1, c2)
  • i((h1, h2), c1, r1, r2)
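To make the proposal concrete, here is an illustrative sketch of the syntax above for a specific table. This only illustrates the local-index semantics proposed in this issue; it is not a claim about what the current release accepts.

-- Base table: hash partition key (h1, h2), clustering columns r1, r2.
CREATE TABLE ks.t (
    h1 int, h2 int,
    r1 int, r2 int,
    c1 int, c2 int,
    PRIMARY KEY ((h1, h2), r1, r2)
);

-- Proposed local index: it reuses the table's hash partition keys (h1, h2),
-- so it would be auto-inferred as local, co-located with each partition,
-- and index c1 within that partition.
CREATE INDEX i ON ks.t ((h1, h2), c1, r1, r2);

-- A query that fully specifies the hash key can use the local index:
SELECT * FROM ks.t WHERE h1 = 1 AND h2 = 2 AND c1 = 10;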

Enable tservers to look up master addresses from Kubernetes master service cname

Currently we specify the actual pod names for the tserver_master_addrs parameter in tservers:
--tserver_master_addrs=master-0.yb-masters.default.svc.cluster.local:7100,master-1.yb-masters.default.svc.cluster.local:7100,master-2.yb-masters.default.svc.cluster.local:7100

We should let this be just the cname of the master service:
yb-masters.default.svc.cluster.local

Support ALTER KEYSPACE in CQL

Using express-cassandra to connect to our Cassandra service for a new sample app. However, I faced this error.

$ node server.js 
WARN: KEYSPACE ALTERED! Run the `nodetool repair` command on each affected node.
Unhandled rejection Error: SQL error (yb/yql/cql/ql/ptree/process_context.cc:52): Invalid SQL Statement. syntax error, unexpected KEYSPACE, expecting LANGUAGE
ALTER KEYSPACE "mykeyspace" WITH REPLICATION = {'class': 'SimpleStrategy','replication_factor': '1'};
      ^^^^^^^^
 (error -11)
    at ResponseError.DriverError (/Users/schoudhury/dev/express-cassandra/node_modules/cassandra-driver/lib/errors.js:14:19)
    at new ResponseError (/Users/schoudhury/dev/express-cassandra/node_modules/cassandra-driver/lib/errors.js:51:24)
    at FrameReader.readError (/Users/schoudhury/dev/express-cassandra/node_modules/cassandra-driver/lib/readers.js:317:13)
    at Parser.parseBody (/Users/schoudhury/dev/express-cassandra/node_modules/cassandra-driver/lib/streams.js:194:66)
    at Parser._transform (/Users/schoudhury/dev/express-cassandra/node_modules/cassandra-driver/lib/streams.js:137:10)
    at Parser.Transform._read (_stream_transform.js:186:10)
    at Parser.Transform._write (_stream_transform.js:174:12)
    at doWrite (_stream_writable.js:387:12)
    at writeOrBuffer (_stream_writable.js:373:5)
    at Parser.Writable.write (_stream_writable.js:290:11)
    at Protocol.ondata (_stream_readable.js:639:20)
    at emitOne (events.js:116:13)
    at Protocol.emit (events.js:211:7)
    at addChunk (_stream_readable.js:263:12)
    at readableAddChunk (_stream_readable.js:250:11)
    at Protocol.Readable.push (_stream_readable.js:208:10)

CQL Delete with just the partition key is not supported

Here is my table schema:

CREATE TABLE kairosdb.row_keys ( metric text, row_time timestamp, data_type text, tags frozen<map<text, text>>, value text, PRIMARY KEY ((metric, row_time), data_type, tags) )

Here is the delete query:

DELETE FROM row_keys WHERE metric = ? AND row_time = ?

Here is the error I get:

Error in custom provider, com.datastax.driver.core.exceptions.SyntaxError: SQL error (yb/ql/ptree/process_context.cc:52): Invalid CQL Statement. Missing condition on key columns in WHERE clause
DELETE FROM row_keys WHERE metric = ? AND row_time = ?

I should think this would be a supported feature.
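For reference, under the current restriction a DELETE is accepted when the WHERE clause covers the full primary key; the partition-key-only form is what this issue asks for. A sketch against the schema above:

-- Accepted today: every primary key column is specified, so exactly one row
-- is addressed.
DELETE FROM row_keys WHERE metric = ? AND row_time = ? AND data_type = ? AND tags = ?;

-- Requested in this issue: delete the whole partition using only the
-- partition key columns (metric, row_time).
DELETE FROM row_keys WHERE metric = ? AND row_time = ?;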

Seeing occasional high latency when running redis pipeline workload

Running some load against yugabyte primarily consisting of hmset/hmget operations.
Occasionally observing higher latencies. Found some occurrences of the following stack in the yb-tserver.INFO log file.

W0208 21:55:19.366822 46558 kernel_stack_watchdog.cc:139] Thread 46772 stuck at ../../../../../src/yb/rpc/outbound_call.cc:326 for 104ms:
Kernel stack:
[<ffffffff810f5464>] futex_wait_queue_me+0xc4/0x120
[<ffffffff810f5fd9>] futex_wait+0x179/0x280
[<ffffffff810f80de>] do_futex+0xfe/0x5b0
[<ffffffff810f8610>] SyS_futex+0x80/0x180
[<ffffffff81697749>] system_call_fastpath+0x16/0x1b
[<ffffffffffffffff>] 0xffffffffffffffff

User stack:

    @     0x7f57ab5bbe43  sys_futex (./src/base/linux_syscall_support.h:2097)
    @     0x7f57ab5bbe43  base::internal::SpinLockDelay(int volatile*, int, int) (./src/base/spinlock_linux-inl.h:88)
    @     0x7f57ab5bbd13  SpinLock::SlowLock() (src/base/spinlock.cc:133)
    @     0x7f57a5cc1156  SpinLock::Lock() (src/base/spinlock.h:71)
    @     0x7f57a5cc1156  SpinLockHolder::SpinLockHolder(SpinLock*) (src/base/spinlock.h:136)
    @     0x7f57a5cc1156  tcmalloc::ThreadCache::IncreaseCacheLimit() (src/thread_cache.cc:276)
    @     0x7f57a5cd2e11  tcmalloc::ThreadCache::Deallocate(void*, unsigned long) (src/thread_cache.h:392)
    @     0x7f57a5cd2e11  do_free_helper (src/tcmalloc.cc:1195)
    @     0x7f57a5cd2e11  do_free_with_callback (src/tcmalloc.cc:1228)
    @     0x7f57a5cd2e11  do_free (src/tcmalloc.cc:1234)
    @     0x7f57a5cd2e11  tc_deletearray (src/tcmalloc.cc:1665)
    @     0x7f57a6bd9f77  google::protobuf::internal::ArenaStringPtr::DestroyNoArena(std::string const*) (thirdparty/installed/uninstrumented/include/google/protobuf/arenastring.h:264)
    @     0x7f57a6bd9f77  yb::RedisKeyValueSubKeyPB::clear_subkey() (yb/common/redis_protocol.pb.cc:4357)
    @     0x7f57a6bda00e  yb::RedisKeyValueSubKeyPB::~RedisKeyValueSubKeyPB() (yb/common/redis_protocol.pb.cc:4321)
    @     0x7f57a6bda444  yb::RedisKeyValueSubKeyPB::~RedisKeyValueSubKeyPB() (yb/common/redis_protocol.pb.cc:4322)
    @     0x7f57a6bda444  google::protobuf::internal::GenericTypeHandler<yb::RedisKeyValueSubKeyPB>::Delete(yb::RedisKeyValueSubKeyPB*, google::protobuf::Arena*) (thirdparty/installed/uninstrumented/include/google/protobuf/repeated_field.h:623)
    @     0x7f57a6bda444  void google::protobuf::internal::RepeatedPtrFieldBase::Destroy<google::protobuf::RepeatedPtrField<yb::RedisKeyValueSubKeyPB>::TypeHandler>() (thirdparty/installed/uninstrumented/include/google/protobuf/repeated_field.h:1473)
    @     0x7f57a6bda444  google::protobuf::RepeatedPtrField<yb::RedisKeyValueSubKeyPB>::~RepeatedPtrField() (thirdparty/installed/uninstrumented/include/google/protobuf/repeated_field.h:1934)
    @     0x7f57a6bda444  yb::RedisKeyValuePB::~RedisKeyValuePB() (yb/common/redis_protocol.pb.cc:4680)
    @     0x7f57a6bdb35f  yb::RedisKeyValuePB::~RedisKeyValuePB() (yb/common/redis_protocol.pb.cc:4683)
    @     0x7f57a6bdb35f  yb::RedisReadRequestPB::SharedDtor() (yb/common/redis_protocol.pb.cc:2474)
    @     0x7f57a6bdb42e  yb::RedisReadRequestPB::~RedisReadRequestPB() (yb/common/redis_protocol.pb.cc:2470)
    @     0x7f57a6bdb480  yb::RedisReadRequestPB::~RedisReadRequestPB() (yb/common/redis_protocol.pb.cc:2471)
    @     0x7f57abb3ad08  std::default_delete<yb::RedisReadRequestPB>::operator()(yb::RedisReadRequestPB*) const (/home/centos/.linuxbrew-yb-build/Cellar/gcc/5.3.0/include/c++/5.3.0/bits/unique_ptr.h:76)
    @     0x7f57abb3ad08  std::unique_ptr<yb::RedisReadRequestPB, std::default_delete<yb::RedisReadRequestPB> >::~unique_ptr() (/home/centos/.linuxbrew-yb-build/Cellar/gcc/5.3.0/include/c++/5.3.0/bits/unique_ptr.h:236)
    @     0x7f57abb3ad08  yb::client::YBRedisReadOp::~YBRedisReadOp() (yb/client/yb_op.cc:109)
    @     0x7f57abac2f58  std::_Sp_counted_base<(__gnu_cxx::_Lock_policy)2>::_M_release() (/home/centos/.linuxbrew-yb-build/Cellar/gcc/5.3.0/include/c++/5.3.0/bits/shared_ptr_base.h:150)
    @     0x7f57abac2f58  std::__shared_count<(__gnu_cxx::_Lock_policy)2>::~__shared_count() (/home/centos/.linuxbrew-yb-build/Cellar/gcc/5.3.0/include/c++/5.3.0/bits/shared_ptr_base.h:659)
    @     0x7f57abac2f58  std::__shared_ptr<yb::client::YBOperation, (__gnu_cxx::_Lock_policy)2>::~__shared_ptr() (/home/centos/.linuxbrew-yb-build/Cellar/gcc/5.3.0/include/c++/5.3.0/bits/shared_ptr_base.h:925)
    @     0x7f57abac2f58  std::shared_ptr<yb::client::YBOperation>::~shared_ptr() (/home/centos/.linuxbrew-yb-build/Cellar/gcc/5.3.0/include/c++/5.3.0/bits/shared_ptr.h:93)
    @     0x7f57abac2f58  yb::client::internal::InFlightOp::~InFlightOp() (yb/client/in_flight_op.h:91)
    @     0x7f57abac2f58  void __gnu_cxx::new_allocator<yb::client::internal::InFlightOp>::destroy<yb::client::internal::InFlightOp>(yb::client::internal::InFlightOp*) (/home/centos/.linuxbrew-yb-build/Cellar/gcc/5.3.0/include/c++/5.3.0/ext/new_allocator.h:124)
    @     0x7f57abac2f58  std::enable_if<std::__and_<std::allocator_traits<std::allocator<yb::client::internal::InFlightOp> >::__destroy_helper<yb::client::internal::InFlightOp>::type>::value, void>::type std::allocator_traits<std::allocator<yb::client::internal::InFlightOp> >::_S_destroy<yb::client::internal::InFlightOp>(std::allocator<yb::client::internal::InFlightOp>&, yb::client::internal::InFlightOp*) (/home/centos/.linuxbrew-yb-build/Cellar/gcc/5.3.0/include/c++/5.3.0/bits/alloc_traits.h:285)
    @     0x7f57abac2f58  void std::allocator_traits<std::allocator<yb::client::internal::InFlightOp> >::destroy<yb::client::internal::InFlightOp>(std::allocator<yb::client::internal::InFlightOp>&, yb::client::internal::InFlightOp*) (/home/centos/.linuxbrew-yb-build/Cellar/gcc/5.3.0/include/c++/5.3.0/bits/alloc_traits.h:414)
    @     0x7f57abac2f58  std::_Sp_counted_ptr_inplace<yb::client::internal::InFlightOp, std::allocator<yb::client::internal::InFlightOp>, (__gnu_cxx::_Lock_policy)2>::_M_dispose() (/home/centos/.linuxbrew-yb-build/Cellar/gcc/5.3.0/include/c++/5.3.0/bits/shared_ptr_base.h:531)
    @     0x7f57abab5a84  std::_Sp_counted_base<(__gnu_cxx::_Lock_policy)2>::_M_release() (/home/centos/.linuxbrew-yb-build/Cellar/gcc/5.3.0/include/c++/5.3.0/bits/shared_ptr_base.h:150)
    @     0x7f57abab5a84  std::__shared_count<(__gnu_cxx::_Lock_policy)2>::~__shared_count() (/home/centos/.linuxbrew-yb-build/Cellar/gcc/5.3.0/include/c++/5.3.0/bits/shared_ptr_base.h:659)
    @     0x7f57abab5a84  std::__shared_ptr<yb::client::internal::InFlightOp, (__gnu_cxx::_Lock_policy)2>::~__shared_ptr() (/home/centos/.linuxbrew-yb-build/Cellar/gcc/5.3.0/include/c++/5.3.0/bits/shared_ptr_base.h:925)
    @     0x7f57abab5a84  std::shared_ptr<yb::client::internal::InFlightOp>::~shared_ptr() (/home/centos/.linuxbrew-yb-build/Cellar/gcc/5.3.0/include/c++/5.3.0/bits/shared_ptr.h:93)
    @     0x7f57abab5a84  void std::_Destroy<std::shared_ptr<yb::client::internal::InFlightOp> >(std::shared_ptr<yb::client::internal::InFlightOp>*) (/home/centos/.linuxbrew-yb-build/Cellar/gcc/5.3.0/include/c++/5.3.0/bits/stl_construct.h:93)
    @     0x7f57abab5a84  void std::_Destroy_aux<false>::__destroy<std::shared_ptr<yb::client::internal::InFlightOp>*>(std::shared_ptr<yb::client::internal::InFlightOp>*, std::shared_ptr<yb::client::internal::InFlightOp>*) (/home/centos/.linuxbrew-yb-build/Cellar/gcc/5.3.0/include/c++/5.3.0/bits/stl_construct.h:103)
    @     0x7f57abab5a84  void std::_Destroy<std::shared_ptr<yb::client::internal::InFlightOp>*>(std::shared_ptr<yb::client::internal::InFlightOp>*, std::shared_ptr<yb::client::internal::InFlightOp>*) (/home/centos/.linuxbrew-yb-build/Cellar/gcc/5.3.0/include/c++/5.3.0/bits/stl_construct.h:126)
    @     0x7f57abab5a84  void std::_Destroy<std::shared_ptr<yb::client::internal::InFlightOp>*, std::shared_ptr<yb::client::internal::InFlightOp> >(std::shared_ptr<yb::client::internal::InFlightOp>*, std::shared_ptr<yb::client::internal::InFlightOp>*, std::allocator<std::shared_ptr<yb::client::internal::InFlightOp> >&) (/home/centos/.linuxbrew-yb-build/Cellar/gcc/5.3.0/include/c++/5.3.0/bits/stl_construct.h:151)
    @     0x7f57abab5a84  std::vector<std::shared_ptr<yb::client::internal::InFlightOp>, std::allocator<std::shared_ptr<yb::client::internal::InFlightOp> > >::~vector() (/home/centos/.linuxbrew-yb-build/Cellar/gcc/5.3.0/include/c++/5.3.0/bits/stl_vector.h:424)
    @     0x7f57abab5a84  yb::client::internal::AsyncRpc::~AsyncRpc() (yb/client/async_rpc.cc:103)
    @     0x7f57abab8c80  yb::client::internal::AsyncRpcBase<yb::tserver::ReadRequestPB, yb::tserver::ReadResponsePB>::~AsyncRpcBase() (yb/client/async_rpc.h:111)
    @     0x7f57abab8c80  yb::client::internal::ReadRpc::~ReadRpc() (yb/client/async_rpc.cc:464)
    @     0x7f57abab5914  std::_Sp_counted_base<(__gnu_cxx::_Lock_policy)2>::_M_release() (/home/centos/.linuxbrew-yb-build/Cellar/gcc/5.3.0/include/c++/5.3.0/bits/shared_ptr_base.h:150)
    @     0x7f57abab5914  std::__shared_count<(__gnu_cxx::_Lock_policy)2>::~__shared_count() (/home/centos/.linuxbrew-yb-build/Cellar/gcc/5.3.0/include/c++/5.3.0/bits/shared_ptr_base.h:659)
    @     0x7f57abab5914  std::__shared_ptr<yb::rpc::RpcCommand, (__gnu_cxx::_Lock_policy)2>::~__shared_ptr() (/home/centos/.linuxbrew-yb-build/Cellar/gcc/5.3.0/include/c++/5.3.0/bits/shared_ptr_base.h:925)
    @     0x7f57abab5914  std::__shared_ptr<yb::rpc::RpcCommand, (__gnu_cxx::_Lock_policy)2>::reset() (/home/centos/.linuxbrew-yb-build/Cellar/gcc/5.3.0/include/c++/5.3.0/bits/shared_ptr_base.h:1022)
    @     0x7f57abab5914  yb::client::internal::AsyncRpc::Finished(yb::Status const&) (yb/client/async_rpc.cc:137)

Kubernetes cname only tries the first ip address on GKE

I created a new Kubernetes cluster on Google compute. Here are the pods:

$ kubectl get pods
NAME           READY     STATUS    RESTARTS   AGE
yb-master-0    1/1       Running   0          26m
yb-master-1    1/1       Running   0          26m
yb-master-2    1/1       Running   0          26m
yb-tserver-0   1/1       Running   0          26m
yb-tserver-1   1/1       Running   0          26m
yb-tserver-2   1/1       Running   0          26m

The master leader is yb-master-1. It only shows one live tserver. I found the following logs on the other tservers:

[root@yb-tserver-0 yugabyte]# tail -f /mnt/data0/yb-data/tserver/logs/yb-tserver.INFO
I0213 23:19:10.830147    25 heartbeater.cc:319] Connected to a leader master server at yb-masters.default.svc.cluster.local:7100
W0213 23:19:10.870188    25 heartbeater.cc:98] Master address 'yb-masters.default.svc.cluster.local:7100' resolves to 3 different addresses. Using 10.4.2.5:7100
I0213 23:19:10.870285    25 heartbeater.cc:370] Registering TS with master...
I0213 23:19:10.870371    25 heartbeater.cc:376] Sending a full tablet report to master...
W0213 23:19:10.870934    25 heartbeater.cc:561] Failed to heartbeat to yb-masters.default.svc.cluster.local:7100: Service unavailable (yb/tserver/heartbeater.cc:473): master is no longer the leader tries=1283, num=1, masters=yb-masters.default.svc.cluster.local:7100, code=Service unavailable
I0213 23:19:11.873162    25 heartbeater.cc:319] Connected to a leader master server at yb-masters.default.svc.cluster.local:7100
W0213 23:19:11.874910    25 heartbeater.cc:98] Master address 'yb-masters.default.svc.cluster.local:7100' resolves to 3 different addresses. Using 10.4.2.5:7100
I0213 23:19:11.874974    25 heartbeater.cc:370] Registering TS with master...
I0213 23:19:11.875042    25 heartbeater.cc:376] Sending a full tablet report to master...
W0213 23:19:11.875373    25 heartbeater.cc:561] Failed to heartbeat to yb-masters.default.svc.cluster.local:7100: Service unavailable (yb/tserver/heartbeater.cc:473): master

We only seem to be trying the first ip address 10.4.2.5.

I get the following as the ips when I resolve the cnames:

[root@yb-tserver-0 yugabyte]# nslookup yb-masters.default.svc.cluster.local
Server:		10.7.240.10
Address:	10.7.240.10#53

Non-authoritative answer:
Name:	yb-masters.default.svc.cluster.local
Address: 10.4.2.5
Name:	yb-masters.default.svc.cluster.local
Address: 10.4.1.7
Name:	yb-masters.default.svc.cluster.local
Address: 10.4.0.7

ZRANGEBYSCORE .. WITHSCORES has return order reversed

YUGABYTE:

127.0.0.1:2017> ZADD MCCHANG_TEST_SET2 123 "asdf"
(integer) 1
127.0.0.1:2017> ZRANGEBYSCORE MCCHANG_TEST_SET2 -inf +inf WITHSCORES
1) "123.000000"
2) "asdf"

REDIS:

127.0.0.1:6379>  ZADD MCCHANG_TEST_SET2 123 "asdf"
(integer) 1
127.0.0.1:6379>  ZRANGEBYSCORE MCCHANG_TEST_SET2 -inf +inf WITHSCORES
1) "asdf"
2) "123"

Jedis expects the second value to be a double and tries to parse it as such, leading to exceptions such as:

Caused by: java.lang.NumberFormatException: For input string: "387a086f-6463-3b26-ae11-5fc6c54aaacf"
        at sun.misc.FloatingDecimal.readJavaFormatString(FloatingDecimal.java:2043)
        at sun.misc.FloatingDecimal.parseDouble(FloatingDecimal.java:110)
        at java.lang.Double.parseDouble(Double.java:538)
        at java.lang.Double.valueOf(Double.java:502)
        at redis.clients.jedis.BuilderFactory$15.build(BuilderFactory.java:267)
        at redis.clients.jedis.BuilderFactory$15.build(BuilderFactory.java:257)
        at redis.clients.jedis.Response.build(Response.java:61)
        at redis.clients.jedis.Response.get(Response.java:37)
       ...

Potential stale reads

Hi folks,

I've read the Building a Strongly Consistent Cassandra with Better Performance post and it seems under some conditions YugaByte may return stale data.

The post says that YugaByte serves read requests directly from a leader without consulting the other nodes:

Because of the use of the RAFT consensus protocol, the data held by the quorum leader is guaranteed to be consistent. So a read operation requires only a single read (1x) from the leader.

As far as I know, this isn't completely true: if reads don't go through the Raft/Paxos rounds, there are possibilities for returning stale data. For example, there was a similar bug in etcd which forced them to introduce a ?quorum=true parameter.

Let me describe the conditions which may lead to this behavior. As I understand it, you use leases to make sure that a leader is still the leader. So imagine that we have three nodes: A (leader), B (follower), C (follower), and that the following events happen:

  1. A client successfully writes key:=1.
  2. A network partitioning happens which isolates A from its peers.
  3. Just after partitioning the whole machine A freezes.
  4. On the B, C side of the network leases expire and B becomes a new leader.
  5. The client successfully writes key:=2
  6. The A machine unfreezes
  7. The client reads and the request lands node A
  8. From A's perspective the leases haven't expired yet (its local clock was frozen), so it answers key=1
  9. The client receives an inconsistent result

Please point to an error in my reasoning if I got the situation wrong.

support brew-based install for macOS

support the following ways of managing a yugabyte-db installation on macOS
brew install yugabyte-db
brew upgrade yugabyte-db
brew cleanup yugabyte-db

A 3-node cluster of YugaByte doesn't tolerate the isolation of a single node

Hi folks,

I was testing how a 3-node cluster of YugaByte behaves (from a client's perspective) when one of the nodes is isolated from its peers, and observed an anomaly: the whole system became unavailable until the connection was restored.

Repo with Docker+JavaScript scripts to reproduce the problem: https://github.com/rystsov/perseus/tree/master/yugabyte

Testing environment

OS: Ubuntu 17.10
YugaByte: v0.9.1.0
NodeJS: v8.9.4
Redis's "redis" driver: "2.8.0"

A client executes three coroutines, each of which connects to a corresponding YugaByte node (yuga1, yuga2 and yuga3) and, in a loop:

  1. Reads a value from a db
  2. Increments it
  3. Writes it back

Each coroutine uses a different key to avoid collisions.

The client aggregates a number of successful iterations and dumps it every second:

#legend: time|yuga1|yuga2|yuga3|yuga1:err|yuga2:err|yuga3:err
1	70	68	118	0	0	0	2018/01/24 04:27:40
2	82	85	141	0	0	0	2018/01/24 04:27:41

The first column is the number of seconds elapsed since the beginning of the experiment; the next three columns contain the count of successful iterations during the last second for the client connected to each node; the following three columns are error counters; and the last one is the time.

After I isolated the yuga3 node, the system became unavailable for at least 366 seconds:

#legend: time|yuga1|yuga2|yuga3|yuga1:err|yuga2:err|yuga3:err
1	70	68	118	0	0	0	2018/01/24 04:27:40
2	82	85	141	0	0	0	2018/01/24 04:27:41
...
12	82	68	153	0	0	0	2018/01/24 04:27:51
13	93	81	159	0	0	0	2018/01/24 04:27:52
# isolating yuga3
# isolated yuga3
14	78	78	124	0	0	0	2018/01/24 04:27:53
15	0	0	0	0	0	0	2018/01/24 04:27:54
...
380	0	0	0	0	0	0	2018/01/24 04:34:00
381	0	0	0	0	0	0	2018/01/24 04:34:01
# rejoining yuga3
# rejoined yuga3
382	0	0	0	0	0	0	2018/01/24 04:34:02
383	0	0	0	0	0	0	2018/01/24 04:34:03
...
432	0	0	0	0	0	0	2018/01/24 04:34:52
433	0	0	0	0	0	0	2018/01/24 04:34:53
434	118	111	77	1	1	1	2018/01/24 04:34:54
435	219	166	80	0	0	0	2018/01/24 04:34:55

After I reestablished the connection, the system recovered in 51 seconds.

The logs: dead.tar.gz

Binding variables for lists do not work

I have the following code:

@RequestMapping(value = "/queryByIdAndDate" , method = POST)
public @ResponseBody String isoDateFormat(@RequestBody QueryTestRequest testRequest) {
     ImmutableMap<String, Object> parameters = ImmutableMap.of("ldids", ImmutableList.of(testRequest.getLdId()));
     Statement statement = new SimpleStatement("SELECT * FROM location_data.location_data WHERE ld_id IN :ldids", parameters);
     ResultSet resultSet = session.execute(statement); 
     List<Row> rows = resultSet.all();
     return String.valueOf(rows.size());
}

And kicking it off with:

curl -H 'Content-Type: application/json' -vvv --data '{"ldId": "0a44ea7e-2b26-40ff-b486-7b86c0853a08"}' http://localhost/api/locationUpdates/queryByIdAndDate

I get 0 as a response.

The problem is that the data exists in the table:

cqlsh:location_data> SELECT ld_id FROM location_data.location_data WHERE ld_id = 0a44ea7e-2b26-40ff-b486-7b86c0853a08 LIMIT 1;

 ld_id
--------------------------------------
 0a44ea7e-2b26-40ff-b486-7b86c0853a08

(1 rows)

Now, if I were to embed the list in the statement, instead of using bind variables, things work:

@RequestMapping(value = "/queryByIdAndDate" , method = POST)
public @ResponseBody String isoDateFormat(@RequestBody QueryTestRequest testRequest) {
	ImmutableMap<String, Object> parameters = ImmutableMap.of();
	String ldIdFragment = Joiner.on(",").join(ImmutableList.of(testRequest.getLdId()));
	Statement statement = new SimpleStatement("SELECT * FROM location_data.location_data WHERE ld_id IN ( " + ldIdFragment + " )", parameters);
	ResultSet resultSet = session.execute(statement);
	List<Row> rows = resultSet.all();
	return String.valueOf(rows.size());
}

With this code I now get 14 as a result, which matches the query in cqlsh:

cqlsh:location_data> SELECT ld_id FROM location_data.location_data WHERE ld_id = 0a44ea7e-2b26-40ff-b486-7b86c0853a08;

 ld_id
--------------------------------------
 0a44ea7e-2b26-40ff-b486-7b86c0853a08
 0a44ea7e-2b26-40ff-b486-7b86c0853a08
 0a44ea7e-2b26-40ff-b486-7b86c0853a08
 0a44ea7e-2b26-40ff-b486-7b86c0853a08
 0a44ea7e-2b26-40ff-b486-7b86c0853a08
 0a44ea7e-2b26-40ff-b486-7b86c0853a08
 0a44ea7e-2b26-40ff-b486-7b86c0853a08
 0a44ea7e-2b26-40ff-b486-7b86c0853a08
 0a44ea7e-2b26-40ff-b486-7b86c0853a08
 0a44ea7e-2b26-40ff-b486-7b86c0853a08
 0a44ea7e-2b26-40ff-b486-7b86c0853a08
 0a44ea7e-2b26-40ff-b486-7b86c0853a08
 0a44ea7e-2b26-40ff-b486-7b86c0853a08
 0a44ea7e-2b26-40ff-b486-7b86c0853a08

(14 rows)

alter table to add a UDT column crashes yb-tserver on my dev machine

So, I did an ALTER statement on a table, with a custom type.

Here is the short video of the scenario: https://asciinema.org/a/UVQcqu0ZFXvUoWNcEwxdNuj9x

Here is the log:

E1221 17:18:48.937922 196403200 process_context.cc:180] SQL Error: No Namespace Used
CREATE TYPE dagger ( junk DOUBLE );
            ^^^^^^
*** Aborted at 1513905573 (unix time) try "date -d @1513905573" if you are using GNU date ***
PC: @        0x108581084 yb::QLType::ToQLTypePB()
*** SIGSEGV (@0x8) received by PID 90780 (TID 0x70000b942000) stack trace: ***
    @     0x7fff579f0f5a _sigtramp
    @        0x10ec9a088 (unknown)
    @        0x108577a07 yb::ColumnSchemaToPB()
    @        0x107468df8 yb::client::YBTableAlterer::Data::ToRequest()
    @        0x107414f94 yb::client::YBTableAlterer::Alter()
    @        0x106bec73f yb::ql::Executor::ExecPTNode()
    @        0x106beb79c yb::ql::Executor::ExecTreeNode()
    @        0x106beae96 yb::ql::Executor::Execute()
    @        0x106bead38 yb::ql::Executor::ExecuteAsync()
    @        0x106b99f84 yb::ql::QLProcessor::RunAsync()
    @        0x106786ad2 yb::cqlserver::CQLProcessor::ProcessRequest()
    @        0x106785c05 yb::cqlserver::CQLProcessor::ProcessRequest()
    @        0x1067854a0 yb::cqlserver::CQLProcessor::ProcessCall()
    @        0x1067a1602 yb::cqlserver::CQLServiceImpl::Handle()
    @        0x108371218 yb::rpc::ServicePoolImpl::Handle()
    @        0x108371bc3 yb::rpc::TasksPool<>::WrappedTask::Run()
    @        0x108374e60 yb::rpc::(anonymous namespace)::Worker::Execute()
    @        0x10892c0ea yb::Thread::SuperviseThread()
    @     0x7fff579fa6c1 _pthread_body
    @     0x7fff579fa56d _pthread_start
    @     0x7fff579f9c5d thread_start

Encountered JedisConnectionException: Unexpected end of stream.

Occasionally, we are hitting this error stack in our client, and it seems like the YugaByte server may not be respecting the keep-alive timeout on the connection.

I was looking at the redis code, and it seems that the client timeout is infinite - https://github.com/antirez/redis/blob/3.2/src/server.h#L83 - can you confirm that the YB redis interface behaves the same way?

redis.clients.jedis.exceptions.JedisConnectionException: Unexpected end of stream.
at redis.clients.util.RedisInputStream.ensureFill(RedisInputStream.java:199)
at redis.clients.util.RedisInputStream.readByte(RedisInputStream.java:40)
at redis.clients.jedis.Protocol.process(Protocol.java:167)
at redis.clients.jedis.Protocol.read(Protocol.java:231)
at redis.clients.jedis.Connection.readProtocolWithCheckingBroken(Connection.java:363)
at redis.clients.jedis.Connection.getAll(Connection.java:333)
at redis.clients.jedis.Connection.getAll(Connection.java:325)
at redis.clients.jedis.Pipeline.sync(Pipeline.java:99)

Bind variables are case insensitive

I have the following java code:

ImmutableMap<String, Object> parameters = ImmutableMap.of("ldId", testRequest.getLdId(), "device_created_on", testRequest.getDate());
Statement statement = new SimpleStatement("SELECT * FROM location_data.location_data WHERE ld_id = :ldId AND device_created_on = :device_created_on", parameters);
ResultSet resultSet = session.execute(statement);

When I kick this off, I get an error from the server:

Caused by: com.datastax.driver.core.exceptions.InvalidQueryException: SQL error (yb/ql/ptree/process_context.cc:181): Invalid Arguments. SQL error (yb/ql/ptree/process_context.cc:181): Invalid Arguments. Runtime error (yb/cqlserver/cql_message.cc:66): Bind variable "ldid" not found
SELECT * FROM location_data.location_data WHERE ld_id = :ldId AND device_created_on = :device_created_on
^^^^^^
 (error -304)

Notice that the bind variable that it is looking for is ldid, while the variable name which I have in the statement is ldId.

When I make all variables lowercase, the query works.
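For reference, the working form looks like the statement below; both the bind markers in the CQL text and the keys of the driver-side parameter map must be lowercase, since the server appears to lowercase unquoted bind variable names. Illustrative only, based on the workaround described above.

-- Works: all bind variable names are lowercase, matching what the server expects.
SELECT * FROM location_data.location_data
WHERE ld_id = :ldid AND device_created_on = :device_created_on;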

"Operation timed out " exception when creating a table....

An "Operation timed out" exception occurs when performing "CREATE TABLE", as shown in the screenshot below. However, when running "DESCRIBE TABLES", the table does exist.

This happens when I run YB via "./yb-docker-ctl create", which uses Docker.

(screenshot attached)

Unable to create schema in a single node universe

Running yugabyte-0.9.1.0 on Ubuntu.

I create a keyspace with a replication factor of 1, and then when creating tables I get the following error:

SQL error (yb/yql/cql/ql/ptree/process_context.cc:181): Invalid Table Definition. Invalid argument (yb/common/wire_protocol.cc:148): Error creating table kairosdb.data_points on the master: Not enough live tablet servers to create a table with the requested replication factor 3. 1 tablet servers are alive.

If I try to connect using cqlsh I get the following:
./bin/cqlsh localhost
Connection error: ('Unable to connect to any servers', {'127.0.0.1': IndexError('list index out of range',)})

Call to mkstemp() failed with Invalid argument (error 22) (was: Unable to start 3-node cluster)

Hi folks,

I'm trying to start a 3-node cluster, and yb-master fails with the following error:

Log file created at: 2018/01/23 07:19:33
Running on machine: yuga3
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
F0123 07:19:33.281553    44 master_main.cc:82] Check failed: _s.ok() Bad status: IO error (yb/util/env_posix.cc:202): Unable to initialize catalog manager: Failed to initialize sys tables async: Failed to open new log: Call to mkstemp() failed on name template /yuga/mem/yb-master/yb-data/master/wals/table-sys.catalog.uuid/tablet-00000000000000000000000000000000/.tmp.newsegmentXXXXXX: Invalid argument (error 22)

Environment
OS: Ubuntu 17.10
Docker: 17.12.0-ce
Host: macOS 10.13.2
YugaByte: 0.9.1.0

I followed these instructions to install YugaByte and then this page to start a cluster, but I skipped the Prerequisites steps since they are CentOS 7 specific and I use Ubuntu.

The containers have the following mapping between hostnames and IPs:

yuga1: 172.31.0.4
yuga2: 172.31.0.2
yuga3: 172.31.0.3

On each container I executed the following command ($yuga* was set to a corresponding IP address):

/yuga/yugabyte-0.9.1.0/bin/yb-master --master_addresses $yuga1:7100,$yuga2:7100,$yuga3:7100 --fs_data_dirs "/yuga/mem/yb-master"

On yuga3 the command failed with the mentioned error. Please find the whole data dirs attached (including the logs).

Automatically detect if data/logs/wals are pointing to tmpfs and soft disable O_DIRECT instead of crashing

Recently we had a report (#17) of a Docker run with the data dirs pointing to a tmpfs path, which ultimately crashed the masters:
F0123 07:19:33.281553 44 master_main.cc:82] Check failed: _s.ok() Bad status: IO error (yb/util/env_posix.cc:202): Unable to initialize catalog manager: Failed to initialize sys tables async: Failed to open new log: Call to mkstemp() failed on name template /yuga/mem/yb-master/yb-data/master/wals/table-sys.catalog.uuid/tablet-00000000000000000000000000000000/.tmp.newsegmentXXXXXX: Invalid argument (error 22)

The crash happens because O_DIRECT is not actually supported on tmpfs!

As a proposed fix, we'll add some defensive code to our servers: if any of the paths used for data/logs/wals is on a file system where we cannot open a file with O_DIRECT, we softly downgrade the durable_wal_write flag to false and print a notice about it.

Failure running docker image

Here is the command and the output:
docker run --name yb-master-n1 --net yb-net yugabytedb/yugabyte:latest /home/yugabyte/bin/yb-master --fs_data_dirs=/mnt/disk0,/mnt/disk1 --master_addresses=yb-master-n1:7100,yb-master-n2:7100,yb-master-n3:7100 --rpc_bind_addresses=yb-master-n1:7100
F1121 04:38:32.989648 1 master_main.cc:78] Check failed: _s.ok() Bad status: Service unavailable (yb/server/hybrid_clock.cc:156): Cannot initialize clock: Error reading clock. ntp_gettime() failed: Operation not permitted
*** Check failure stack trace: ***
@ 0x7fe8c1fc026b DumpStackTraceAndExit (yb/util/logging.cc:146)
@ 0x7fe8c0f9804c google::LogMessage::Fail() (src/logging.cc:1478)
@ 0x7fe8c0f99f7c google::LogMessage::SendToLog() (src/logging.cc:1432)
@ 0x7fe8c0f97ba9 google::LogMessage::Flush() (src/logging.cc:1301)
@ 0x7fe8c0f9aa3e google::LogMessageFatal::~LogMessageFatal() (src/logging.cc:2013)
@ 0x405652 MasterMain (yb/master/master_main.cc:81)

*** Aborted at 1511239113 (unix time) try "date -d @1511239113" if you are using GNU date ***
PC: @ 0x7fe8bdeff832 __GI_abort
*** SIGSEGV (@0x0) received by PID 1 (TID 0x7fe8c8dc1a00) from PID 0; stack trace: ***
@ 0x7fe8be284110 (unknown)
@ 0x7fe8bdeff832 __GI_abort
@ 0x7fe8c1fc02bd yb::(anonymous namespace)::DumpStackTraceAndExit()
@ 0x7fe8c0f9804d google::LogMessage::Fail()
@ 0x7fe8c0f99f7d google::LogMessage::SendToLog()
@ 0x7fe8c0f97baa google::LogMessage::Flush()
@ 0x7fe8c0f9aa3f google::LogMessageFatal::~LogMessageFatal()
@ 0x405653 yb::master::MasterMain()
@ 0x7fe8bdeeaa85 __libc_start_main
@ 0x405153 (unknown)
@ 0x0 (unknown)

system_redis.redis should not be counted as a user table

The system_redis.redis table is auto-created in most cases, but in some cases it requires an explicit user action. Even in those cases, counting system_redis.redis as a user table doesn't make sense, since the functioning and use of that table are under the control of the system, not the end user.

See the attached screenshot (redis-table).

Couldn't parse host home-linux warning

Running yugabyte-0.9.1.0 on Ubuntu. I keep getting the following in the tserver log file:

W1231 01:52:22.961686 11175 cql_server.cc:120] Couldn't parse host home-linux, error: Invalid argument

Issue with parsing dates

Folks,

I am having an issue with parsing dates. Please look at this example:

cqlsh:location_data> SELECT * FROM location_data WHERE ld_id =  8a9b98ca-3fcb-4922-b735-599c03905886 AND device_created_on <= '2017-12-21 00:00:01.000000+0000';

 ld_id | device_created_on | latitude | longitude | total_distance | moving_status | speed_units | speed_value | status | status_key | status_id | total_halt_time | is_long_halt | summary_update | asset_summary_update | raw_data
-------+-------------------+----------+-----------+----------------+---------------+-------------+-------------+--------+------------+-----------+-----------------+--------------+----------------+----------------------+----------

(0 rows)
cqlsh:location_data> SELECT * FROM location_data WHERE ld_id =  8a9b98ca-3fcb-4922-b735-599c03905886 AND device_created_on <= '2017-12-21 00:00:01.000+00';

 ld_id                                | device_created_on               | latitude   | longitude | total_distance | moving_status | speed_units | speed_value | status    | status_key | status_id | total_halt_time | is_long_halt | summary_update | asset_summary_update | raw_data
--------------------------------------+---------------------------------+------------+-----------+----------------+---------------+-------------+-------------+-----------+------------+-----------+-----------------+--------------+----------------+----------------------+-------------------------------
 8a9b98ca-3fcb-4922-b735-599c03905886 | 2017-12-21 00:00:01.000000+0000 | -119.00867 |  36.07972 |        0.02488 |             1 |        KM/H |          76 | At pickup |       2104 |    100164 |               0 |        False |          False |                False | { ..... yada yada yada .......}
 8a9b98ca-3fcb-4922-b735-599c03905886 | 2017-12-21 00:00:00.000000+0000 | -119.00866 |  36.07936 |              0 |             1 |        null |           0 | At pickup |       2104 |    100164 |               0 |        False |           True |                False |  { ..... yada yada yada .......}

(2 rows)
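
One way to sidestep the timestamp-literal format question entirely (a minimal sketch, assuming the DataStax Java driver 3.x rather than cqlsh; table and column names are taken from the report) is to bind the cutoff as a java.util.Date so no string parsing is involved:

```java
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.SimpleStatement;
import java.util.Date;
import java.util.UUID;

public class TimestampQuery {
    // ldId and cutoff are supplied by the caller; the driver sends the
    // timestamp as a binary value, not as a formatted string literal.
    static ResultSet findUpTo(Session session, UUID ldId, Date cutoff) {
        SimpleStatement stmt = new SimpleStatement(
                "SELECT * FROM location_data.location_data "
              + "WHERE ld_id = ? AND device_created_on <= ?",
                ldId, cutoff);
        return session.execute(stmt);
    }
}
```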

C# client for Redis

Is there any supported client for .NET for the Redis service?
I tried connecting with StackExchange.Redis but am receiving the following errors:

System.Exception : Could not connect to Redis at: 127.0.0.1:6379
---- StackExchange.Redis.RedisConnectionException : It was not possible to connect to the redis server(s); to create a disconnected multiplexer, disable AbortOnConnectFail. Unexpected response to PING: BulkString: 4 bytes

Server logs:

I0129 17:48:47.377908   263 client-internal.cc:1218] Skipping reinitialize of master addresses, no REST endpoint or file specified
E0129 17:48:47.382961   264 redis_service.cc:817] Command SUBSCRIBE not yet supported. Arguments: [SUBSCRIBE, __Booksleeve_MasterChanged]. Raw: SUBSCRIBE\r\n$26\r\n__Booksleeve_MasterChanged
W0129 17:48:47.445613    39 inbound_call.cc:100] Connection torn down before Redis Call from 172.18.0.1:53598 could send its response: Network error (yb/util/net/socket.cc:522): Recv() got EOF from remote (error 108)
E0129 17:57:27.660684   264 redis_service.cc:817] Command CLUSTER not yet supported. Arguments: [CLUSTER, NODES]. Raw: CLUSTER\r\n$5\r\nNODES
W0129 17:57:27.724321    38 inbound_call.cc:100] Connection torn down before Redis Call from 172.18.0.1:53836 could send its response: Network error (yb/util/net/socket.cc:522): Recv() got EOF from remote (error 108)

Support a JSON datatype

Eventually, extend this type to support:

  • #153: to add jsonb datatype
  • #154: create tables and select rows with the jsonb datatype
  • #162: filtering
  • #163: fine-grained selects
  • aggregations
  • local indexes

Plan as summarized by @pritamdamania87:

  • Use jsonb for the serialization format.
  • The jsonb blob would be stored as a value in DocDB. Initially this implementation would incur a read-modify-write overhead for updates, but we would keep an optional flags field in front of the serialized jsonb that could be used to perform incremental updates later on; the merge could then be performed on the read path to avoid performance issues.
  • In terms of query language, we'd support the Postgres syntax, since we intend to have Postgres support in the future and can reuse some of this logic there (see the sketch after this list).
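
For illustration, here is a sketch of how the proposed Postgres-style jsonb syntax might look from the Java driver once it lands (hypothetical: the store.books table, the jsonb column type, and the -> / ->> operators reflect the intended syntax, not something available at the time of this issue; the keyspace store is assumed to already exist):

```java
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;

public class JsonbSketch {
    public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect()) {
            // Hypothetical table with a jsonb column, per the proposed syntax.
            session.execute(
                "CREATE TABLE IF NOT EXISTS store.books (id int PRIMARY KEY, details jsonb)");
            session.execute(
                "INSERT INTO store.books (id, details) VALUES "
              + "(1, '{\"name\": \"Macbeth\", \"author\": {\"first_name\": \"William\"}}')");

            // Postgres-style operators: -> yields jsonb, ->> yields text.
            Row row = session.execute(
                "SELECT details->'author'->>'first_name' FROM store.books WHERE id = 1").one();
            System.out.println(row.getString(0));
        }
    }
}
```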

ZREM on nonexistent key causes errors on subsequent ZADD operations

```
(integer) 1
127.0.0.1:2017> ZREM MCCHANG_TEST_SET1 "v1"
(integer) 1
127.0.0.1:2017> ZCARD MCCHANG_TEST_SET1
(integer) 0
127.0.0.1:2017> ZREM MCCHANG_TEST_SET2 "asdf"
(integer) 0
127.0.0.1:2017> ZADD MCCHANG_TEST_SET2 123 "asdf"
(error) Error: Something wrong
127.0.0.1:2017> ZCARD MCCHANG_TEST_SET2
(integer) 0
```

MCCHANG_TEST_SET2 did not exist prior to this.

Poor performance for queries which target multiple separate primary keys

Suppose we have the following table:

my_table
id_type, id_value, some_attribute

with primary key (id_type, id_value). There is no clustering column.

Queries of the form:

SELECT * FROM my_table WHERE id_type = 'foo' AND id_value = 'bar'
SELECT * FROM my_table WHERE id_type IN ('foo') AND id_value IN ('bar')

both seem to perform quite well.

However, with queries of the form SELECT * FROM my_table WHERE id_type = 'foo' AND id_value IN ('bar', 'baz'), we run into performance problems. These queries tend to perform about 1000 times worse on our installation than single-key queries.

My expectation is that these queries would perform at worst N times the cost of a single-key query (plus some marginal overhead), where N is the number of primary keys expected to be hit. That also assumes the per-key lookups execute serially, which does not have to be the case.

Since I am querying a table that has no clustering column, the maximum number of records I can get back is equal to the number of expected primary keys (it's a permutation of the key column values specified in the SELECT statement).
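
Until the IN-list path is optimized, one workaround consistent with that expectation (a minimal sketch, assuming the DataStax Java driver 3.x; my_table, id_type, and id_value come from the report, and the helper itself is hypothetical) is to fan the keys out as concurrent single-key queries:

```java
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.ResultSetFuture;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;
import java.util.ArrayList;
import java.util.List;

public class MultiKeyFanOut {
    // Issues one single-key query per id_value and lets them run concurrently,
    // instead of a single IN (...) query.
    static List<Row> fetch(Session session, String idType, List<String> idValues) {
        List<ResultSetFuture> futures = new ArrayList<>();
        for (String idValue : idValues) {
            futures.add(session.executeAsync(
                    "SELECT * FROM my_table WHERE id_type = ? AND id_value = ?",
                    idType, idValue));
        }
        List<Row> rows = new ArrayList<>();
        for (ResultSetFuture future : futures) {
            rows.addAll(future.getUninterruptibly().all());  // waits per query
        }
        return rows;
    }
}
```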

Timezone issue with java.util.Date and cassandra driver

Hello.

I am seeing an unpleasant problem, which I have now also seen in Mongo :) It is possible that I am wrong, but please bear with me.

In Java, the java.util.Date class represents an instant in time. It has no notion of a timezone; it is effectively a wrapper around the number of milliseconds since the beginning of the epoch. So, for example, a timestamp of 2018-01-08T00:15:13Z appears in Java like this (in the debugger):

(screenshot: the Date object as shown in the debugger)

The greatest sin of java.util.Date is that its toString() implementation prints in the LOCAL timezone instead of UTC. However, if one takes the date object and calls getTime(), they get the timestamp in milliseconds, which produces the correct result when converted to a human-readable format:

(screenshot: the epoch milliseconds converted to a human-readable UTC time)

Unfortunately, when this is written to YugaByte, the wrong time is recorded. You can see that the results are stored like this:

(screenshot: the value as stored in the database)

You can see that the time has shifted by 8 hours and the timezone stored is UTC, i.e. the driver or YugaByte interpreted my date as if it were in PST (eight hours behind UTC) and added 8 hours to make it UTC.
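
A quick way to tell a genuine storage shift apart from a toString() display artifact (a minimal round-trip sketch, assuming the DataStax Java driver 3.x and a hypothetical ks.ts_check table with an int key and a timestamp column) is to compare the epoch milliseconds written and read back:

```java
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;
import java.util.Date;

public class TimestampRoundTrip {
    public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect()) {
            Date written = new Date(1515370513000L);  // 2018-01-08T00:15:13Z
            session.execute("INSERT INTO ks.ts_check (id, created) VALUES (1, ?)", written);

            Row row = session.execute("SELECT created FROM ks.ts_check WHERE id = 1").one();
            Date read = row.getTimestamp("created");

            // Compare epoch millis; toString() prints in the local zone and can
            // make a correct value look shifted.
            System.out.println("written=" + written.getTime() + " read=" + read.getTime());
        }
    }
}
```

If the two numbers match, the shift is only in how the value is rendered; if they differ, the conversion happened on the write path.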
