OpenTenBase is an enterprise-level distributed HTAP open source database.

Home Page: https://opentenbase.org

License: Other




OpenTenBase Database Management System

OpenTenBase is an advanced enterprise-level database management system based on the prior work of the Postgres-XL project. It supports an extended subset of the SQL standard, including transactions, foreign keys, and user-defined types and functions. Additionally, it adds parallel computing, security, management, audit, and other capabilities.

OpenTenBase offers many of the same language interfaces as PostgreSQL, many of which are listed here:

https://www.postgresql.org/download

Overview

An OpenTenBase cluster consists of multiple CoordinateNodes, DataNodes, and GTM nodes. All user data resides in the DataNodes, the CoordinateNodes contain only metadata, and the GTM handles global transaction management. The CoordinateNodes and DataNodes share the same schema.

Users always connect to the CoordinateNodes, which divide each query into fragments that are executed on the DataNodes, then collect the results.

The latest version of this software may be obtained at:

https://github.com/OpenTenBase/OpenTenBase

For more information look at our website located at:

https://www.opentenbase.org/

Building

System Requirements:

Memory: 4 GB RAM minimum

OS: TencentOS 2, TencentOS 3, OpenCloudOS, CentOS 7, CentOS 8, Ubuntu

Dependencies

yum -y install gcc make readline-devel zlib-devel openssl-devel uuid-devel bison flex

or

apt install -y gcc make libreadline-dev zlib1g-dev libssl-dev libossp-uuid-dev bison flex

Create User 'opentenbase'

mkdir /data
useradd -d /data/opentenbase -s /bin/bash -m opentenbase # add user opentenbase
passwd opentenbase # set password

Building

git clone https://github.com/OpenTenBase/OpenTenBase

export SOURCECODE_PATH=/data/opentenbase/OpenTenBase
export INSTALL_PATH=/data/opentenbase/install

cd ${SOURCECODE_PATH}
rm -rf ${INSTALL_PATH}/opentenbase_bin_v2.0
chmod +x configure*
./configure --prefix=${INSTALL_PATH}/opentenbase_bin_v2.0 --enable-user-switch --with-openssl --with-ossp-uuid CFLAGS=-g
make clean
make -sj
make install
chmod +x contrib/pgxc_ctl/make_signature
cd contrib
make -sj
make install

Notice: if you use Ubuntu and see "initgtm: command not found" while running "init all", you may need to add ${INSTALL_PATH}/opentenbase_bin_v2.0/bin to /etc/environment.
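As a concrete sketch of that workaround (the install path is the one assumed in the Building section; adjust to your setup): /etc/environment is read by pam_env rather than a shell, so $PATH is not expanded there and directories must be listed explicitly. The snippet edits a local copy so it stays side-effect free.

```shell
# Sketch of the Ubuntu PATH workaround. /etc/environment is parsed by
# pam_env, not a shell, so list directories explicitly instead of
# referencing $PATH. Paths follow the Building section above.
BIN_DIR=/data/opentenbase/install/opentenbase_bin_v2.0/bin
cp /etc/environment ./environment.new 2>/dev/null || touch ./environment.new
echo "PATH=\"/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:${BIN_DIR}\"" >> ./environment.new
tail -n 1 ./environment.new
```

After reviewing environment.new, copy it over /etc/environment as root and log out and back in so pam_env re-reads it.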

Installation

Use the PGXC_CTL tool to build a cluster, for example a cluster with one global transaction manager (GTM), one coordinator (COORDINATOR), and two data nodes (DATANODE).

(topology diagram)

Preparation

  1. Install pgxc and add the pgxc installation path to the environment variables.

    PG_HOME=${INSTALL_PATH}/opentenbase_bin_v2.0
    export PATH="$PATH:$PG_HOME/bin"
    export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$PG_HOME/lib"
    export LC_ALL=C
  2. Disable SELinux and firewall (optional)

    vi /etc/selinux/config # set SELINUX=disabled
    # Disable firewalld
    systemctl disable firewalld
    systemctl stop firewalld
    
  3. Set up passwordless SSH login between the machines where the cluster nodes will be installed; the deploy and init steps SSH into each node's machine, so once passwordless login is in place no password prompts are needed.

    ssh-keygen -t rsa
    ssh-copy-id -i ~/.ssh/id_rsa.pub destination-user@destination-server
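Before moving on, it is worth confirming that passwordless login actually works; with BatchMode, ssh fails instead of prompting (the host below is the placeholder from the example above):

```shell
# Verify passwordless SSH: BatchMode makes ssh error out instead of
# asking for a password. Replace the placeholder host with a real node.
if ssh -o BatchMode=yes -o ConnectTimeout=5 destination-user@destination-server true 2>/dev/null; then
  echo "passwordless SSH OK"
else
  echo "passwordless SSH NOT configured"
fi
```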
    

Cluster startup steps

  1. Generate and fill in the configuration file pgxc_ctl.conf. The pgxc_ctl tool can generate a template for this file, in which you fill in the cluster node information. When pgxc_ctl starts, it creates a pgxc_ctl directory in the current user's home directory; after you enter the "prepare config" command, a configuration file template that can be edited directly is generated in that directory.

    • The pgxcInstallDir at the beginning of the configuration file is the pgxc installation location; set it as needed.
    pgxcInstallDir=${INSTALL_PATH}/opentenbase_bin_v2.0
    
    • For GTM, you need to configure the node name, IP, port and node directory.
    #---- GTM ----------
    gtmName=gtm
    gtmMasterServer=xxx.xxx.xxx.1
    gtmMasterPort=50001
    gtmMasterDir=${GTM_MASTER_DATA_DIR}/data/gtm_master
    
    • If you do not need a gtmSlave, set it directly to 'n' in the configuration of the corresponding node.
    gtmSlave=n
    

    If you need gtmSlave, configure it according to the instructions in the configuration file.

    • Coordinator node, which needs to be configured with IP, port, directory, etc.
    coordNames=(cn001)
    coordMasterCluster=(opentenbase_cluster)
    coordPorts=(30004)
    poolerPorts=(30014)
    coordPgHbaEntries=(0.0.0.0/0)
    coordMasterServers=(xxx.xxx.xxx.2)
    coordMasterDirs=(${COORD_MASTER_DATA_DIR}/data/cn_master/cn001)
    
    • Data nodes, similar to the nodes above: IP, port, directory, etc. (since there are two data nodes, provide one entry per node).
    primaryDatanode=dn001
    datanodeNames=(dn001 dn002)
    datanodePorts=(20008 20009)
    datanodePoolerPorts=(20018 20019)
    datanodeMasterCluster=(opentenbase_cluster opentenbase_cluster)
    datanodePgHbaEntries=(0.0.0.0/0)
    datanodeMasterServers=(xxx.xxx.xxx.3 xxx.xxx.xxx.4)
    datanodeMasterDirs=(${DATANODE_MASTER_DATA_DIR}/data/dn_master/dn001 ${DATANODE_MASTER_DATA_DIR}/data/dn_master/dn002)
    

    There are coordSlave and datanodeSlave settings corresponding to the coordinator and data nodes. If you do not need them, set them to 'n'; otherwise, configure them according to the instructions in the configuration file.

    In addition, two kinds of ports must be configured for each coordinator and datanode: poolerPort and port. poolerPort is used by a node to communicate with other nodes; port is the port used to log in to the node. The two must be set to different values, otherwise they conflict and the cluster cannot start.

    Each node needs its own directory; nodes cannot share the same directory.

  2. Distribute the installation package (deploy all). After filling in the configuration file, run the pgxc_ctl tool and enter the "deploy all" command to distribute the installation package to each node's machine.

  3. Initialize each node of the cluster (init all). Once the installation package has been distributed, enter the "init all" command in the pgxc_ctl tool to initialize all the nodes listed in pgxc_ctl.conf and start the cluster. At this point the cluster is up and running.
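The steps above can be sketched as a non-interactive session; pgxc_ctl also accepts each command at its own prompt (passing commands as arguments is a sketch, check your build's pgxc_ctl usage):

```shell
# The cluster bring-up steps above, end to end (a sketch; run as the
# opentenbase user with $PG_HOME/bin on PATH).
pgxc_ctl "prepare config"   # write the pgxc_ctl.conf template to ~/pgxc_ctl
# ... edit ~/pgxc_ctl/pgxc_ctl.conf as described above ...
pgxc_ctl "deploy all"       # copy the install tree to every node host
pgxc_ctl "init all"         # initialize every node and start the cluster
pgxc_ctl "monitor all"      # optional: check that each node is running
```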

Usage

$ psql -h ${CoordinateNode_IP} -p ${CoordinateNode_PORT} -U ${pgxcOwner} -d postgres

postgres=# create default node group default_group with (dn001,dn002);
CREATE NODE GROUP
postgres=# create sharding group to group default_group;
CREATE SHARDING GROUP
postgres=# create table foo(id bigint, str text) distribute by shard(id);
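A quick smoke test of the table created above might look like this (a sketch; the connection placeholders match the psql line at the top of this section):

```shell
# Insert a few rows into the shard table and read them back through the
# coordinator (placeholders as in the Usage section above).
psql -h ${CoordinateNode_IP} -p ${CoordinateNode_PORT} -U ${pgxcOwner} -d postgres <<'SQL'
INSERT INTO foo VALUES (1, 'hello'), (2, 'world');
SELECT count(*) FROM foo;   -- rows are gathered from both datanodes
SQL
```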

References

https://docs.opentenbase.org/

Who is using OpenTenBase

Tencent

License

OpenTenBase is licensed under the BSD 3-Clause License. Copyright and license information can be found in the file LICENSE.txt.

Contributors

Thanks to all contributors here: CONTRIBUTORS

News and Events

Latest
Special Review of Cloud Native Open Source Project Application Practice

Blogs and Articles

Blogs and Articles
Quick Start

History

history_events


opentenbase's Issues

[Advanced][Adapted]Adapt to AI vector database and expand OpenTenBase ecosystem

In the era of large AI models, help OpenTenBase adapt to vector database plug-ins and enrich the OpenTenBase ecosystem.

First Priority:
Target: make pgvecto.rs compatible and adapted to OpenTenBase. pgvecto.rs is a Postgres vector similarity search plugin written in Rust; its HNSW algorithm is 20 times faster than pgvector at 90% recall. https://github.com/tensorchord/pgvecto.rs

Requirements: Install pgvecto.rs and OpenTenBase to ensure compatibility and error-free adaptation.

Requirements: The local installation and testing must be completed accurately, and documents must be written and contributed to the community.


Extended tasks:
Target:PostgresML compatible adaptation, https://github.com/postgresml/postgresml

[Advanced][Develop]Adapt PostGIS to OpenTenBase

PostGIS is a geographic information component. The stand-alone (single-node) build already supports PostGIS; it now needs to be adapted to the distributed database.

  • Goal: Modify the PG10-adapted version of PostGIS so that it works with OpenTenBase.
  • Requirements: Solve all errors and run through all regress use cases.


[Intermediate][Observable] OpenTenBase monitoring based on Grafana/Prometheus

Use Grafana/Prometheus to monitor OpenTenBase and output documents to contribute to the community.

  • Goal: OpenTenBase monitoring based on Grafana/Prometheus and submit PR to community docs
  • Requirements: Test run-through locally and have final monitoring screenshots

[Advanced][Develop]Read data from COS

Read data from COS.

Goal: Big data integration, with direct access to remote COS and other storage platforms.
Requirements: The data is not written to disk and can be queried and operated directly through SQL commands.


command not found

Question (问题)

Following this tutorial, I compiled the source code and finished the configuration files, such as ssh, .bashrc, and pgxc_ctl.conf. In /usr/bin/bash I can run commands like pg_ctl, but running the init all command through the pgxc_ctl tool produces: bash: line 1: pg_ctl: command not found.


Screenshot

(screenshot attached)

Help

How can I resolve this? Thanks.

[Advanced][Develop]Read data from Redis

Read data from Redis.

Goal: Big data integration, with direct access to remote Redis and other storage platforms.
Requirements: The data is not written to disk and can be queried and operated directly through SQL commands.


[Intermediate] [Adapted]Apache SeaTunnel and OpenTenBase compatible adaptation

Goal: Adapt Apache SeaTunnel and use OpenTenBase as the data source

Requirements: The local installation and testing must be completed accurately, and documents must be written and contributed to the community.


Apache SeaTunnel: a next-generation high-performance, distributed, massive data integration tool.
https://seatunnel.incubator.apache.org/

[Advanced][Develop]Read data from MySQL

Read data from MySQL

  • Goal: Big data integration, direct access to remote MySQL
  • Requirements: The data is not written to disk and can be queried and operated directly through SQL commands.


[Advanced][Develop]Read data from HDFS

Read data from HDFS

Goal: Big data integration, with direct access to remote HDFS.
Requirements: The data is not written to disk and can be queried and operated directly through SQL commands.


[Intermediate][Deployment] Make rpm for OpenTenBase installation.

Make rpm for OpenTenBase installation.

Goal: Make rpm for OpenTenBase installation.
Requirements: The local installation and testing must be completed accurately, and documents must be written and contributed to the community.


build doc error, /usr/bin/osx:func.sgml:19849:11:E: document type does not allow element "ROW" here

When I installed the dependencies

apt install docbook docbook-xml docbook-xsl fop libxml2-utils opensp xsltproc

according to doc/src/sgml/docguide.sgml, lines 270-272:

<programlisting>
apt-get install docbook docbook-xml docbook-xsl fop libxml2-utils opensp xsltproc
</programlisting>

I executed make html, which produced this error, and I don't know how to solve it. (error screenshot attached)

[Intermediate] [Adapted]PG Admin and OpenTenBase compatible adaptation

Target: PG Admin and OpenTenBase compatible adaptation.
Requirements: Install PG Admin and OpenTenBase, ensuring the adaptation is compatible and error-free.


Requirements: The local installation and testing must be completed accurately, and documents must be written and contributed to the community.


OLAP leading to SIGSEGV-based crashes

With set prefer_olap = 'on' we observe process crashes when running TPC-H benchmark queries (for instance Q2), already at scale factor 10, in parallel with more than 10 clients on a single coordinator. The time until a crash occurs drops sharply with the number of clients; with more than 200 we observe crashes after only a few seconds. (If useful, we can provide scripts to reproduce this issue.)

It seems that memory gets corrupted. During a crash, the first element of the memory freelist always points to a non-accessible region (here 0x10):

freelist = {0x0, 
    0x10, 0x0, 0x0, 0x0, 0x7fca55abbfd0, 0x0, 0x0, 0x0, 0x23fae98, 0x0}

This results in a SIGSEGV in the memory allocator.

Stack trace:

#0  AllocSetAlloc (context=0x238ef18, size=16) at aset.c:707
#1  0x0000000000990f78 in palloc (size=size@entry=16) at mcxt.c:935
#2  0x0000000000724bb4 in new_list (type=type@entry=T_IntList) at list.c:68
#3  0x0000000000724d45 in lappend_int (list=list@entry=0x0, datum=4) at list.c:151
#4  0x0000000000677d56 in ExecInitQual (qual=<optimized out>, parent=parent@entry=0x24d0378) at execExpr.c:206
#5  0x000000000069d432 in ExecInitIndexScan (node=node@entry=0x24151a0, estate=estate@entry=0x23f65a0, eflags=eflags@entry=1) at nodeIndexscan.c:931
#6  0x0000000000684f76 in ExecInitNode (node=0x24151a0, estate=estate@entry=0x23f65a0, eflags=1) at execProcnode.c:225
#7  0x00000000006a6418 in ExecInitNestLoop (node=node@entry=0x2414620, estate=estate@entry=0x23f65a0, eflags=<optimized out>, eflags@entry=1)
    at nodeNestloop.c:338
#8  0x00000000006850aa in ExecInitNode (node=0x2414620, estate=estate@entry=0x23f65a0, eflags=eflags@entry=1) at execProcnode.c:298
#9  0x00000000006a63f6 in ExecInitNestLoop (node=node@entry=0x2414190, estate=estate@entry=0x23f65a0, eflags=eflags@entry=1) at nodeNestloop.c:333
#10 0x00000000006850aa in ExecInitNode (node=0x2414190, estate=estate@entry=0x23f65a0, eflags=eflags@entry=1) at execProcnode.c:298
#11 0x00000000006a63f6 in ExecInitNestLoop (node=node@entry=0x24132d8, estate=estate@entry=0x23f65a0, eflags=eflags@entry=1) at nodeNestloop.c:333
#12 0x00000000006850aa in ExecInitNode (node=0x24132d8, estate=estate@entry=0x23f65a0, eflags=eflags@entry=1) at execProcnode.c:298
#13 0x000000000069116b in ExecInitAgg (node=node@entry=0x24131c0, estate=estate@entry=0x23f65a0, eflags=eflags@entry=1) at nodeAgg.c:3911
#14 0x000000000068512e in ExecInitNode (node=0x24131c0, estate=estate@entry=0x23f65a0, eflags=eflags@entry=1) at execProcnode.c:331
#15 0x00000000006df01a in ExecShutdownRemoteSubplan (node=node@entry=0x23f71d0) at execRemote.c:11373
#16 0x0000000000684e11 in ExecShutdownNode (node=0x23f71d0) at execProcnode.c:873
#17 0x00000000007247cf in planstate_tree_walker (planstate=planstate@entry=0x23f6bc8, walker=walker@entry=0x684d9d <ExecShutdownNode>, 
    context=context@entry=0x0) at nodeFuncs.c:3784
#18 0x0000000000684dc5 in ExecShutdownNode (node=0x23f6bc8) at execProcnode.c:856
#19 0x00000000007205b6 in planstate_walk_subplans (plans=<optimized out>, walker=walker@entry=0x684d9d <ExecShutdownNode>, context=context@entry=0x0)
    at nodeFuncs.c:3864
#20 0x0000000000724837 in planstate_tree_walker (planstate=planstate@entry=0x245f370, walker=walker@entry=0x684d9d <ExecShutdownNode>, 
    context=context@entry=0x0) at nodeFuncs.c:3844
#21 0x0000000000684dc5 in ExecShutdownNode (node=0x245f370) at execProcnode.c:856
#22 0x00000000007247cf in planstate_tree_walker (planstate=planstate@entry=0x245eed8, walker=walker@entry=0x684d9d <ExecShutdownNode>, 
    context=context@entry=0x0) at nodeFuncs.c:3784
#23 0x0000000000684dc5 in ExecShutdownNode (node=node@entry=0x245eed8) at execProcnode.c:856
#24 0x000000000067ee42 in ExecutePlan (estate=estate@entry=0x23f65a0, planstate=0x245eed8, use_parallel_mode=<optimized out>, 
    operation=operation@entry=CMD_SELECT, sendTuples=sendTuples@entry=1 '\001', numberTuples=numberTuples@entry=0, direction=ForwardScanDirection, 
    dest=0x22c3948, execute_once=1 '\001') at execMain.c:2063
#25 0x000000000067f0a9 in standard_ExecutorRun (queryDesc=0x2313a50, direction=ForwardScanDirection, count=0, execute_once=<optimized out>) at execMain.c:466
#26 0x000000000067f163 in ExecutorRun (queryDesc=queryDesc@entry=0x2313a50, direction=direction@entry=ForwardScanDirection, count=count@entry=0, 
    execute_once=<optimized out>) at execMain.c:409
#27 0x0000000000861ed9 in PortalRunSelect (portal=portal@entry=0x2260510, forward=forward@entry=1 '\001', count=0, count@entry=9223372036854775807, 
    dest=dest@entry=0x22c3948) at pquery.c:1722
#28 0x000000000086438a in PortalRun (portal=portal@entry=0x2260510, count=count@entry=9223372036854775807, isTopLevel=isTopLevel@entry=1 '\001', 
    run_once=<optimized out>, dest=dest@entry=0x22c3948, altdest=altdest@entry=0x22c3948, completionTag=0x7ffe6a2b6b50 "") at pquery.c:1362
#29 0x000000000085fb15 in exec_execute_message (portal_name=portal_name@entry=0x22c3530 "p_1_1dfd6c_2_79f38aea", max_rows=9223372036854775807, 
    max_rows@entry=0) at postgres.c:3065
#30 0x0000000000860c65 in PostgresMain (argc=<optimized out>, argv=argv@entry=0x20853d0, dbname=<optimized out>, username=<optimized out>) at postgres.c:5645
#31 0x00000000007d3a48 in BackendRun (port=port@entry=0x20fb6b0) at postmaster.c:5034
#32 0x00000000007d5b3f in BackendStartup (port=port@entry=0x20fb6b0) at postmaster.c:4706
#33 0x00000000007d5d41 in ServerLoop () at postmaster.c:1963
#34 0x00000000007d7058 in PostmasterMain (argc=argc@entry=5, argv=argv@entry=0x20835a0) at postmaster.c:1571
#35 0x000000000072052f in main (argc=5, argv=0x20835a0) at main.c:233

The database itself throws error messages like this:

ERROR:  Failed to receive more data from data node 16394
WARNING:  combiner is not prepared for instrumentation
WARNING:  pgxc_abort_connections dn node:dn6 invalid socket 4294967295!
ERROR:  node:dn2, backend_pid:4190542, nodename:dn1,backend_pid:3367739,message:Failed to receive more data from data node 16394
ERROR:  Failed to receive more data from data node 16394
WARNING:  pgxc_abort_connections dn node:dn6 invalid socket 4294967295!

[Advanced][Adapted]Import OpenStreetMap (OSM) data into an OpenTenBase/PostGIS database

Target:Osm2pgsql imports OpenStreetMap (OSM) data into a PostgreSQL/PostGIS database. It is an essential part of many rendering toolchains, the Nominatim geocoder and other applications processing OSM data.

Requirements: The local installation and testing must be completed accurately, and documents must be written and contributed to the community.


Adapt osm2pgsql to OpenTenBase; see the project's official website for details.
https://osm2pgsql.org/
https://github.com/osm2pgsql-dev/osm2pgsql

[Intermediate][Deployment] Make OpenTenBase container image

Goal: Use Docker to create an OpenTenBase container image.
Requirements: The local installation and testing must be completed accurately, and documents must be written and contributed to the community.


[Advanced][Develop]Read data from Ceph

Read data from Ceph.

Goal: Big data integration, with direct access to remote Ceph and other storage platforms.
Requirements: The data is not written to disk and can be queried and operated on directly through SQL commands.


[Advanced][Develop]Make the OpenTenBase partition table Oracle syntax compatible

Make the OpenTenBase partition table syntax Oracle-compatible, including but not limited to: alter table <main table> truncate partition (<subtable>); select xxx from <main table> partition (<subtable>) where xxx; etc.

  • Goal: Switch from Oracle to OpenTenBase at minimum cost, remaining compatible with Oracle syntax while running correctly on the OpenTenBase architecture.


tbase deployment error

Cluster configuration: 3 machines, each with 4 CPU cores and 8 GB of memory, running CentOS 7.
The following error is reported during deployment:
fixing permissions on existing directory /home/tbase/data/coord ... ok
creating subdirectories ... ok
selecting default max_connections ... 10
selecting default shared_buffers ... 400kB
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
running bootstrap script ... 2024-01-07 14:46:23.019 CST [21137,coord(0.0)] FATAL: could not map anonymous shared memory: Cannot allocate memory
2024-01-07 14:46:23.019 CST [21137,coord(0.0)] HINT: This error usually means that PostgreSQL's request for a shared memory segment exceeded available memory, swap space, or huge pages. To reduce the request size (currently 1168652816 bytes), reduce PostgreSQL's shared memory usage, perhaps by reducing shared_buffers or max_connections.
child process exited with exit code 1
initdb: removing contents of data directory "/home/tbase/data/coord"
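For scale, the arithmetic behind that HINT can be checked directly. The byte count below comes from the log above; on an 8 GB host initializing several nodes at once, a ~1.1 GB shared memory segment per node can easily exhaust memory, which is why lowering shared_buffers or max_connections lets initdb succeed:

```shell
# Size of the shared memory segment the failing bootstrap asked for,
# using the byte count from the FATAL/HINT lines above.
request_bytes=1168652816
echo "requested segment: $((request_bytes / 1024 / 1024)) MB"
```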

[Advanced][Develop]Find historical SQL with execution time greater than the set value.

Find historical SQL with execution time greater than the set value.

  • Goal: Find historical SQL that is greater than 10 seconds and perform targeted optimization.
  • Requirements: Read this from the database kernel via a plugin that records each SQL statement's start and end time, and log the statement when it exceeds the expected time.

