pgspider / griddb_fdw

GridDB Foreign Data Wrapper for PostgreSQL

License: Other

Topics: fdw, foreign-data-wrapper, foreign-tables, griddb, griddb-fdw, postgresql, postgresql-extension

griddb_fdw's Introduction

# PGSpider
PGSpider is a high-performance SQL cluster engine for distributed big data.  
PGSpider can access a number of data sources through Foreign Data Wrappers (FDWs) and retrieves the distributed data vertically.  
Usage of PGSpider is the same as PostgreSQL, except that its program name is `pgspider` and its default port number is `4813`. You can use any PostgreSQL client application, such as libpq or psql.

## Features
* Multi-Tenant  
    Users can easily fetch records from multiple tables with a single SQL query.  
    If each data source has tables with a similar schema, PGSpider can present them as a single virtual table, which we call a Multi-Tenant table.  

* Modification  
    Users can modify data in a Multi-Tenant table with INSERT/UPDATE/DELETE queries.  
    For INSERT, PGSpider uses a round-robin method: it chooses one alive node that supports INSERT, rotating to the node after the previous target.  
    For UPDATE/DELETE, PGSpider executes the statement on all alive nodes that support UPDATE/DELETE.  
    PGSpider supports both direct and foreign modification.  
    PGSpider supports bulk INSERT via the batch_size option (see the sketch after this feature list).  
    - If the user specifies the batch size, PGSpider reads it from the foreign table or foreign server options.
    - If batch_size is 1, child nodes are told to execute a simple insert. Otherwise, a batch insert is executed if the child node supports it.
    - If batch_size is not specified on a Multi-Tenant table, it is calculated automatically from the batch sizes of the child tables and the number of child nodes using the Least Common Multiple (LCM) method.  
      If the resulting batch size is too large, the limit value (6553500) is used instead.  
      PGSpider distributes records to data sources evenly, not only within one query but also across many queries.

* Parallel processing  
    PGSpider executes queries and fetches results from child nodes in parallel.  
    PGSpider expands a Multi-Tenant table into its child tables and creates a new thread for each child table to access the corresponding data source.

* Pushdown   
    WHERE clauses and aggregate functions are pushed down to child nodes.  
    Pushing AVG, STDDEV, and VARIANCE down to a Multi-Tenant table would normally cause an error; PGSpider handles these functions so that they can still be executed (see the sketch after this feature list).
  
* Data Compression Transfer  
    PGSpider supports transferring data to another data source via a Cloud Function.  
    Data is compressed, transmitted to the Cloud Function, and then transferred to the data source.  
    This feature helps PGSpider control and reduce the amount of data transferred between PGSpider and the destination data source, which in turn reduces usage fees on the cloud service.
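
A minimal sketch of the batch_size option and aggregate pushdown described above, assuming the `postgres_svr` child server and the Multi-Tenant table `t1` created in the Usage section below; the values are illustrative.

```sql
-- Set a batch size on a child foreign server; it can also be set as a
-- foreign table option. Bulk INSERTs to this node are then batched.
ALTER SERVER postgres_svr OPTIONS (ADD batch_size '100');

-- Rows inserted through the Multi-Tenant table are distributed across the
-- child nodes in round-robin fashion.
INSERT INTO t1(i, t) SELECT i, 'row ' || i FROM generate_series(1, 1000) AS i;

-- AVG is computed across child nodes even though a naive pushdown of AVG
-- to a Multi-Tenant table would normally cause an error.
SELECT avg(i) FROM t1;
```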

## How to build PGSpider

Clone PGSpider source code.
<pre>
git clone https://github.com/pgspider/pgspider.git
</pre>

Build and install PGSpider and extensions.
<pre>
cd pgspider
./configure
make
sudo make install
cd contrib/pgspider_core_fdw
make
sudo make install
cd ../pgspider_fdw
make
sudo make install
</pre>

The default install directory is /usr/local/pgspider.

## Usage
For example, we will create two different child nodes, SQLite and PostgreSQL, which PGSpider accesses as the root node.
Please install SQLite and PostgreSQL for the child nodes.

After that, we install the PostgreSQL FDW and SQLite FDW into PGSpider.

Install SQLite FDW 
<pre>
cd ../
git clone https://github.com/pgspider/sqlite_fdw.git
cd sqlite_fdw
make
sudo make install
</pre>
Install PostgreSQL FDW 
<pre>
cd ../postgres_fdw
make
sudo make install
</pre>

### Start PGSpider
The PGSpider binary names are the same as PostgreSQL's.  
Note that the default install directory is different:
<pre>
/usr/local/pgspider
</pre>

Create database cluster and start server.
<pre>
cd /usr/local/pgspider/bin
./initdb -D ~/pgspider_db
./pg_ctl -D ~/pgspider_db start
./createdb pgspider
</pre>

Connect to PGSpider.
<pre>
./psql pgspider
</pre>

### Load extension
PGSpider (Parent node)
<pre>
CREATE EXTENSION pgspider_core_fdw;
</pre>

PostgreSQL, SQLite (Child node)
<pre>
CREATE EXTENSION postgres_fdw;
CREATE EXTENSION sqlite_fdw;
</pre>

### Create server
PGSpider (Parent node)
<pre>
CREATE SERVER parent FOREIGN DATA WRAPPER pgspider_core_fdw OPTIONS (host '127.0.0.1', port '4813');
</pre>

PostgreSQL, SQLite (Child node)  
In this example, the child PostgreSQL node is on localhost, port 5432.  
The SQLite node's database is /tmp/temp.db.
<pre>
CREATE SERVER postgres_svr FOREIGN DATA WRAPPER postgres_fdw OPTIONS(host '127.0.0.1', port '5432', dbname 'postgres');
CREATE SERVER sqlite_svr FOREIGN DATA WRAPPER sqlite_fdw OPTIONS(database '/tmp/temp.db');
</pre>

### Create user mapping
PGSpider (Parent node)

Create a user mapping for PGSpider. The user and password are those of the current psql user.
<pre>
CREATE USER MAPPING FOR CURRENT_USER SERVER parent OPTIONS(user 'user', password 'pass');
</pre>

PostgreSQL (Child node)
<pre>
CREATE USER MAPPING FOR CURRENT_USER SERVER postgres_svr OPTIONS(user 'user', password 'pass');
</pre>
SQLite (Child node)  
No need to create user mapping.

### Create Multi-Tenant table
PGSpider (Parent node)  
You need to declare a column named "__spd_url" on the parent table.  
This column holds the node location in PGSpider; it lets you know which node each row comes from.  
In this example, we define table 't1' to get data from the PostgreSQL node and the SQLite node.
<pre>
CREATE FOREIGN TABLE t1(i int, t text, __spd_url text) SERVER parent;
</pre>

When expanding a Multi-Tenant table to data source tables, PGSpider searches for child node tables named [Multi-Tenant table name]__[data source name]__0.  

PostgreSQL, SQLite (Child node)
<pre>
CREATE FOREIGN TABLE t1__postgres_svr__0(i int, t text) SERVER postgres_svr OPTIONS (table_name 't1');
CREATE FOREIGN TABLE t1__sqlite_svr__0(i int, t text) SERVER sqlite_svr OPTIONS (table 't1');
</pre>

### Access Multi-Tenant table
<pre>
SELECT * FROM t1;
  i |  t  | __spd_url 
----+-----+----------------
  1 | aaa | /sqlite_svr/
  2 | bbb | /sqlite_svr/
 10 | a   | /postgres_svr/
 11 | b   | /postgres_svr/
(4 rows)
</pre>

### Access Multi-Tenant table using node filter
You can choose which nodes to read from by adding an 'IN' clause after the FROM item (table name).

<pre>
SELECT * FROM t1 IN ('/postgres_svr/');
  i | t | __spd_url 
----+---+----------------
 10 | a | /postgres_svr/
 11 | b | /postgres_svr/
(2 rows)
</pre>

### Modify Multi-Tenant table
<pre>
SELECT * FROM t1;
  i |  t  | __spd_url 
----+-----+----------------
  1 | aaa | /sqlite_svr/
 11 | b   | /postgres_svr/
(2 rows)

INSERT INTO t1 VALUES (4, 'c');
INSERT 0 1

SELECT * FROM t1;
  i |  t  | __spd_url 
----+-----+----------------
  1 | aaa | /sqlite_svr/
  4 | c   | /sqlite_svr/
 11 | b   | /postgres_svr/
(3 rows)

UPDATE t1 SET i = 5;
UPDATE 3

SELECT * FROM t1;
 i |  t  | __spd_url 
---+-----+----------------
 5 | aaa | /sqlite_svr/
 5 | c   | /sqlite_svr/
 5 | b   | /postgres_svr/
(3 rows)

DELETE FROM t1;
DELETE 3

SELECT * FROM t1;
 i | t | __spd_url
---+---+-----------
(0 rows)
</pre>

### Modify Multi-Tenant table using node filter
You can choose which nodes to modify by adding an 'IN' clause after the table name.

<pre>
SELECT * FROM t1;
  i |  t  | __spd_url 
----+-----+----------------
  1 | aaa | /sqlite_svr/
 11 | b   | /postgres_svr/
(2 rows)

INSERT INTO t1 IN ('/postgres_svr/') VALUES (4, 'c');

SELECT * FROM t1;
  i |  t  | __spd_url 
----+-----+----------------
  1 | aaa | /sqlite_svr/
  4 | c   | /postgres_svr/
 11 | b   | /postgres_svr/
(3 rows)

UPDATE t1 IN ('/postgres_svr/') SET i = 5;
UPDATE 1

SELECT * FROM t1;
 i |  t  | __spd_url 
---+-----+----------------
 1 | aaa | /sqlite_svr/
 5 | c   | /postgres_svr/
 5 | b   | /postgres_svr/
(3 rows)

DELETE FROM t1 IN ('/sqlite_svr/');
DELETE 1

SELECT * FROM t1;
 i | t | __spd_url 
---+---+----------------
 5 | c | /postgres_svr/
 5 | b | /postgres_svr/
(2 rows)
</pre>

## Tree Structure
PGSpider can get data from a child PGSpider, which means PGSpider instances can form a tree structure.  
For example, we will create a new PGSpider as the root node, connecting to the PGSpider of the previous example.  
The new root node is the parent of the previous PGSpider node.

### Start new root PGSpider
Create a new database cluster with initdb and change the port number.  
After that, start and connect to the new root node.
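
A minimal sketch of these steps, assuming the new root node uses port 54813 (the port given for the `new_root` server below) and a second cluster directory `~/pgspider_db2`; both values are illustrative.
<pre>
cd /usr/local/pgspider/bin
./initdb -D ~/pgspider_db2
./pg_ctl -D ~/pgspider_db2 -o "-p 54813" start
./createdb -p 54813 pgspider
./psql -p 54813 pgspider
</pre>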

### Load extension
PGSpider (new root node)  
If a child node is a PGSpider, PGSpider uses pgspider_fdw to connect to it.

<pre>
CREATE EXTENSION pgspider_core_fdw;
CREATE EXTENSION pgspider_fdw;
</pre>

### Create server
PGSpider (new root node)
<pre>
CREATE SERVER new_root FOREIGN DATA WRAPPER pgspider_core_fdw OPTIONS (host '127.0.0.1', port '54813');
</pre>

PGSpider (Parent node)
<pre>
CREATE SERVER parent FOREIGN DATA WRAPPER pgspider_fdw OPTIONS (host '127.0.0.1', port '4813');
</pre>

### Create user mapping
PGSpider (new root node)
<pre>
CREATE USER MAPPING FOR CURRENT_USER SERVER new_root OPTIONS(user 'user', password 'pass');
</pre>

PGSpider (Parent node)
<pre>
CREATE USER MAPPING FOR CURRENT_USER SERVER parent OPTIONS(user 'user', password 'pass');
</pre>

### Create Multi-Tenant table
PGSpider (new root node)  
<pre>
CREATE FOREIGN TABLE t1(i int, t text, __spd_url text) SERVER new_root;
</pre>

PGSpider (Parent node)  
<pre>
CREATE FOREIGN TABLE t1__parent__0(i int, t text, __spd_url text) SERVER parent;
</pre>

### Access Multi-Tenant table

<pre>
SELECT * FROM t1;

  i |  t  |      __spd_url 
----+-----+-----------------------
  1 | aaa | /parent/sqlite_svr/
  2 | bbb | /parent/sqlite_svr/
 10 | a   | /parent/postgres_svr/
 11 | b   | /parent/postgres_svr/
(4 rows)
</pre>

### Create/Drop datasource table
Based on the definition of a foreign table, you can create/drop the corresponding table on the remote database.   
  - The query syntax:
    <pre>
    CREATE DATASOURCE TABLE [ IF NOT EXISTS ] table_name;
    DROP DATASOURCE TABLE [ IF EXISTS ] table_name;
    </pre>
  - Parameters:
    - IF NOT EXISTS (in CREATE DATASOURCE TABLE)   
      Do not throw an error if a relation/table with the same name as the datasource table already exists on the remote server. Note that there is no guarantee that the existing datasource table is anything like the one that would have been created.
    - IF EXISTS (in DROP DATASOURCE TABLE)   
      Do not throw an error if the datasource table does not exist.
    - table_name   
      The name (optionally schema-qualified) of the foreign table from which the datasource table to be created or dropped is derived.

  - Examples:
    ```sql
    CREATE FOREIGN TABLE ft1(i int, t text) SERVER postgres_svr OPTIONS (table_name 't1');
    CREATE DATASOURCE TABLE ft1; -- a new datasource table `t1` is created on the remote server
    DROP DATASOURCE TABLE ft1;   -- the datasource table `t1` is dropped on the remote server
    ```
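    The guarded forms follow the same pattern; a short sketch using the same `ft1` foreign table:
    ```sql
    -- No error is raised if the remote table already exists (CREATE) or
    -- is already gone (DROP).
    CREATE DATASOURCE TABLE IF NOT EXISTS ft1;
    DROP DATASOURCE TABLE IF EXISTS ft1;
    ```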

### Migrate table
You can migrate data from source tables to destination tables.   
The source table can be a local table, a foreign table, or a multi-tenant table. The destination table can be a foreign table or a multi-tenant table.

  - The query syntax:
    <pre>
    MIGRATE TABLE source_table
    [REPLACE|TO dest_table OPTIONS (USE_MULTITENANT_SERVER <multitenant_server_name>)]
    SERVER [dest_server OPTIONS ( option 'value' [, ...] ), dest_server OPTIONS ( option 'value' [, ...] ),...]
    </pre>
  - Parameters:
    - source_table   
      The name (optionally schema-qualified) of the source table. The source table can be a local table, a foreign table, or a multi-tenant table.
    - REPLACE (optional)   
      If this option is specified, a destination table must not be specified. The source table is replaced by a new foreign table/multi-tenant table (with the same name as the source table) pointing to a new data source table; the original source table no longer exists.
    - TO (optional)   
      If this option is specified, a destination table must be specified, and its name must differ from the source table's name. After migration, the source table is kept, and a new destination foreign table is created pointing to the new data source table.   
      - dest_table   
        The name (optionally schema-qualified) of the destination table. If the destination table already exists, an error is reported.
        The destination table can be specified with the option `USE_MULTITENANT_SERVER`, in which case a multi-tenant destination table is created with the destination table's name.
    - dest_server   
      The foreign server of the destination. If there are multiple destination servers, or there is a single destination server with the `USE_MULTITENANT_SERVER` option specified, a multi-tenant destination table is created with the destination table's name.
      - OPTIONS ( option 'value' [, ...] )   
        Destination server options; the foreign table is created with these options, and the datasource table is created on the remote server based on them.

  - Examples:
    ```sql
    MIGRATE TABLE t1 SERVER postgres_svr;

    MIGRATE TABLE t1 REPLACE SERVER postgres_svr;

    MIGRATE TABLE t1 REPLACE SERVER postgres_svr, postgres_svr;

    MIGRATE TABLE t1 TO t2 SERVER postgres_svr;

    MIGRATE TABLE t1 TO t2 SERVER postgres_svr, postgres_svr;

    MIGRATE TABLE t1 TO t2 OPTIONS (USE_MULTITENANT_SERVER 'pgspider_core_svr') SERVER postgres_svr;
    ```

#### Data compression transfer
You can migrate data from source tables to destination tables in other data sources via a cloud function.  
A pgspider_fdw server is required to act as a relay server that transmits data to the cloud function.  
The `endpoint` and `relay` options are required to activate this feature.  

#### Current supported datasources:  
- **PostgreSQL**  
- **MySQL**  
- **Oracle**  
- **GridDB**  
- **PGSpider**  
- **InfluxDB** (only supports migrating to InfluxDB v2.0)
- **ObjStorage** (only supports migrating to Amazon S3 and parquet files)
#### Options supported:  
- **relay** as *string*, required  
      Specifies the foreign server of PGSpider FDW that is used to support the Data Compression Transfer feature.  
- **endpoint** as *string*, required  
      Specifies the endpoint address of the cloud service.  
- **socket_port** as *integer*, required, default `4814`  
      Specifies the port number of the socket server.  
- **function_timeout** as *integer*, required, default `900` seconds  
      A socket is opened in the FDW to accept a connection from the Function, send data, and receive a finished notification.  
      If the finished notification from the Function does not arrive before the timeout expires, or no client connects to the server socket,  
      the server socket is closed and an error is shown.
- **batch_size** as *integer*, optional, default `1000`  
      Determines the number of records sent to the Function each time.
- **proxy** as *string*, optional  
      Proxy for cURL requests.  
      If the value is 'no', the use of a proxy is disabled.  
      If the value is not set, cURL uses environment variables.  
- **org** as *string*, optional  
      The organization name of the data store on an InfluxDB v2.0 server.  
      This option is only used when migrating to an InfluxDB v2.0 server.  
      If migrating to an InfluxDB v2.0 server without the org option, an error is raised.  
- **public_host** as *string*, optional  
      The hostname or endpoint of the host server.  
      This option is used only in the Data Compression Transfer feature.  
      If PGSpider is behind NAT, specifying the host helps the relay know the host IP address.  
      **public_host** conflicts with **ifconfig_service**; specifying both options raises an error.  
- **public_port** as *integer*, optional, default equal to **socket_port**  
      The public port of PGSpider.  
      This option is used only in the Data Compression Transfer feature.  
      If PGSpider is behind NAT, specifying the forwarded port helps connections pass through the NAT.  
- **ifconfig_service** as *string*, optional  
      A public service used to look up the host IP (for example: ifconfig.me, ifconfig.co).  
      This option is used only in the Data Compression Transfer feature.  
      If PGSpider is behind NAT, the server can query this external service to get the host IP.  
      **ifconfig_service** conflicts with **public_host**; specifying both options raises an error.

Examples:
  ```sql
  -- Create SERVER
  CREATE SERVER cloudfunc FOREIGN DATA WRAPPER pgspider_fdw OPTIONS (endpoint 'http://cloud.example.com:8080', proxy 'no', batch_size '1000');

  CREATE SERVER postgres FOREIGN DATA WRAPPER postgres_fdw OPTIONS (host 'postgres.example.com', port '5432', dbname 'test');
  CREATE SERVER pgspider FOREIGN DATA WRAPPER pgspider_fdw OPTIONS (host 'pgspider.example.com', port '4813', dbname 'test');
  CREATE SERVER mysql FOREIGN DATA WRAPPER mysql_fdw OPTIONS (host 'mysql.example.com', port '3306');
  CREATE SERVER griddb FOREIGN DATA WRAPPER griddb_fdw OPTIONS (host 'griddb.example.com', port '20002', clustername 'GridDB');
  CREATE SERVER oracle FOREIGN DATA WRAPPER oracle_fdw OPTIONS (dbserver 'oracle.example.com:1521/XE');
  CREATE SERVER influx FOREIGN DATA WRAPPER influxdb_fdw OPTIONS (host 'influxdb.example.com', port '38086', dbname 'test', version '2');
  CREATE SERVER objstorage_with_endpoint FOREIGN DATA WRAPPER objstorage_fdw OPTIONS (endpoint 'http://cloud.example.com:9000', storage_type 's3');
  CREATE SERVER objstorage_with_region FOREIGN DATA WRAPPER objstorage_fdw OPTIONS (region 'us-west-1', storage_type 's3');

  -- MIGRATE NONE
  MIGRATE TABLE ft1 OPTIONS (socket_port '4814', function_timeout '800') SERVER 
          postgres OPTIONS (table_name 'table', relay 'cloudfunc'),
          pgspider OPTIONS (table_name 'table', relay 'cloudfunc'), 
          mysql OPTIONS (dbname 'test', table_name 'table', relay 'cloudfunc'),
          griddb OPTIONS (table_name 'table', relay 'cloudfunc'),
          oracle OPTIONS (table 'table', relay 'cloudfunc'),
          influx OPTIONS (table 'table', relay 'cloudfunc', org 'myorg'),
          objstorage_with_endpoint OPTIONS (filename 'bucket/file1.parquet', format 'parquet'),
          objstorage_with_endpoint OPTIONS (dirname 'bucket', format 'parquet');

  -- MIGRATE TO
  MIGRATE TABLE ft1 TO ft2 OPTIONS (socket_port '4814', function_timeout '800') SERVER 
          postgres OPTIONS (table_name 'table', relay 'cloudfunc'),
          pgspider OPTIONS (table_name 'table', relay 'cloudfunc'), 
          mysql OPTIONS (dbname 'test', table_name 'table', relay 'cloudfunc'),
          griddb OPTIONS (table_name 'table', relay 'cloudfunc'),
          oracle OPTIONS (table 'table', relay 'cloudfunc'),
          influx OPTIONS (table 'table', relay 'cloudfunc', org 'myorg'),
          objstorage_with_region OPTIONS (filename 'bucket/file1.parquet', format 'parquet'),
          objstorage_with_region OPTIONS (dirname 'bucket', format 'parquet');

  -- MIGRATE REPLACE
  MIGRATE TABLE ft1 REPLACE OPTIONS (socket_port '4814', function_timeout '800') SERVER 
          postgres OPTIONS (table_name 'table', relay 'cloudfunc'),
          pgspider OPTIONS (table_name 'table', relay 'cloudfunc'), 
          mysql OPTIONS (dbname 'test', table_name 'table', relay 'cloudfunc'),
          griddb OPTIONS (table_name 'table', relay 'cloudfunc'),
          oracle OPTIONS (table 'table', relay 'cloudfunc'),
          influx OPTIONS (table 'table', relay 'cloudfunc', org 'myorg'),
          objstorage_with_endpoint OPTIONS (filename 'bucket/file1.parquet', format 'parquet'),
          objstorage_with_region OPTIONS (dirname 'bucket', format 'parquet');
  ```

## Note
When a query to foreign tables fails, you can find out why by inspecting the query executed by PGSpider with `EXPLAIN (VERBOSE)`.  
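For example, with the Multi-Tenant table `t1` from the Usage section (a minimal sketch; the predicate is illustrative):
```sql
-- VERBOSE output includes the remote SQL sent to each child node, which
-- usually reveals why a query failed on a data source.
EXPLAIN (VERBOSE) SELECT * FROM t1 WHERE i > 10;
```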
PGSpider has a table option `disable_transaction_feature_check`:  
- When disable_transaction_feature_check is false:  
  All child nodes are checked. If any child node does not support transactions, an error is raised and the modification is stopped.
- When disable_transaction_feature_check is true:  
  The modification proceeds without this check.
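
A minimal sketch of setting this table option on the Multi-Tenant table `t1`, using standard foreign table option syntax:
```sql
-- Skip the child-node transaction support check for modifications on t1.
ALTER FOREIGN TABLE t1 OPTIONS (ADD disable_transaction_feature_check 'true');
```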

## Limitation
Limitations with modification and transactions:
- Sometimes PGSpider cannot read modified data within a transaction.
- It is recommended to execute modification queries (INSERT/UPDATE/DELETE) in auto-commit mode. If not, the warning "Modification query is executing in non-autocommit mode. PGSpider might get inconsistent data." is shown.
- RETURNING, WITH CHECK OPTION, and ON CONFLICT are not supported with modification.
- COPY into, and modification (INSERT/UPDATE/DELETE) of, foreign partitions are not supported.

## Contributing
Opening issues and pull requests are welcome.

## License
Portions Copyright (c) 2018, TOSHIBA CORPORATION

Permission to use, copy, modify, and distribute this software and its documentation for any purpose, without fee, and without a written agreement is hereby granted, provided that the above copyright notice and this paragraph and the following two paragraphs appear in all copies.

See the [`LICENSE`][1] file for full details.

[1]: LICENSE

griddb_fdw's People

Contributors

aanhh, hrkuma, jopoly, kanegoon, khieuvm, lamduongngoc, mkgrgis, mochizk, redsiren204, t-kataym, thongpvn, tunghdt


griddb_fdw's Issues

Setting a text column to a NULL value causes an error

The server connection is terminated when setting a NULL value on a text column.

-- join with nullable side with some columns with null values
UPDATE ft5 SET c3 = null where c1 % 9 = 0;
server closed the connection unexpectedly
    This probably means the server terminated abnormally
    before or while processing the request.
connection to server was lost

The schema of table ft5 is below:

CREATE FOREIGN TABLE ft5 (
	c1 int OPTIONS (rowkey 'true'),
	c2 int NOT NULL,
	c3 text
) SERVER griddb_svr OPTIONS (table_name 'T4');

If NULL values are supposed to be supported, please fix this issue.

Error when using a sub-query with multiple instances of a table

I have a scenario as below:

-- Prepare

-- In GridDB
set_tableInfo(store, "INT8_TBL", &INT8_TBL,
                  3,
                  "id", GS_TYPE_INTEGER, GS_TYPE_OPTION_NOT_NULL,
                  "q1", GS_TYPE_LONG, GS_TYPE_OPTION_NULLABLE,
                  "q2", GS_TYPE_LONG, GS_TYPE_OPTION_NULLABLE);

-- In GridDB FDW
CREATE FOREIGN TABLE int8_tbl(id int4 OPTIONS (rowkey 'true'), q1 int8, q2 int8) SERVER griddb_svr;

INSERT ...

select * from int8_tbl;
 id |        q1        |        q2         
----+------------------+-------------------
  1 |              123 |               456
  2 |              123 |  4567890123456789
  3 | 4567890123456789 |               123
  4 | 4567890123456789 |  4567890123456789
  5 | 4567890123456789 | -4567890123456789
(5 rows)

-- Scenario 1

  • Scenario
-- In GridDB FDW
explain (verbose, costs off)
select * from
  int8_tbl a left join
  lateral (select *, a.q2 as x from int8_tbl b) ss on a.q2 = ss.q1;           <-- a, b are 2 instances of int8_tbl 
                     QUERY PLAN                     
----------------------------------------------------
 Merge Right Join
   Output: a.id, a.q1, a.q2, b.id, b.q1, b.q2, a.q2
   Merge Cond: (b.q1 = a.q2)
   ->  Sort
         Output: b.id, b.q1, b.q2, a.q2
         Sort Key: b.q1
         ->  Foreign Scan on public.int8_tbl b
               Output: b.id, b.q1, b.q2, a.q2                          <-- a.q2 is not defined
               Remote SQL: SELECT  *  FROM int8_tbl
   ->  Sort
         Output: a.id, a.q1, a.q2
         Sort Key: a.q2
         ->  Foreign Scan on public.int8_tbl a
               Output: a.id, a.q1, a.q2
               Remote SQL: SELECT  *  FROM int8_tbl
(15 rows)

select * from
  int8_tbl a left join
  lateral (select *, a.q2 as x from int8_tbl b) ss on a.q2 = ss.q1;

                                                                  Error: b.q2 and x (a.q2) have the same values in the result below:

 id |        q1        |        q2         | id |        q1        |        q2         |         x         
----+------------------+-------------------+----+------------------+-------------------+-------------------
  5 | 4567890123456789 | -4567890123456789 |    |                  |                   |                  
  3 | 4567890123456789 |               123 |  1 |              123 |               456 |               456
  3 | 4567890123456789 |               123 |  2 |              123 |  4567890123456789 |  4567890123456789
  1 |              123 |               456 |    |                  |                   |                  
  2 |              123 |  4567890123456789 |  3 | 4567890123456789 |               123 |               123
  4 | 4567890123456789 |  4567890123456789 |  3 | 4567890123456789 |               123 |               123
  2 |              123 |  4567890123456789 |  4 | 4567890123456789 |  4567890123456789 |  4567890123456789
  4 | 4567890123456789 |  4567890123456789 |  4 | 4567890123456789 |  4567890123456789 |  4567890123456789
  2 |              123 |  4567890123456789 |  5 | 4567890123456789 | -4567890123456789 | -4567890123456789
  4 | 4567890123456789 |  4567890123456789 |  5 | 4567890123456789 | -4567890123456789 | -4567890123456789
(10 rows)
  • Expected result:
select * from
  int8_tbl a left join
  lateral (select *, a.q2 as x from int8_tbl b) ss on a.q2 = ss.q1;

 id |        q1        |        q2         | id |        q1        |        q2         |         x         
----+------------------+-------------------+----+------------------+-------------------+-------------------
  5 | 4567890123456789 | -4567890123456789 |    |                  |                   |                  
  3 | 4567890123456789 |               123 |  1 |              123 |               456 |               123
  3 | 4567890123456789 |               123 |  2 |              123 |  4567890123456789 |               123
  1 |              123 |               456 |    |                  |                   |                  
  2 |              123 |  4567890123456789 |  3 | 4567890123456789 |               123 |  4567890123456789
  4 | 4567890123456789 |  4567890123456789 |  3 | 4567890123456789 |               123 |  4567890123456789
  2 |              123 |  4567890123456789 |  4 | 4567890123456789 |  4567890123456789 |  4567890123456789
  4 | 4567890123456789 |  4567890123456789 |  4 | 4567890123456789 |  4567890123456789 |  4567890123456789
  2 |              123 |  4567890123456789 |  5 | 4567890123456789 | -4567890123456789 |  4567890123456789
  4 | 4567890123456789 |  4567890123456789 |  5 | 4567890123456789 | -4567890123456789 |  4567890123456789
(10 rows)

-- Scenario 2

  • Scenario
-- In GridDB FDW
-- lateral reference in a PlaceHolderVar evaluated at join level
explain (verbose, costs off)
select * from
  int8_tbl a left join lateral
  (select b.q1 as bq1, c.q1 as cq1, least(a.q1,b.q1,c.q1) from               <-- a.q1 not found
   int8_tbl b cross join int8_tbl c) ss
  on a.q2 = ss.bq1;
ERROR:  variable not found in subplan target lists
select * from
  int8_tbl a left join lateral
  (select b.q1 as bq1, c.q1 as cq1, least(a.q1,b.q1,c.q1) from
   int8_tbl b cross join int8_tbl c) ss
  on a.q2 = ss.bq1;
ERROR:  variable not found in subplan target lists
  • If I remove a.q1 from this query, the command can run successfully.

I think there is something wrong with the query plan.

  • In Scenario 1, when a.q2 is not found, a.q2 is converted to b.q2 (a and b are two instances of int8_tbl).
  • In Scenario 2, when a.q1 is not found, the error is returned.

I think the query plan could be modified to get data from int8_tbl only once for all instances (a, b, and c).
Could you please help me to resolve this issue?

PANIC occurs with SAVEPOINT

I got this error regarding the SAVEPOINT command.

BEGIN;
DECLARE c CURSOR FOR SELECT * FROM ft1 ORDER BY c1;
FETCH c;
 c1 | c2 |  c3   |            c4            |            c5            | c6 | c7 | c8  
----+----+-------+--------------------------+--------------------------+----+----+-----
  1 |  1 | 00001 | Fri Jan 02 00:00:00 1970 | Fri Jan 02 00:00:00 1970 | 1  | 1  | foo
(1 row)

SAVEPOINT s;
WARNING:  AbortSubTransaction while in ABORT state
WARNING:  AbortSubTransaction while in ABORT state
ERROR:  Subtransaction is not supported. It is handled as Main transaction.
ERROR:  Subtransaction is not supported. It is handled as Main transaction.
ERROR:  Subtransaction is not supported. It is handled as Main transaction.
ERROR:  Subtransaction is not supported. It is handled as Main transaction.
PANIC:  ERRORDATA_STACK_SIZE exceeded
server closed the connection unexpectedly
    This probably means the server terminated abnormally
    before or while processing the request.
connection to server was lost

Even if SAVEPOINT is not supported, a PANIC error should not occur. Could you please investigate and fix this issue?

Comparing two strings in GridDB FDW

I have a scenario as below:

-- Prepare
CREATE FOREIGN TABLE onek2 (
  unique1   int4,
  unique2   int4,
  two     int4,
  four    int4,
  ten     int4,
  twenty    int4,
  hundred   int4,
  thousand  int4,
  twothousand int4,
  fivethous int4,
  tenthous  int4,
  odd     int4,
  even    int4,
  stringu1  text,
  stringu2  text,
  string4   text
) SERVER griddb_svr;


-- Scenario
explain (costs off)
select unique2 from onek2 where unique2 = 11 and stringu1 = 'ATAAAA';
      QUERY PLAN       
-----------------------
 Foreign Scan on onek2
(1 row)

select unique2 from onek2 where unique2 = 11 and stringu1 = 'ATAAAA';
 unique2 
---------
      11
(1 row)

explain (costs off)
select * from onek2 where unique2 = 11 and stringu1 < 'B';
      QUERY PLAN       
-----------------------
 Foreign Scan on onek2
(1 row)

select * from onek2 where unique2 = 11 and stringu1 < 'B';
ERROR:  GridDB-API is failed by 150018 at griddb_fdw.c: 2512
  Binary operation is not defined for the types STRING and STRING

explain (costs off)
select unique2 from onek2 where unique2 = 11 and stringu1 < 'B';
      QUERY PLAN       
-----------------------
 Foreign Scan on onek2
(1 row)

select unique2 from onek2 where unique2 = 11 and stringu1 < 'B';
ERROR:  GridDB-API is failed by 150018 at griddb_fdw.c: 2512
  Binary operation is not defined for the types STRING and STRING

I see that:

  • I can use the "=" operator to compare two strings.
  • I cannot use the "<" and ">" operators to compare two strings.

Could you please help me to resolve this issue?

griddb_fdw error when creating a foreign table

I cannot get a working foreign table. The issue occurs as shown below:

-- ===================================================================
-- create foreign tables
-- ===================================================================
CREATE FOREIGN TABLE ft1 (
    c0 int,
    "C 1" int NOT NULL,
    c2 int NOT NULL,
    c3 text,
    c4 timestamptz,
    c5 timestamp,
    c6 varchar(10),
    c7 char(10) default 'ft1',
    c8 text
) SERVER griddb_srv;
ALTER FOREIGN TABLE ft1 DROP COLUMN c0;
ALTER FOREIGN TABLE ft1 OPTIONS (table_name 'T 1');
\det+
                    List of foreign tables
 Schema | Table |  Server  |    FDW options     | Description 
--------+-------+----------+--------------------+-------------
 public | ft1   | loopback | (table_name 'T 1') | 
(1 row)

SELECT * FROM ft1;
ERROR:  No such container: ft1
server closed the connection unexpectedly
    This probably means the server terminated abnormally
    before or while processing the request.
connection to server was lost

I think griddb_fdw cannot link the table "T 1" to the foreign table "ft1".

Could you please fix this issue?

griddb_fdw test error using "make check" on PostgreSQL 11.0

The regression test cannot be completed using the "make check" command. The issue occurs as shown below:

SELECT * FROM department d, employee e WHERE d.department_id = e.emp_dept_id LIMIT 10;                                                                   
server closed the connection unexpectedly
        This probably means the server terminated abnormally
        before or while processing the request.
connection to server was lost

The SQL query that causes the error is SELECT * FROM department d, employee e WHERE d.department_id = e.emp_dept_id LIMIT 10;
Following the log of the PostgreSQL server, I found the error in griddb_fdw.c, line 857:

TRAP: FailedAssertion("!(fsstate->cursor == fsstate->num_tuples)", File: "griddb_fdw.c", Line: 857)
2018-12-26 15:45:56.052 +07 [5763] LOG:  server process (PID 5781) was terminated by signal 6: Aborted
2018-12-26 15:45:56.052 +07 [5763] DETAIL:  Failed process was running: SELECT * FROM department d, employee e WHERE d.department_id = e.emp_dept_id LIMIT 10;

The log of the PostgreSQL server is attached: postmaster.log

Hash join code support issue

I have a scenario as below:

-- Prepare

set local min_parallel_table_scan_size = 0;
set local parallel_setup_cost = 0;
-- Extract bucket and batch counts from an explain analyze plan.  In
-- general we can't make assertions about how many batches (or
-- buckets) will be required because it can vary, but we can in some
-- special cases and we can check for growth.
create or replace function find_hash(node json)
returns json language plpgsql
as
$$
declare
  x json;
  child json;
begin
  if node->>'Node Type' = 'Hash' then
    return node;
  else
    for child in select json_array_elements(node->'Plans')
    loop
      x := find_hash(child);
      if x is not null then
        return x;
      end if;
    end loop;
    return null;
  end if;
end;
$$;
create or replace function hash_join_batches(query text)
returns table (original int, final int) language plpgsql
as
$$
declare
  whole_plan json;
  hash_node json;
begin
  for whole_plan in
    execute 'explain (analyze, format ''json'') ' || query
  loop
    hash_node := find_hash(json_extract_path(whole_plan, '0', 'Plan'));
    original := hash_node->>'Original Hash Batches';
    final := hash_node->>'Hash Batches';
    return next;
  end loop;
end;
$$;

-- Test with local table

create table simple as
  select generate_series(1, 20000) AS id, 'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa';
select original > 1 as initially_multibatch, final > original as increased_batches
  from hash_join_batches(
$$
  select count(*) from simple r join simple s using (id);
$$);
 initially_multibatch | increased_batches 
----------------------+-------------------
 f                    | f
(1 row)

-- Test with foreign table

-- In GridDB
set_tableInfo(store, "simple", &simple,
                  2,
                  "id", GS_TYPE_INTEGER, GS_TYPE_OPTION_NOT_NULL,
                  "t", GS_TYPE_STRING, GS_TYPE_OPTION_NULLABLE);

-- In GridDB FDW
create foreign table simple (id int options (rowkey 'true'), t text) server griddb_svr;
insert into simple select generate_series(1, 20000) AS id, 'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa';
select original > 1 as initially_multibatch, final > original as increased_batches
  from hash_join_batches(
$$
  select count(*) from simple r join simple s using (id);
$$);
 initially_multibatch | increased_batches 
----------------------+-------------------
                      | 
(1 row)

I found that GridDB FDW does not operate correctly here.
Could you please help me fix it?

UPDATE with an inherited target and an inherited source table fails

This error occurs when I execute UPDATE with an inherited target and an inherited source table.
The scenario is as follows:

create table foo (f1 int, f2 int);
create foreign table foo2 (f3 int) inherits (foo)
  server griddb_svr options (table_name 'loct1');
create table bar (f1 int, f2 int);
create foreign table bar2 (f3 int) inherits (bar)
  server griddb_svr options (table_name 'loct2');
alter table foo set (autovacuum_enabled = 'false');
alter table bar set (autovacuum_enabled = 'false');
alter foreign table foo2 alter column f1 options (rowkey 'true');
alter foreign table bar2 alter column f1 options (rowkey 'true');
insert into foo values(1,1);
insert into foo values(3,3);
insert into foo2 values(2,2,2);
insert into foo2 values(4,4,4);
insert into bar values(1,11);
insert into bar values(2,22);
insert into bar values(6,66);
insert into bar2 values(3,33,33);
insert into bar2 values(4,44,44);
insert into bar2 values(7,77,77);
update bar set f2 = f2 + 100
from
  ( select f1 from foo union all select f1+3 from foo ) ss
where bar.f1 = ss.f1;
WARNING:  Fetched rowkey is not same as expected
ERROR:  GridDB-API is failed by 140037 at griddb_fdw.c: 2754

Maybe the cursor is pointing at the wrong target. I investigated griddb_fdw.c and suspect that.
Could you please reproduce this issue and fix it?

COPY FROM causes a crash

COPY FROM does not work now, and it causes a crash:

create foreign table rem2 (f1 int, f2 text) server griddb_svr options(table_name 'loc2');
copy rem2 from stdin;
server closed the connection unexpectedly
    This probably means the server terminated abnormally
    before or while processing the request.
select * from rem2;
no connection to the server
connection to server was lost

Could you please confirm whether this feature is supported or not?
Either way, I think the crash should not occur.

Inconsistent behavior between SELECT targets and WHERE conditions on system columns

I tried to SELECT some system columns, tableoid and ctid. The behavior is inconsistent.
These values can be obtained by a SELECT command:

SELECT tableoid::regclass, * FROM ft1 t1 LIMIT 1;
 tableoid | c1 | c2 |  c3   |            c4            |            c5            | c6 | c7 | c8  
----------+----+----+-------+--------------------------+--------------------------+----+----+-----
 ft1      |  1 |  1 | 00001 | Fri Jan 02 00:00:00 1970 | Fri Jan 02 00:00:00 1970 | 1  | 1  | foo
(1 row)

, but not by a SELECT with a WHERE condition:

EXPLAIN (VERBOSE, COSTS OFF)
SELECT * FROM ft1 t1 WHERE t1.tableoid = 'pg_class'::regclass LIMIT 1;
                             QUERY PLAN                             
--------------------------------------------------------------------
 Limit
   Output: c1, c2, c3, c4, c5, c6, c7, c8
   ->  Foreign Scan on public.ft1 t1
         Output: c1, c2, c3, c4, c5, c6, c7, c8
         Remote SQL: SELECT  *  FROM "T1" WHERE ((tableoid = 1259))
(5 rows)

SELECT * FROM ft1 t1 WHERE t1.tableoid = 'ft1'::regclass LIMIT 1;
ERROR:  GridDB-API is failed by 150012 at griddb_fdw.c: 2276
  No such column tableoid

GridDB FDW tries to push down system columns in the WHERE condition. I think GridDB FDW needs to be fixed so that it does not push down WHERE conditions on system columns.

The partitioning feature causes a crash

Regarding the table partitioning feature, I got a crash with the following scenario:

create table itrtest (a int, b text) partition by list (a);
create foreign table remp1 (a int, b text) server griddb_svr options (table_name 'locp1');
alter table itrtest attach partition remp1 for values in (1);
insert into itrtest values (1, 'foo');
server closed the connection unexpectedly
    This probably means the server terminated abnormally
    before or while processing the request.
connection to server was lost

Does GridDB FDW support the table partitioning feature?
In any case, the server should not crash. Could you please fix this issue?

Request to improve aggregation push-down

GridDB has the same SQL aggregate functions as PostgreSQL, such as SUM(), COUNT(), and so on. But I cannot push down SQL aggregate functions.
Could you please improve aggregation push-down?
