
postgres-x2 / postgres-x2


Master Postgres-XC source repository

License: Other


postgres-x2's People

Contributors

adunstan, alvherre, amitdkhan, ashutosh-bapat, bmomjian, darcyjcain, dbdbdb, feodor, gabbasb, galylee, grunthos, gsstark, hlinnaka, itgacky, jaiminpan, jconway, kgrittn, koichi-szk, masaofujii, mhagander, nmisch, pavanvd, petere, robertmhaas, scrappy, simonat2ndquadrant, tatsuo-ishii, tglsfdc, vadim4o, xuegang


postgres-x2's Issues

Use GetRelationNodes for CMD_INSERT in create_remotedml_plan

This was submitted by Abbas Butt, 2013-02-22.

See sourceforge bug tracker for details.

Currently the function create_remotedml_plan special-cases CMD_INSERT during planning and does not use GetRelationNodes. There should be a central place in the code that provides exec_nodes.

If we do not special-case CMD_INSERT, we get the following diffs in inherit.sql and plpgsql.sql:

--- src/test/regress/results/inherit.out 2013-02-22 11:54:32.968582044 +0500
+++ /home/edb/Desktop/QS/latest_code/patch_changed/regress/results/inherit.out 2013-02-22 11:04:48.000000000 +0500
@@ -1254,10 +1254,7 @@
select * from patest0 join (select f1 from int4_tbl limit 1) ss on id = f1;
id | x | f1
----+---+----
- 0 | 0 | 0
- 0 | 0 | 0
- 0 | 0 | 0
-(3 rows)
+(0 rows)
drop index patest2i;
explain (costs off, num_nodes off, nodes off)
--- src/test/regress/results/plpgsql.out 2013-02-22 11:56:11.701418954 +0500
+++ /home/edb/Desktop/QS/latest_code/patch_changed/regress/results/plpgsql.out 2013-02-22 11:06:27.000000000 +0500
@@ -2161,7 +2161,11 @@
select trap_foreign_key(2); -- detects FK violation
NOTICE: caught foreign_key_violation
-ERROR: failed to send COMMIT command to node
+ trap_foreign_key
+------------------
+ 0
+(1 row)
+
begin;
set constraints all deferred;
select trap_foreign_key(2); -- should not detect FK violation

Leaking connections

This was submitted by knienart, 2013-02-14.

See sourceforge bug tracker for details.

After a number of connect/transaction/disconnect cycles using JDBC, the coordinator accumulates queries and non-existent connections in pg_stat_activity, like:

..
application_name | pgxc
client_addr | ::1
client_hostname |
client_port | 52413
backend_start | 2013-02-14 13:16:28.183573+01
xact_start |
query_start | 2013-02-14 13:20:41.063212+01
state_change | 2013-02-14 13:20:41.06787+01
waiting | f
state | idle
query | SET SESSION AUTHORIZATION DEFAULT;RESET ALL;

The query is clearly executed from agent_reset_session(PoolAgent *agent) in poolmgr.c.
Transactions complete successfully, but it seems some connections are never returned to the pool, so after a number of cycles the system refuses new connections.
The tests are too complicated to post here; a hint could be that SET SCHEMA is called for each of the new connections (remember there was some bug related to SET).

TRANSACTION ISOLATION LEVEL SERIALIZABLE does not work.

Original Bug# 264, by Abbas Butt, 2011-12-20

See sourceforge bug tracker for details.

The test case involved here is portals.sql

The test case is

BEGIN;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
CREATE TABLE cursor (a int, b int) distribute by hash(b);
INSERT INTO cursor VALUES (1);
DECLARE c1 NO SCROLL CURSOR FOR SELECT * FROM cursor FOR UPDATE;
UPDATE cursor SET a = 2;
FETCH ALL FROM c1;
COMMIT;
The expected output from the FETCH is:
 a
---
(0 rows)

The obtained result is:
 a | b
---+---
 2 |
(1 row)

CTAS output with psql

This was submitted by Michael Paquier, 2012-04-27.

See sourceforge bug tracker for details.

==============================================
#### Postgres-XC
postgres=# SELECT version();
version
-------------------------------------------------------------------------------------------------------------------------------------------------
Postgres-XC 1.0beta1 on x86_64-unknown-linux-gnu, based on PostgreSQL 9.1beta2, compiled by gcc (GCC) 4.1.2 20080704 (Red Hat 4.1.2-52), 64-bit
(1 row)
postgres=# CREATE TABLE test(test int);
CREATE TABLE
postgres=# INSERT INTO test VALUES (1);
INSERT 0 1
postgres=# CREATE TABLE test2 as select * from test;
INSERT 0 1
==============================================
#### PostgreSQL
postgres=# SELECT version();
version
---------------------------------------------------------------------------------------------------------------
PostgreSQL 9.1.1 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.1.2 20080704 (Red Hat 4.1.2-52), 64-bit
(1 row)
postgres=# CREATE TABLE test(test int);
CREATE TABLE
postgres=# INSERT INTO test VALUES (1);
INSERT 0 1
postgres=# CREATE TABLE test2 as select * from test;
SELECT 1

generate_series and FQS

This was submitted by Koichi Suzuki, 2013-01-22.

See sourceforge bug tracker for details.

FQS does not seem to handle generate_series() properly in either REL1_0_STABLE or master (as of Jan 22nd, 2013).

The following are the tests and their results:

REL1_0_STABLE ================================================

PGXC$ Psql
psql (PGXC 1.0.1, based on PG 9.1.7)
Type "help" for help.
koichi=# create table t1 (a int);
CREATE TABLE
koichi=# insert into t1 (a) values (generate_series(1,25));
ERROR: set-valued function called in context that cannot accept a set
koichi=# set enable_fast_query_shipping=false;
SET
koichi=# insert into t1 (a) values (generate_series(1,25));
INSERT 0 25
koichi=#
master (as of Jan.22nd, 2013) ====================================
PGXC$ Psql 
psql (PGXC 1.1devel, based on PG 9.2beta2)
Type "help" for help.
koichi=# create table t1 ( a integer );
CREATE TABLE
koichi=# insert into t1 (a) values ( generate_series(1,25) );
ERROR: set-valued function called in context that cannot accept a set
koichi=# set enable_fast_query_shipping=false;
SET
koichi=# insert into t1 (a) values ( generate_series(1,25) );
INSERT 0 50
koichi=#

Cursor CURRENT OF for replicated tables

CURRENT OF does not seem to work for UPDATE and DELETE operations.
For replicated tables:

template1=# create table aa (a int, b int);
CREATE TABLE
template1=# insert into aa values (1,1),(1,2),(1,4),(4,8),(2,5);
INSERT 0 4
template1=# begin;
BEGIN
template1=# declare foo cursor for select * from aa_rep for update;
DECLARE CURSOR
template1=# fetch forward from foo;
a | b 
---+---
1 | 1
(1 row)
template1=# fetch forward from foo;
a | b 
---+---
1 | 2
(1 row)
template1=# delete from aa_rep where current of foo;
DELETE 0
template1=# fetch forward from foo;
a | b 
---+---
1 | 4
(1 row)
template1=# delete from aa_rep where current of foo;
DELETE 0
template1=# commit;
COMMIT
template1=# select * from aa;
a | b 
---+---
1 | 1
1 | 2
1 | 4
2 | 5
(4 rows)

It works OK for distributed tables.

XC error in prepare after iteration in JDBC

This was submitted by Koichi Suzuki, 2013-02-12.

See sourceforge bug tracker for details.

When a prepared statement is iterated, the XC application gets the error: ERROR: prepared statement "xxxx" already exists.

The problem could not be reproduced from psql, so materials are attached to reproduce it from JDBC.

Typical output from the test script is as follows:

------8<--------------------------8<------------
[koichi@linker:test124]$ test124.sh node01 20004 koichi
java version "1.6.0_24"
OpenJDK Runtime Environment (IcedTea6 1.11.6) (rhel-1.33.1.11.6.el5_9-x86_64)
OpenJDK 64-Bit Server VM (build 20.0-b12, mixed mode)
javac 1.6.0_24
JDBC DRIVER = postgresql-9.1-902.jdbc4.jar
DROP DATABASE
CREATE DATABASE
CREATE TABLE
CREATE TABLE
start...[ c3 = 1 ]
loop = 0 ...complete
loop = 1 ...complete
loop = 2 ...complete
loop = 3 ...complete
loop = 4 ...complete
loop = 5 ...complete
loop = 6 ...complete
loop = 7 ...complete
loop = 8 ...complete
loop = 9 ...complete
complete [ c3 = 1 ]
start...[ c3 = 2 ]
loop = 0 ...complete
loop = 1 ...complete
loop = 2 ...complete
loop = 3 ...complete
loop = 4 ...
[ERROR] catch SQLException
org.postgresql.util.PSQLException: ERROR: prepared statement "S_4" already exists
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2101)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1834)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:255)
at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:510)
at org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:386)
at org.postgresql.jdbc2.AbstractJdbc2Statement.executeUpdate(AbstractJdbc2Statement.java:332)
at Test124.main(Test124.java:99)
[koichi@linker:test124]$ 
------>8-------------------------->8------------

Multiple GTM standbys -- no error log

This was submitted by Koichi Suzuki, 2013-01-31.

See sourceforge bug tracker for details.

It's not a beta blocker, but it would be nice to have in 1.1.

Nikhil's suggestion:
In ProcessPGXCNodeRegister(), when another standby makes a connection, we can check whether a standby is already registered via find_standby_node_info. We can additionally check whether that standby is still reachable via PQconnectGTM(), and only if the existing standby appears to be having issues should we accept the new one. Otherwise we can reject the new standby.

plpgsql: failed to send COMMIT command to node

This was submitted by Michael Paquier, 2012-05-11.

See sourceforge bug tracker for details.

create temp table master(f1 int primary key);
create temp table slave(f1 int references master deferrable);
insert into master values(1);
insert into slave values(1);
insert into slave values(2); -- fails?
create function trap_foreign_key(int) returns int as $$
begin
begin
insert into slave values($1);
exception
when foreign_key_violation then
raise notice 'caught foreign_key_violation';
return 0;
end;
return 1;
end$$ language plpgsql;
select trap_foreign_key(1);
select trap_foreign_key(2); -- Error here
-- NOTICE: caught foreign_key_violation
-- ERROR: failed to send COMMIT command to node

This happens after catching an exception in a plpgsql function.

Strangely, it is easy to reproduce with psql but does not appear consistently in the regression tests.

SQL functions containing utility statements do not work

This was submitted by Abbas Butt, 2013-02-23.

See sourceforge bug tracker for details.

Consider the output of the test case when run in vanilla PG:

test=# create table my_tab1 (a int);
CREATE TABLE
test=# insert into my_tab1 values(1);
INSERT 0 1
test=# SET check_function_bodies = false;
SET
test=# create function f1 () returns setof my_tab1 as $$ create table my_tab2 (a int); select * from my_tab1; $$ language sql;
CREATE FUNCTION
test=# select f1();
f1
-----
(1)
(1 row)
test=# SET check_function_bodies = true;
SET
test=# drop function f1();
DROP FUNCTION
test=# drop table my_tab1;
DROP TABLE
test=# drop table my_tab2;
DROP TABLE
The same test case, when run in XC, produces errors:
test=# create table my_tab1 (a int);
CREATE TABLE
test=# insert into my_tab1 values(1);
INSERT 0 1
test=# SET check_function_bodies = false;
SET
test=# create function f1 () returns setof my_tab1 as $$ create table my_tab2 (a int); select * from my_tab1; $$ language sql;
CREATE FUNCTION
test=# select f1();
ERROR: Unexpected response from the Datanodes for 'T' message, current request type 1
CONTEXT: SQL function "f1" statement 1
test=# SET check_function_bodies = true;
SET
test=# drop function f1();
DROP FUNCTION
test=# drop table my_tab1;
DROP TABLE
test=# drop table my_tab2;
ERROR: table "my_tab2" does not exist

Successive UPDATEs using WHERE CURRENT OF does not work

Original Bug# 259, by Abbas Butt, 2013-06-12

See sourceforge bug tracker for details.

Successive UPDATEs using WHERE CURRENT OF, or a DELETE followed by an UPDATE, do not work because the first UPDATE changes the ctid.
This limitation could be solved using UPDATE/DELETE RETURNING ctid; however, RETURNING is not yet supported in XC.
The test case involved here is portals.sql.
A number of test cases in portals.sql fail because of this limitation.

GROUP BY using primary column with other columns in SELECT

This was submitted by Ashutosh Bapat, 2013-02-12.

See sourceforge bug tracker for details.

CREATE TEMP TABLE products (product_id int, name text, price numeric);
CREATE TEMP TABLE sales (product_id int, units int);
ALTER TABLE products ADD PRIMARY KEY (product_id);
SELECT product_id, p.name, (sum(s.units) * p.price) AS sales FROM products p LEFT JOIN sales s USING (product_id) GROUP BY product_id;
ERROR: column "l.a_2" must appear in the GROUP BY clause or be used in an aggregate function

With the following commit

commit e49ae8d
Author: Tom Lane [email protected]
Date: Sat Aug 7 02:44:09 2010 +0000

we allow other columns from the table with the primary key in the SELECT list even when they are not part of an aggregate or the GROUP BY clause. (This is fine, since having the primary key in GROUP BY implies that there can be only one value for those columns.) But it seems PG cannot decipher the same when the primary key column is buried deeper in subqueries.

Handling of processed tuple counting in DMLs

This was submitted by Abbas Butt, 2013-01-21.

See sourceforge bug tracker for details.

Consider the case of a non-FQSed INSERT, for example.

The executor keeps track of the number of tuples processed in the es_processed member of the EState structure. When a non-FQSed INSERT completes, this member is incremented once due to

estate->es_processed += row count

in HandleCommandComplete and once due to

(estate->es_processed)++

in ExecInsert.
The result is that although only one row is inserted, we get a message as if two rows were inserted: INSERT 0 2.

Now consider the same INSERT when it is FQSed. In this case the number of tuples processed is incremented just once, in HandleCommandComplete, since ExecInsert is never called, and hence we get the correct output, i.e. INSERT 0 1.
To handle this error in processed-tuple counting, we currently use a variable non_fqs_dml, which indicates whether a DML is FQSed or not.

We can improve this handling by using the following mechanism:

HandleCommandComplete should set the number of processed rows in some member of RemoteQueryState. At the time of executing the RemoteQueryState, the executor can decide whether to pick up that member and store it in es_processed, or just use es_processed.

GTM_Standby crash

This was submitted by Koichi Suzuki, 2012-07-14.

See sourceforge bug tracker for details.

The GTM standby crashes during normal operation. The cluster consists of two GTM proxies, two datanodes, and two coordinators. The operation is as follows:

  1. Initialize each coordinator/datanode with initdb,
  2. Edit the configuration files (gtm, gtm_standby, gtm_proxy, coordinators, datanodes) manually (in fact, using a dedicated script),
  3. Start all the GTMs,
  4. Create the nodes and create a database,
  5. psql -c 'select 1'

The backtrace looks like:

(gdb) bt
#0 0x0000000000409645 in GTM_BkupBeginTransactionGetGXIDMulti (
coord_name=0x434f41 "", txn=0x7fcd4e914bc0, gxid=0x7fcd4e910bc0, 
isolevel=0x7fcd4e90cbc0, readonly=0x7fcd4e91abd0 "", 
connid=0x7fcd4e918bc0, txn_count=2) at gtm_txn.c:1230
#1 0x000000000040a3f3 in ProcessBkupBeginTransactionGetGXIDCommandMulti (
myport=0x86ce70, message=0x7fcd4e91bd70) at gtm_txn.c:1539
#2 0x0000000000405ee0 in ProcessTransactionCommand (myport=0x86ce70, 
mtype=MSG_BKUP_TXN_BEGIN_GETGXID_MULTI, message=0x7fcd4e91bd70)
at main.c:1487
#3 0x0000000000405966 in ProcessCommand (myport=0x86ce70, 
input_message=0x7fcd4e91bd70) at main.c:1238
#4 0x0000000000405748 in GTM_ThreadMain (argp=0x871020) at main.c:1112
#5 0x00000000004074f2 in GTM_ThreadMainWrapper (argp=0x871020)
at gtm_thread.c:343
#6 0x00007fcd5016c9ca in start_thread (arg=<value optimized out>)
at pthread_create.c:300
#7 0x00007fcd4fec9cdd in clone ()
at ../sysdeps/unix/sysv/linux/x86_64/clone.S:112
#8 0x0000000000000000 in ?? ()
(gdb)

Server crash in case of WHERE CURRENT OF

Steps:
1. Create the table:
CREATE TABLE uctest(f1 int, f2 text);
2. Insert data:
INSERT INTO uctest VALUES (1, 'one');
INSERT INTO uctest VALUES (2, 'two');
INSERT INTO uctest VALUES (3, 'three');
3. Open a cursor:
BEGIN;
DECLARE c1 CURSOR FOR SELECT * FROM uctest ORDER BY f1;
FETCH 2 FROM c1;
DELETE FROM uctest WHERE CURRENT OF c1;
4. Executed result:
server closed
5. Core log:
Core was generated by `postgres: shench postgres [local] DELETE '.
Program terminated with signal 11, Segmentation fault.
[New process 23656]
#0 0x000000000056155f in pgxc_node_send_gxid (handle=0xaebed00, gxid=1749) at pgxcnode.c:1228
1228 handle->outBuffer[handle->outEnd++] = 'g';
(gdb) bt
#0 0x000000000056155f in pgxc_node_send_gxid (handle=0xaebed00, gxid=1749) at pgxcnode.c:1228
#1 0x0000000000565b2f in pgxc_node_begin (conn_count=1, connections=0x7fffc73b0ae0, gxid=1749) at execRemote.c:1452
#2 0x0000000000567bb0 in ExecRemoteQuery (node=0xafe5790) at execRemote.c:2976
#3 0x0000000000537bda in ExecProcNode (node=0xafe5790) at execProcnode.c:465
#4 0x0000000000536962 in ExecutePlan (estate=0xafe51e0, planstate=0xafe5790, operation=CMD_DELETE, numberTuples=0, 
direction=NoMovementScanDirection, dest=0xaf789b8) at execMain.c:1520
#5 0x000000000053764c in standard_ExecutorRun (queryDesc=0xafba2d0, direction=ForwardScanDirection, count=0) at execMain.c:312
#6 0x00000000005ed411 in ProcessQuery (plan=0xaf78690, sourceText=0xaf77440 "DELETE FROM uctest WHERE CURRENT OF c1;", 
params=<value optimized out>, dest=0xaf789b8, completionTag=0x7fffc73b1360 "") at pquery.c:199
#7 0x00000000005ed619 in PortalRunMulti (portal=0xafe31d0, isTopLevel=1 '\001', dest=0xaf789b8, altdest=0xaf789b8, 
completionTag=0x7fffc73b1360 "") at pquery.c:1287
#8 0x00000000005eddd5 in PortalRun (portal=0xafe31d0, count=9223372036854775807, isTopLevel=1 '\001', dest=0xaf789b8, 
altdest=0xaf789b8, completionTag=0x7fffc73b1360 "") at pquery.c:837
#9 0x00000000005e9181 in exec_simple_query (query_string=0xaf77440 "DELETE FROM uctest WHERE CURRENT OF c1;")
at postgres.c:1053
#10 0x00000000005ea796 in PostgresMain (argc=4, argv=<value optimized out>, username=0xaeba220 "shench") at postgres.c:3766
#11 0x00000000005c07ac in ServerLoop () at postmaster.c:3607
#12 0x00000000005c29fc in PostmasterMain (argc=7, argv=0xaeb76f0) at postmaster.c:1098
#13 0x000000000056d5ce in main (argc=7, argv=<value optimized out>) at main.c:188
6. Expected result:
ERROR: cursor "c1" is not a simply updatable scan of table "uctest"
This bug was found by pg_regress.

Avoid sending un-necessary describe message to datanodes

This was submitted by Abbas Butt, 2013-01-09.

See sourceforge bug tracker for details.

Consider the following two test cases:

CASE A:

create table t1(f1 int);
create or replace function cache_test(int) returns void as $$
begin
insert into t1 values($1);
end
$$ language plpgsql;
select cache_test(23);

Here the targetlist for the INSERT inside the function is not null, but we do not need to send a Describe message. Even if we do send a Describe message the system works, but it is an additional unnecessary message sent to the datanodes.

CASE B:

create table t2(a int, b int);
INSERT INTO t2 VALUES(1,2);
create or replace function upd_test(int) returns void as $$
begin
update t2 set b=$1;
end
$$ language plpgsql;
select upd_test(23);

This is very similar to CASE A: here again the targetlist for the UPDATE inside the function is not null, but we do not need to send a Describe message.

Named prepared statement remain in pooler

This was submitted by Koichi Suzuki, 2013-01-23.

See sourceforge bug tracker for details.

See the following result. A named prepared statement remains in the pooler even after the connection from the application to a coordinator is disconnected.

--------8<-------- prepare-error2.sql -----------8<----------
-------------------
-- 0. drop table
-------------------
drop table if exists t1;
-------------------
-- 1. create table
-------------------
create table t1 (
c1 varchar(10)
,c2 varchar(10)
,c3 varchar(20)
,c4 varchar(4)
,constraint t1_pk primary key
(c1,c2,c3)
)
distribute by hash(C2);
-------------------
-- 2. insert
-------------------
insert into t1 values ('A1','111','111','0090');
-------------------
-- 3. prepare
-------------------
prepare foo (varchar, varchar, varchar, varchar) as
update t1 set c4 =
(select case when c4='0001' then '9999' else ltrim(to_char((cast(c4 as integer)-1),'0000')) end as c4 from t1 where c2 = $1 and c3 = $2)
where c2 = $3 and c3 = $4;
-------------------
-- 3. execute
-------------------
execute foo ('111','111','111','111');
------->8----------------->8-----------

The result

  1. First run:
$ psql -p 50004 -h node06 db1 < prepare-error2.sql
DROP TABLE
NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "t1_pk" for table "t1"
CREATE TABLE
INSERT 0 1
PREPARE
ERROR: bind message supplies 4 parameters, but prepared statement "" requires 2

This was reported as bug 3601848

  2. Second run:
$ psql -p 50004 -h node06 db1 < prepare-error2.sql
DROP TABLE
NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "t1_pk" for table "t1"
CREATE TABLE
INSERT 0 1
PREPARE
ERROR: prepared statement "foo" already exists

Named prepared statement remains in the pooler.

WITH queries return "input of anonymous composite types is not implemented"

This was submitted by Amit Khandekar, 2013-04-18.

See sourceforge bug tracker for details.

A query like:

WITH rcte AS ( SELECT sum(id) AS totalid FROM tab1 )
postgres-# UPDATE tab1 SET id = id + totalid FROM rcte

hits an existing issue due to which it fails with:

-ERROR: input of anonymous composite types is not implemented

But after the trigger implementation it would not error out; instead, in some cases it gives wrong results, as in the example shown below.

postgres=# \set REP 'distribute by hash(val) '
postgres=# CREATE TABLE tab1 ( id int, val text ) :REP ;
CREATE TABLE
postgres=# INSERT INTO tab1 VALUES ( 1, 'p1' );
INSERT 0 1
postgres=# INSERT INTO tab1 VALUES ( 11, 'c11' ),( 12, 'c12' );
INSERT 0 2
postgres=# INSERT INTO tab1 VALUES ( 23, 'c21' ),( 24, 'c22' );
INSERT 0 2
postgres=# WITH rcte AS ( SELECT sum(id) AS totalid FROM tab1 )
postgres-# UPDATE tab1 SET id = id + totalid FROM rcte;
UPDATE 5
postgres=# SELECT * FROM tab1 ORDER BY id;
id | val
----+-----
72 | p1
82 | c11
83 | c12
94 | c21
95 | c22
(5 rows)
postgres=# WITH wcte AS ( INSERT INTO tab1 VALUES ( 42, 'new' ) RETURNING id AS newid )
postgres-# UPDATE tab1 SET id = id + newid FROM wcte;
UPDATE 6
postgres=# SELECT * FROM tab1 ORDER BY id;
id | val
-----+-----
84 | new <---- This PG shows: 48 | new, not 84 | new.
114 | p1
124 | c11
125 | c12
136 | c21
137 | c22
(6 rows)

Above, the UPDATE also updates the newly inserted row (42) and sets its value to 42 + newid = 84, while in PG the UPDATE does not touch the row inserted by the WITH clause.
After the trigger work, I am keeping this incorrect output in with_1.out. Currently there is no with_1.out; with_1.out itself had incorrect results.

DML with RETURNING on a replicated table

This was submitted by Abbas Butt, 2013-01-21.

See sourceforge bug tracker for details.

When a DML with a RETURNING clause is fired on a replicated table, we get rows from each datanode in the reply, whereas the application expects only one row. To solve this problem we call ExecProcNode on the RemoteQueryState repeatedly to accept tuples from one datanode. However, we cannot ensure that we always pick the same datanode. The current code picks whichever tuple is received first, from whatever datanode sent it. This means that, in a multi-insert case, the tuple returned from the first iteration could be from the first datanode while the one returned from the second iteration could be from the second, simply because it arrived first.

One way to solve this problem could be to add xc_node_id to the returning list in every case and then compare it with one of the nodes in RemoteQuery::exec_nodes->nodeList to make sure we always pick tuples from the same datanode.

Another way, suggested by Ashutosh, is to use quals in the RemoteQuery plan while making sure we consume all the rows returned from the datanodes.

Cross-node constraint undetected

This was submitted by Koichi Suzuki, 2013-03-26.

See sourceforge bug tracker for details.

The attached file tries to define constraints that can be enforced only globally (across nodes). The error is not detected at ALTER TABLE but is detected at INSERT.

COPY FROM does not copy data to PRIMARY datanode

This was submitted by Abbas Butt, 2013-04-27.

See sourceforge bug tracker for details.

The problem is in version 1.0.2

awk 'BEGIN{for(i=1;i<=100000;i++) print i; }' > DATA
./psql postgres
psql (PGXC 1.0.2, based on PG 9.1.7)
Type "help" for help.
postgres=# create table foo (aaa int) distribute by replication;
CREATE TABLE
postgres=# \copy foo from DATA
postgres=# EXECUTE DIRECT ON data_node_1 'SELECT count(1) FROM foo';
count 
-------
1
(1 row)
postgres=# EXECUTE DIRECT ON data_node_2 'SELECT count(1) FROM foo';
count
--------
100000
(1 row)

Thanks to Paul Jones (pbj at cmicdo.com) for providing the bug report.

could not format inet value: pg_stat_replication on OSX

This was submitted by Michael Paquier, 2012-08-27.

See sourceforge bug tracker for details.

This error appears on OS X:

INSERT INTO INET_TBL (c, i) VALUES ('10:23::f1', '10:23::f1/64');
+ ERROR: could not format inet value: Address family not supported by protocol family
INSERT INTO INET_TBL (c, i) VALUES ('10:23::8000/113', '10:23::ffff'); 
+ ERROR: could not format inet value: Address family not supported by protocol family 
INSERT INTO INET_TBL (c, i) VALUES ('::ffff:1.2.3.4', '::4.3.2.1/24');
+ ERROR: could not format inet value: Address family not supported by protocol family

Related to the same issue, it is not possible to run pg_stat_replication on OS X.

One FETCH is necessary for UPDATE/DELETE WHERE CURRENT OF

Original bug# 263, by Abbas Butt, 2011-12-20

In PG, UPDATE/DELETE WHERE CURRENT OF works even if no row has been fetched from the cursor; in XC, at least one FETCH is necessary.
This limitation is due to the way cursors are implemented in XC.
The test case involved here is portals.sql.

Temporary objects when session is reloaded

By Michael Paquier, 2012-06-13

See sourceforge bug tracker for details.

When a session is reloaded with pgxc_pool_reload, prepared objects are correctly dropped on local and remote nodes, but temporary objects are only dropped on remote nodes, causing temporary tables to exist on the local Coordinator but not on remote nodes.
It would be possible to drop the temporary objects, but to do so a transaction needs to be open while dropping them, in the same way as when the proc is shut down. For the time being, temporary objects are dropped neither on the session that invoked pgxc_pool_reload nor on the other backend sessions.

Use node identifier to identify nodes every where

Originally bug #285 in the sourceforge bug tracker, submitted by Abbas Butt, 2012-03-20.

See the above link for details.

We have three numbers that all identify nodes: the node identifier, the node OID, and the node index in the nodes array; it is hard to manage all of them. With the node identifier available, we should get rid of the other two and use the node identifier everywhere. This is a suggestion from Ashutosh.

CREATE TABLE AS passes even if there is a collation clash

This was submitted by Abbas Butt, 2012-04-20.

See sourceforge bug tracker for details.

Consider this test case

CREATE TABLE ct1 (a int,b text COLLATE "C" NOT NULL);
CREATE TABLE ct2 (a int,b text COLLATE "POSIX");
create table ct3 as select * from ct1 union all select * from ct2;
The expected result is:
ERROR: no collation was derived for column "b" with collatable type text
HINT: Use the COLLATE clause to set the collation explicitly.
The obtained result is:
INSERT 0 0

It should have failed.

PREPARE and EXECUTE do not work as expected

This was submitted by Amit Khandekar, 2013-03-14.

See sourceforge bug tracker for details.

When a user does a PREPARE and then EXECUTEs multiple times, the coordinator keeps re-preparing and executing the query on the datanode every time, as opposed to preparing once and executing multiple times. This is because the remote query is somehow being prepared as an unnamed statement.

gtm_proxy segfaults on subsequent queries

This was submitted by knienart, 2013-05-08.

See sourceforge bug tracker for details.

In a setup of 3 datanodes and 1 coordinator on the same physical machine (for functional tests), the datanodes/coordinator connect to the GTM through gtm_proxy, as suggested by Koichi Suzuki, to lessen the load on the GTM. There are cases where the first query after connecting works, but the subsequent execution causes a segfault of gtm_proxy:

Starting program: /d00/pgxc/bin/gtm_proxy -D data/gtm_proxy/
[Thread debugging using libthread_db enabled]
[New Thread 0x7ffff7ddf700 (LWP 23158)]
[New Thread 0x7ffff73de700 (LWP 23159)]
[New Thread 0x7ffff69dd700 (LWP 23160)]
[New Thread 0x7ffff5fbb700 (LWP 23163)]
[New Thread 0x7ffff54d7700 (LWP 23164)]
Program received signal SIGABRT, Aborted.
[Switching to Thread 0x7ffff7ddf700 (LWP 23158)]
0x00000036a7a328a5 in raise () from /lib64/libc.so.6
Missing separate debuginfos, use: debuginfo-install glibc-2.12-1.80.el6_3.6.x86_64
(gdb) bt
#0 0x00000036a7a328a5 in raise () from /lib64/libc.so.6
#1 0x00000036a7a34085 in abort () from /lib64/libc.so.6
#2 0x000000000040c517 in errfinish (dummy=<value optimized out>) at elog.c:368
#3 0x000000000040c9c3 in elog_finish (elevel=<value optimized out>, fmt=<value optimized out>) at elog.c:629
#4 0x0000000000419c1f in ProcessResponse (thrinfo=<value optimized out>, cmdinfo=0x7ffff0026fb0, res=0x7ffff0028ef0) at proxy_main.c:1899
#5 0x000000000041abef in GTMProxy_ThreadMain (argp=0x64b290) at proxy_main.c:1500
#6 0x000000000041d91b in GTMProxy_ThreadMainWrapper (argp=0x64b290) at proxy_thread.c:316
#7 0x00000036a7e07851 in start_thread () from /lib64/libpthread.so.0
#8 0x00000036a7ae811d in clone () from /lib64/libc.so.6
(gdb)
I bumped the number of worker_threads to 5, but it did not help; I am not sure whether this is a misconfiguration or a genuine bug.
The gtm_proxy log looks like:
1:140737308882688:2013-05-07 17:30:51.157 CEST -LOG: Node with the given ID number already exists
LOCATION: pgxcnode_add_info, register_common.c:249
1:140737341417216:2013-05-07 17:30:51.157 CEST -LOG: Node with the given ID number already exists
LOCATION: pgxcnode_add_info, register_common.c:249
1:140737308882688:2013-05-07 17:30:51.157 CEST -LOG: Node with the given ID number already exists
LOCATION: pgxcnode_add_info, register_common.c:249
1:140737330927360:2013-05-07 17:30:51.157 CEST -LOG: Node with the given ID number already exists
LOCATION: pgxcnode_add_info, register_common.c:249
1:140737308882688:2013-05-07 17:30:51.157 CEST -LOG: Node with the given ID number already exists
LOCATION: pgxcnode_add_info, register_common.c:249
1:140737341417216:2013-05-07 17:30:51.157 CEST -LOG: Node with the given ID number already exists
LOCATION: pgxcnode_add_info, register_common.c:249
1:140737351907072:2013-05-07 17:30:51.157 CEST -LOG: Node with the given ID number already exists
LOCATION: pgxcnode_add_info, register_common.c:249
1:140737330927360:2013-05-07 17:30:51.157 CEST -LOG: Node with the given ID number already exists
LOCATION: pgxcnode_add_info, register_common.c:249
1:140737330927360:2013-05-07 17:30:51.157 CEST -LOG: Node with the given ID number already exists
LOCATION: pgxcnode_add_info, register_common.c:249
1:140737330927360:2013-05-07 17:30:51.157 CEST -LOG: Node with the given ID number already exists
LOCATION: pgxcnode_add_info, register_common.c:249
1:140737308882688:2013-05-07 17:30:51.158 CEST -LOG: Node with the given ID number already exists
LOCATION: pgxcnode_add_info, register_common.c:249
1:140737351907072:2013-05-07 17:30:51.158 CEST -LOG: Node with the given ID number already exists
LOCATION: pgxcnode_add_info, register_common.c:249
1:140737351907072:2013-05-07 17:30:51.158 CEST -ERROR: Failed to Register node
LOCATION: ProcessPGXCNodeCommand, proxy_main.c:2138
1:140737341417216:2013-05-07 17:30:51.158 CEST -LOG: Node with the given ID number already exists
LOCATION: pgxcnode_add_info, register_common.c:249
1:140737330927360:2013-05-07 17:30:51.158 CEST -LOG: Node with the given ID number already exists
LOCATION: pgxcnode_add_info, register_common.c:249
1:140737341417216:2013-05-07 17:30:51.158 CEST -LOG: Node with the given ID number already exists
LOCATION: pgxcnode_add_info, register_common.c:249
1:140737308882688:2013-05-07 17:30:51.158 CEST -LOG: Node with the given ID number already exists
LOCATION: pgxcnode_add_info, register_common.c:249
1:140737341417216:2013-05-07 17:30:51.158 CEST -LOG: Node with the given ID number already exists
LOCATION: pgxcnode_add_info, register_common.c:249
1:140737341417216:2013-05-07 17:30:51.158 CEST -LOG: Node with the given ID number already exists
LOCATION: pgxcnode_add_info, register_common.c:249
1:140737308882688:2013-05-07 17:30:51.158 CEST -LOG: Node with the given ID number already exists
LOCATION: pgxcnode_add_info, register_common.c:249
1:140737341417216:2013-05-07 17:30:51.158 CEST -LOG: Node with the given ID number already exists
LOCATION: pgxcnode_add_info, register_common.c:249
1:140737330927360:2013-05-07 17:30:51.158 CEST -LOG: Node with the given ID number already exists
LOCATION: pgxcnode_add_info, register_common.c:249
1:140737341417216:2013-05-07 17:30:51.158 CEST -LOG: Node with the given ID number already exists
LOCATION: pgxcnode_add_info, register_common.c:249
1:140737330927360:2013-05-07 17:30:51.158 CEST -LOG: Node with the given ID number already exists
LOCATION: pgxcnode_add_info, register_common.c:249
1:140737341417216:2013-05-07 17:30:51.158 CEST -LOG: Node with the given ID number already exists
LOCATION: pgxcnode_add_info, register_common.c:249
1:140737330927360:2013-05-07 17:30:51.158 CEST -LOG: Node with the given ID number already exists
LOCATION: pgxcnode_add_info, register_common.c:249
1:140737351907072:2013-05-07 17:30:51.158 CEST -PANIC: Invalid response or synchronization loss
LOCATION: ProcessResponse, proxy_main.c:1899
1:140737341417216:2013-05-07 17:30:51.159 CEST -LOG: Node with the given ID number already exists
LOCATION: pgxcnode_add_info, register_common.c:249
1:140737308882688:2013-05-07 17:30:51.159 CEST -LOG: Node with the given ID number already exists
LOCATION: pgxcnode_add_info, register_common.c:249
and the config file:
# GENERAL PARAMETERS
#------------------------------------------------------------------------------
nodename = 'gtm_proxy1' # Specifies the node name.
# (changes requires restart)
listen_addresses = '*' # Listen addresses of this GTM.
# (changes requires restart)
port = 6666 # Port number of this GTM.
# (changes requires restart)
#------------------------------------------------------------------------------
# GTM PROXY PARAMETERS
#------------------------------------------------------------------------------
worker_threads = 5 # Number of the worker thread of this
# GTM proxy
# (changes requires restart)
#------------------------------------------------------------------------------
# GTM CONNECTION PARAMETERS
#------------------------------------------------------------------------------
# Those parameters are used to connect to a GTM server
gtm_host = 'localhost' # Listen address of the active GTM.
# (changes requires restart)
gtm_port = 20001 # Port number of the active GTM.
# (changes requires restart)
#------------------------------------------------------------------------------
# Behavior at GTM communication error
#------------------------------------------------------------------------------
gtm_connect_retry_interval = 2 # How long (in secs) to wait until the next
# retry to connect to GTM.
#
#
#------------------------------------------------------------------------------
# Other options
#------------------------------------------------------------------------------
#keepalives_idle = 0 # Keepalives_idle parameter.
#keepalives_interval = 0 # Keepalives_interval parameter.
#keepalives_count = 0 # Keepalives_count internal parameter.
#log_file = 'gtm_proxy.log' # Log file name
#log_min_messages = WARNING # log_min_messages. Default WARNING.
# Valid value: DEBUG, DEBUG5, DEBUG4, DEBUG3,
# DEBUG2, DEBUG1, INFO, NOTICE, WARNING,
# ERROR, LOG, FATAL, PANIC.

Cheers,
Chris

PREPARE and EXECUTE do not work as expected

This was submitted by Amit Khandekar, 2013-03-14.

See sourceforge bug tracker for details.

When a user does a PREPARE and then EXECUTEs multiple times, the coordinator keeps re-preparing and executing the query on the datanode every time, instead of preparing once and executing multiple times. This is because the remote query is somehow being prepared as an unnamed statement.

Failure of test select_views, problem with leaky scenarios

This was submitted by Michael Paquier, 2012-08-01.

See sourceforge bug tracker for details.

Bug found during the merge with 9.2.

select_views has a set of new tests for leaky scenarios.

For more or less all the queries involved there, access to the underlying relation is denied.

Here is an example of one of the failures:

+ SELECT * FROM my_credit_card_usage_normal 
+ WHERE f_leak(cnum) AND ymd >= '2011-10-01' AND ymd < '2011-11-01'; 
+ ERROR: permission denied for relation customer

For multi-step queries WHERE CURRENT OF does not work

Original Bug# 260, by Abbas Butt, 2011-12-20.

See sourceforge bug tracker for details.

For any query where the FQS planner decides that it is a multi-step query needing both the coordinator and the datanodes, UPDATE/DELETE using WHERE CURRENT OF does not work.

The test case involved here is portals.sql

The test case has one failure due to this limitation.

DECLARE c1 CURSOR FOR SELECT a.f1,a.f2,b.f1,b.f2 FROM uctest a, uctest b WHERE a.f1 = b.f1 + 5 ORDER BY 1 FOR SHARE OF a;
FETCH 1 FROM c1;
UPDATE uctest SET f1 = f1 + 10 WHERE CURRENT OF c1;

CREATE TABLE AS ... with aggregate

This was submitted by Koichi Suzuki, 2013-02-01.

See sourceforge bug tracker for details.

Report from Venky Kandaswamy:

------------8<-------------------------------8<-----------
Let's say we run this query in Postgres-XC (latest master):
select 
msn_account_name "MsnAccountName", 
avg(msnavgpos)::Numeric(9,2) "MsnAveragePosition"
from bi6121.dm_sem_performance_msn_32 
where date_id between 20121001 and 20121231 
group by msn_account_name;
We get the result:
"MSN_RTW_Adchemy";4.06
But if we try to create a table out of the result, we get an error:
create table temp1 as
select 
msn_account_name "MsnAccountName", 
avg(msnavgpos)::Numeric(9,2) "MsnAveragePosition"
from bi6121.dm_sem_performance_msn_32 
where date_id between 20121001 and 20121231 
group by msn_account_name;
ERROR: function pg_catalog.numeric_avg(numeric) does not exist
LINE 3: msn_account_name "MsnAccountName", 
^
HINT: No function matches the given name and argument types. You might need to add explicit type casts.
********** Error **********
ERROR: function pg_catalog.numeric_avg(numeric) does not exist
SQL state: 42883
Hint: No function matches the given name and argument types. You might need to add explicit type casts.
Character: 65
------------>8------------------------------->8-----------

I suspect the above error is related to the difference between the aggregate function infrastructure in XC and PG, namely the additional collection function (cfunc).

OpenBSD porting

This was submitted by Oleksiy Lukin, 2012-02-10.

See sourceforge bug tracker for details.

While porting pgxc to OpenBSD for testing purposes, I found the following defects:

  1. The EPROTO constant in the gtm code. This constant seems to be absent in BSD, so I solved it by adding the following line to src/include/port/openbsd.h:
    #define EPROTO EIO
  2. u_int8_t and similar types are not defined in pqcomm.c and others. Solved by adding
    #include <sys/types.h> to src/include/pqcomm.h
  3. The BSD port system wants the untarred distribution directory name to include the version number. Could you please consider shipping not simply a "pgxc" dir inside pgxc_vX.Y.Z.tgz but something like pgxc-X.Y.Z inside pgxc-X.Y.Z.tgz for future releases.
  4. src/pl/plpgsql does not compile at all because the grammar has errors, at least from the point of view of OpenBSD's bison.

I solved this problem by taking code from postgresql-9.1.2.tgz, which contains the generated files.
The changes are trivial, so I do not post patches here. Thanks!

ALTER NODE primary node

This was submitted by Koichi Suzuki, 2012-11-30.

See sourceforge bug tracker for details.

When the primary datanode fails, failover is successful. But when ALTER NODE is issued to reflect the new datanode location, it fails as follows:

# ALTER NODE datanode1 WITH (HOST='node07', PORT=20008);
psql:/tmp/cmd.sql:1: ERROR: PGXC node datanode1: two nodes cannot be primary

This statement should succeed because the primary node itself does not change; only its location does.

Original list of the nodes is as follows:

koichi=# select * from pgxc_node;
node_name | node_type | node_port | node_host | nodeis_primary | nodeis_preferred | node_id 
-----------+-----------+-----------+-----------+----------------+------------------+-------------
coord1 | C | 20004 | node06 | f | f | 1885696643
coord2 | C | 20005 | node07 | f | f | -1197102633
coord3 | C | 20004 | node08 | f | f | 1638403545
coord4 | C | 20005 | node09 | f | f | -890157977
datanode1 | D | 20008 | node06 | t | f | 888802358
datanode2 | D | 20009 | node07 | f | f | -905831925
datanode3 | D | 20008 | node08 | f | t | -1894792127
datanode4 | D | 20009 | node09 | f | f | -1307323892
(8 rows)
koichi=#

Multiple primary nodes in XC

This was submitted by Michael Paquier, 2012-07-24.

See sourceforge bug tracker for details.

Now in XC it is possible to define multiple primary nodes with CREATE NODE, but only one node is actually registered in the session parameters.

Multiple primary nodes are basically necessary because some tables may be distributed on a subset of nodes that does not contain the primary.

So what is needed here is to add support for multiple primary nodes, and to enforce the correct selection of a primary node when the relation's node list contains one.

When no primary node can be found in the node list of a relation, we need to pick the first one.

CREATE functions with views: fails at creation

This was submitted by Michael Paquier, 2012-06-20.

See sourceforge bug tracker for details.

Reported by Dimitrije Radojevic

  1. create a table
CREATE TABLE test_table (
id SERIAL NOT NULL,
other_id int4 NOT NULL,
PRIMARY KEY("id") )
DISTRIBUTE BY REPLICATION;
  2. create a view
CREATE VIEW some_view AS SELECT test_table.* FROM test_table;
  3. create a function using the view
CREATE OR REPLACE FUNCTION some_function() RETURNS SETOF some_view AS
$body$
BEGIN
RETURN QUERY SELECT * FROM some_view;
END;
$body$
LANGUAGE 'plpgsql'
VOLATILE
CALLED ON NULL INPUT
SECURITY INVOKER
COST 100;

I get the following error:

ERROR: type "some_view" does not exist

This runs fine in postgres 9.1.4.

Do DML remote query and data params at same place

This was submitted by Amit Khandekar, 2013-04-16.

See sourceforge bug tracker for details.

Currently the query param markers are placed in the order of ctid and xc_node_id, and the same order is ensured at a different place when the BIND data row is generated. If possible, these two operations should share a common function/location where both tasks are done, to reduce the chances of the ctid/xc_node_id order ending up different in the two places.

prepared insert on round robin table

By Michael Paquier, 2011-11-16. See sourceforge bug tracker and sourceforge feature request for details.

When determining a node for a round-robin table, the same node is always targeted in the case of a prepared insert.
The target node is determined at prepare time and not changed afterwards.

postgres=# create table aa (a int) distribute by round robin;
CREATE TABLE
postgres=# prepare aa as insert into aa values ($1);
PREPARE
postgres=# execute aa (1);
INSERT 0 1
postgres=# execute aa (2);
INSERT 0 1
postgres=# execute aa (3);
INSERT 0 1
postgres=# execute aa (4);
INSERT 0 1
postgres=# execute direct on node dn1 'select * from aa';
a 

---
(0 rows)
postgres=# execute direct on node dn2 'select * from aa';
a 

---
1
2
3
4
(4 rows)

Fault start at gtm_standby w/o connection with GTM

This was submitted by Markn, 2013-05-08.

See sourceforge bug tracker for details.

gtm_standby started without an error response even though, according to gtm.log, it had failed to establish a connection with the active GTM.

An issue discussed in the following thread:

[Postgres-xc-general] GTM standby behaivor when cannot connect with GTM active

Pooler warnings with coord/dn slaves

This was submitted by Koichi Suzuki, 2013-01-23.

See sourceforge bug tracker for details.

With coordinator/datanode slaves, we occasionally see the following WARNING in the server log:
Unexpected data on connection, cleaning.
Without slaves, we don't see it.
There may be some underlying issue here.

Network error detection at gtm_proxy

This was submitted by Koichi Suzuki, 2013-01-30.

See sourceforge bug tracker for details.

The keepalives_* configuration parameters were dropped from the 1.1 development line. Without them, TCP error detection relies only on the kernel setup, and it can take very long (up to a couple of hours) to detect a network error. Dropping these parameters simplifies the overall configuration, but some quicker error detection should be implemented.

Plan targetlist is null but we need to send describe message

This was submitted by Abbas Butt, 2013-01-09.

See sourceforge bug tracker for details.

Consider this test case

CREATE TABLE junk2 (a int, b int);
create or replace function blockme() returns int as $$
declare x int;
begin
select count(*) into x from junk2 a, junk2 b;
return x;
end$$ language plpgsql;
select blockme();

The query inside the function is planned like this

LOG: duration: 3.901 ms plan:
Query Text: select count(*) from junk2 a, junk2 b
Aggregate (cost=1.25..1.26 rows=1 width=0)
Output: count(*)
-> Nested Loop (cost=0.00..1.00 rows=100 width=0)
-> Data Node Scan on junk2 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=0)
Node/s: data_node_1, data_node_2
Remote query: SELECT * FROM ONLY junk2 a WHERE true
-> Data Node Scan on junk2 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=0)
Node/s: data_node_1, data_node_2
Remote query: SELECT * FROM ONLY junk2 b WHERE true

Note the two data node scans under the nested loop.

The problem is that these data node scans have a NULL step->scan.plan.targetlist.

The reason identified by Ashutosh is as follows:

For count(*) we don't need any column values, so nothing is fetched, but just the existence of a row is sufficient. Hence targetlist is NULL, since we don't project anything but the status that row exists (in the form of tuple slot). We need to fix this situation in XC, since we are unnecessarily converting a NULL tlist into * and fetching all the columns of the table (which can be a performance problem and a column permission problem.)

Why was this problem not uncovered when we worked on the JOIN planner?

  1. The earlier code has patched this up in get_target_list() by converting a NULL targetlist into * while deparsing the query. We don't want to do that.
  2. There is code in the executor which tolerates the case where the number of columns received from the datanode exceeds the number in the scan tuple descriptor.

Count(*) within prepare incorrect

This was submitted by Koichi Suzuki, 2013-02-05.

See sourceforge bug tracker for details.

count(*) within a prepared statement returns an incorrect result, on both 1.0 and master.

--------8<---- (master) -----------------8<----------------------
$ createdb -p 50004 -h node06 db
$ psql -a -p 50004 -h node06 -f PREPARE-BUG-master.sql db
CREATE TABLE T1 (
C1 varchar,
C2 varchar,
C3 varchar,
C4 varchar,
C5 date
)
DISTRIBUTE BY HASH(C3) ;
CREATE TABLE
CREATE TABLE T2 (
C1 varchar,
C2 varchar,
C3 varchar,
C4 varchar,
C5 date
)
DISTRIBUTE BY HASH(C3) ;
CREATE TABLE
SELECT * from pgxc_node;
node_name | node_type | node_port | node_host | nodeis_primary | nodeis_preferred | node_id
-----------+-----------+-----------+-----------+----------------+------------------+-------------
coord1 | C | 50004 | node06 | f | f | 1885696643
coord2 | C | 50005 | node07 | f | f | -1197102633
coord3 | C | 50004 | node08 | f | f | 1638403545
coord4 | C | 50005 | node09 | f | f | -890157977
datanode1 | D | 50008 | node06 | f | t | 888802358
datanode2 | D | 50009 | node07 | f | f | -905831925
datanode3 | D | 50008 | node08 | f | f | -1894792127
datanode4 | D | 50009 | node09 | f | f | -1307323892
(8 rows)
INSERT INTO T2 VALUES ( 'c1' , 'c2' , 'A' , 'PGXC' , '2013-01-01');
INSERT 0 1
INSERT INTO T1 VALUES ( 'c1' , 'c2' , 'A' , 'PGXC' , '2013-01-01');
INSERT 0 1
PREPARE cnt (VARCHAR, VARCHAR) AS
SELECT COUNT(*) FROM T1 WHERE C3 = $1 AND C4 = $2 ;
PREPARE
PREPARE del (VARCHAR, VARCHAR) AS
DELETE FROM T1 WHERE C3 = $1 AND C4 = $2 ;
PREPARE
PREPARE ins (VARCHAR, VARCHAR) AS
INSERT INTO T1 SELECT * FROM T2 WHERE C3 = $1 AND C4 = $2 ;
PREPARE
PREPARE upd (VARCHAR, VARCHAR, VARCHAR, VARCHAR) AS
UPDATE T1 SET C1 = $1 , C2 = $2 , C5 = DATE_TRUNC('second',clock_timestamp()) WHERE C3 = $3 AND C4 = $4 ;
PREPARE
PREPARE sel (VARCHAR, VARCHAR) AS
SELECT C2 ,C1 FROM T1 WHERE C3 = $1 AND C4 = $2 ;
PREPARE
execute cnt('A','PGXC');
count
-------
1
(1 row)
execute del('A','PGXC');
DELETE 1
execute ins('A','PGXC');
INSERT 0 1
execute upd('c1','c2','A','PGXC');
UPDATE 1
execute sel('A','PGXC');
c2 | c1
----+----
c2 | c1
(1 row)
execute cnt('A','PGXC');
count
-------
1
(1 row)
execute del('A','PGXC');
DELETE 1
execute ins('A','PGXC');
INSERT 0 1
execute upd('c1','c2','A','PGXC');
UPDATE 1
execute sel('A','PGXC');
c2 | c1
----+----
c2 | c1
(1 row)
execute cnt('A','PGXC');
count
-------
1
(1 row)
execute del('A','PGXC');
DELETE 1
execute ins('A','PGXC');
INSERT 0 1
execute upd('c1','c2','A','PGXC');
UPDATE 1
execute sel('A','PGXC');
c2 | c1
----+----
c2 | c1
(1 row)
execute cnt('A','PGXC');
count
-------
1
(1 row)
execute del('A','PGXC');
DELETE 1
execute ins('A','PGXC');
INSERT 0 1
execute upd('c1','c2','A','PGXC');
UPDATE 1
execute sel('A','PGXC');
c2 | c1
----+----
c2 | c1
(1 row)
execute cnt('A','PGXC');
count
-------
1
(1 row)
execute del('A','PGXC');
DELETE 1
execute ins('A','PGXC');
INSERT 0 1
execute upd('c1','c2','A','PGXC');
UPDATE 1
execute sel('A','PGXC');
c2 | c1
----+----
c2 | c1
(1 row)
execute cnt('A','PGXC');
count
-------
1
(1 row)
execute del('A','PGXC');
DELETE 1
execute ins('A','PGXC');
INSERT 0 1
execute upd('c1','c2','A','PGXC');
psql:PREPARE-BUG-master.sql:72: ERROR: there is no parameter $3
execute sel('A','PGXC');
c2 | c1
----+----
c2 | c1
(1 row)
execute cnt('A','PGXC');
count
-------
1
(1 row)
execute del('A','PGXC');
DELETE 1
execute ins('A','PGXC');
INSERT 0 1
execute upd('c1','c2','A','PGXC');
psql:PREPARE-BUG-master.sql:78: ERROR: there is no parameter $3
execute sel('A','PGXC');
c2 | c1
----+----
c2 | c1
(1 row)
----8<---- (rel1.0) -----------8<----------------------
$ createdb -p 50004 -h node06 db
$ psql -a -p 50004 -h node06 -f PREPARE-BUG-REL1_0.sql db
-- craete table
CREATE TABLE T1 (
C1 varchar,
C2 varchar,
C3 varchar,
C4 varchar,
C5 date
)
DISTRIBUTE BY HASH(C3);
CREATE TABLE
-- select pgxc_nodes
SELECT * from pgxc_node;
node_name | node_type | node_port | node_host | nodeis_primary | nodeis_preferred | node_id
-----------+-----------+-----------+-----------+----------------+------------------+-------------
coord1 | C | 50004 | node06 | f | f | 1885696643
coord2 | C | 50005 | node07 | f | f | -1197102633
coord3 | C | 50004 | node08 | f | f | 1638403545
coord4 | C | 50005 | node09 | f | f | -890157977
datanode1 | D | 50008 | node06 | f | t | 888802358
datanode2 | D | 50009 | node07 | f | f | -905831925
datanode3 | D | 50008 | node08 | f | f | -1894792127
datanode4 | D | 50009 | node09 | f | f | -1307323892
(8 rows)
-- insert
INSERT INTO T1 VALUES ( 'c1' , 'c2' , 'A' , 'PGXC' , '2013-01-01');
INSERT 0 1
-- prepare
PREPARE cnt (VARCHAR, VARCHAR) AS
SELECT COUNT(*) FROM T1 WHERE C3 = $1 AND C4 = $2;
PREPARE
-- select count t1
SELECT COUNT(*) FROM T1 WHERE C3='A' AND C4='PGXC';
count
-------
1
(1 row)
-- execute cnt
execute cnt('A','PGXC');
count
-------
4
(1 row)
------>8------------------------>8------------------

A few problems with whole row references

This was submitted by Abbas Butt, 2012-04-27.

See sourceforge bug tracker for details.

Consider these two test cases

CREATE TABLE price (id INT PRIMARY KEY, active BOOLEAN NOT NULL, price NUMERIC);
insert into price values (1,false,42), (10,false,100), (11,true,17.99);
select row(price.*) from price;
Expected output
row 
--------------
(1,f,42)
(10,f,100)
(11,t,17.99)
(3 rows)
Obtained output
ERROR: input of anonymous composite types is not implemented

If we use explicit type cast it works fine i.e.

`select row(price.*)::price from price;`

produces correct results.

Now consider this test case

create table "my table"(a int, b int);
insert into "my table" values(1,11),(2,22),(3,33);
select "my table".*::"my table" from "my table";
Expected Output
my table 
----------
(1,11)
(2,22)
(3,33)
(3 rows)

Obtained Output

ERROR: type ""my table"" does not exist

No response to EXPLAIN ANALYZE with LEFT OUTER JOIN

This was submitted by Markn, 2013-05-08.

See sourceforge bug tracker for details.

Tested at: PG-XC v1.0.1
Postgres-XC branch REL1_0_STABLE, updated. XC1_0_1_PG9_1-38-g3bcdab7
Commit ID: 3bcdab7
Observed in:
PG-XC v1.0.2
PG-XC v1.0.3
Not observed in:
PG-XC v1.0.1
PG-XC v1.1
Environment:
Server 1: GTM
Server 2: GTM-Proxy & Coordinator
Server 3: GTM-Proxy & Datanode1
Server 4: GTM-Proxy & Datanode2
Server 5: GTM-Proxy & Datanode3
Server 6: GTM-Proxy & Datanode4
Test set:
DBT-2
Procedure:

[postgres@pgxc01 scripts]$ psql -h 192.168.1.3 dbt2
psql (PGXC 1.0.2, based on PG 9.1.7)
Type "help" for help.
dbt2=# EXPLAIN ANALYZE SELECT * FROM item LEFT OUTER JOIN stock ON i_id = s_i_id where s_i_id < 2001 and s_w_id < 201;
--- No response for at least 10 minutes ---
[postgres@pgxc01 install_coord_perf]$ psql -h 192.168.1.3
psql (PGXC 1.0.2, based on PG 9.1.7)
Type "help" for help.
postgres=# SELECT
procpid,
start,
now() - start AS lap,
current_query
FROM
(SELECT
backendid,
pg_stat_get_backend_pid(S.backendid) AS procpid,
pg_stat_get_backend_activity_start(S.backendid) AS start,
pg_stat_get_backend_activity(S.backendid) AS current_query
FROM
(SELECT pg_stat_get_backend_idset() AS backendid) AS S
) AS S
WHERE
current_query <> ''
ORDER BY
lap DESC LIMIT 1;
procpid | start | lap | current_query
---------+-------------------------------+-----------------+-----------------------------------------------------------------
------------------------------------------------
9673 | 2013-03-07 12:22:54.929863+09 | 00:11:52.115181 | EXPLAIN ANALYZE SELECT * FROM item LEFT OUTER JOIN stock ON i_id
= s_i_id where s_i_id < 2001 and s_w_id < 201;
(1 row)
