
clickhouse-go's Introduction


ClickHouse® is an open-source column-oriented database management system that allows generating analytical data reports in real-time.

How To Install (Linux, macOS, FreeBSD)

curl https://clickhouse.com/ | sh

Useful Links

  • Official website has a quick high-level overview of ClickHouse on the main page.
  • ClickHouse Cloud ClickHouse as a service, built by the creators and maintainers.
  • Tutorial shows how to set up and query a small ClickHouse cluster.
  • Documentation provides more in-depth information.
  • YouTube channel has a lot of content about ClickHouse in video format.
  • Slack and Telegram allow chatting with ClickHouse users in real-time.
  • Blog contains various ClickHouse-related articles, as well as announcements and reports about events.
  • Code Browser (github.dev) with syntax highlighting.
  • Contacts can help to get your questions answered if there are any.

Monthly Release & Community Call

Every month we get together with the community (users, contributors, customers, those interested in learning more about ClickHouse) to discuss what is coming in the latest release. If you are interested in sharing what you've built on ClickHouse, let us know.

Upcoming Events

Keep an eye out for upcoming meetups and events around the world. Somewhere else you want us to be? Please feel free to reach out to tyler <at> clickhouse <dot> com. You can also peruse ClickHouse Events for a list of all upcoming trainings, meetups, speaking engagements, etc.

  • MORE COMING SOON!

Recent Recordings

  • Recent Meetup Videos: Meetup Playlist. Whenever possible, recordings of the ClickHouse Community Meetups are edited and presented as individual talks. Currently featuring "Modern SQL in 2023", "Fast, Concurrent, and Consistent Asynchronous INSERTS in ClickHouse", and "Full-Text Indices: Design and Experiments".
  • Recording available: v24.4 Release Call. All the features of 24.4 in one convenient video! Watch it now!

Interested in joining ClickHouse and making it your full-time job?

We are a globally diverse and distributed team, united behind a common goal of creating industry-leading, real-time analytics. Here, you will have an opportunity to solve some of the most cutting-edge technical challenges and have direct ownership of your work and vision. If you are a contributor by nature, a thinker and a doer - we’ll definitely click!

Check out our current openings here: https://clickhouse.com/company/careers

Can't find what you are looking for, but want to let us know you are interested in joining ClickHouse? Email [email protected]!

clickhouse-go's People

Contributors

aleksi, alexey-milovidov, alvarotuso, blinkov, ch-integrations-robot, crisismaple, dependabot[bot], dguang21, epicstep, ernado, fiftin, genzgd, gingerwizard, jakesylvestre, jkaflik, jtroy, kshvakov, kua, markandrus, mshustov, neverlee, orloffv, ortyomka, pikot, rogeryk, rtkaratekid, sundy-li, vavrusa, veqryn, zxc111


clickhouse-go's Issues

Invalid number of arguments

Trying to run this code

	_, err = tx.Exec(
		"INSERT INTO event (id, created_at, url_path) VALUES (?, ?, ?), (?, ?, ?)", 
		1, now, "a",
		2, now, "b",
	)

I got this error:

Received unexpected error "sql: expected 3 arguments, got 6"

The only workaround I have found is to use a statement with 3 arguments and perform the insert twice.
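
For reference, the pattern that does work with this driver (as used in other reports below) is a single-row placeholder statement executed once per row inside a transaction. A minimal sketch, reusing the table and values from the snippet above (db, now and the log import are assumed):

tx, err := db.Begin()
if err != nil {
	log.Fatal(err)
}
stmt, err := tx.Prepare("INSERT INTO event (id, created_at, url_path) VALUES (?, ?, ?)")
if err != nil {
	log.Fatal(err)
}
// One Exec per row; the driver appears to buffer the rows and send them on Commit.
if _, err := stmt.Exec(1, now, "a"); err != nil {
	log.Fatal(err)
}
if _, err := stmt.Exec(2, now, "b"); err != nil {
	log.Fatal(err)
}
if err := tx.Commit(); err != nil {
	log.Fatal(err)
}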

Error parsing `GROUP BY :group_by`

The error is `sql: statement expects 0 inputs; got 1`. I am running this with sqlx.

Pseudo code:

type Res struct {
	A     string `db:"A"`
	Count uint64 `db:"Count"`
}

res := []Res{}
stmt, err := db.PrepareNamed("select :group_by as A, count() as Count from table group by :group_by")
if err != nil {
	log.Fatal(err)
}
err = stmt.Select(&res, map[string]interface{}{
	"group_by": "A",
})

go get error


➜ data git:(master) go version
go version go1.7rc6 darwin/amd64

toColumn type fails on high-batch load

Observed behaviour: under high batch load, toColumnType randomly fails.
Expected behaviour: no failures while the database is up.

Environment:

go version go1.8.1 darwin/amd64
Golang SQLDriver 1.1.54213
ClickHouse 1.1.54140 (Universal)
Docker version 17.03.1-ce, build c6d412e
CPU: 2.3 GHz Intel Core i7
RAM: 16GB

ClickHouse is running inside Docker on the same Mac.
Docker has 8 GB of RAM allocated.

Steps to reproduce

  1. Copy this gist and save it as main.go https://gist.github.com/kenota/513e304805479a5f40108618bbe45678
  2. Start clickhouse, for example using docker: docker run -p 8123:8123 -p 9000:9000 -d --name click2 --ulimit nofile=262144:262144 yandex/clickhouse-server
  3. Run test code and wait (should be under 1 minute) for panic to happen: go run main.go

Using the code provided, I am able to reliably get panics within 10 minutes on my machine:

Here are sample panics:

go-click-test kenota$ go run main.go
2017/05/24 21:22:59 Test is running...
panic: func toColumnType: unhandled type nt16

goroutine 7 [running]:
main.getTemplate(0x1225d20, 0xc4200b0000, 0x0, 0x0, 0x0)
	.../main.go:59 +0xa5
main.worker(0x0)
	.../main.go:79 +0x1d2
created by main.main
	.../main.go:108 +0x56
exit status 2
go-click-test kenota$ go run main.go
2017/05/24 21:23:51 Test is running...
panic: func toColumnType: unhandled type nt16

goroutine 7 [running]:
main.getTemplate(0x1225d20, 0xc4200b0000, 0x0, 0x0, 0x0)
	.../main.go:59 +0xa5
main.worker(0x0)
	.../main.go:79 +0x1d2
created by main.main
	.../main.go:108 +0x56
exit status 2

Sometimes it takes longer, sometimes it happens faster.

I was struggling to reproduce it with the sample code at all before I made a change. In the sample code's worker function there is a perIteration variable. Initially I set it to 1000 and was not able to trigger the issue. Then I realised that in my real app the buffer is not always filled to 1000 (the limit of the buffer), and there are quite a lot of batches with only a few hundred events in them. I changed the code to randomise the number of iterations, including the lower range, and voilà -- the error started happening.

While creating this ticket I decided to try to simplify the code even more. You will see that there is a requests array. Initially there were 3 goroutines doing inserts into 3 different tables, but I switched it to one and the bug still happens.

This leads me to the conclusion that it is somehow related to the frequency of Prepare()/Commit() and the number of Exec calls. Perhaps this is even a bug in ClickHouse? It seems that for batch inserts it creates a temporary table and then merges it in.

Add context support

Please correct me if I am wrong, but the driver currently implements the interfaces needed for context support, yet it does not actually provide full support for context.Context.
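
For illustration, full support would mean that a query such as the hedged sketch below is actually cancelled when its deadline fires, rather than only failing after the server has finished (standard database/sql usage; db is assumed to be an open *sql.DB):

ctx, cancel := context.WithTimeout(context.Background(), time.Second)
defer cancel()

// sleep(3) keeps the server busy for three seconds; with real context
// support this call should return context.DeadlineExceeded after ~1s.
rows, err := db.QueryContext(ctx, "SELECT sleep(3)")
if err != nil {
	log.Println(err)
	return
}
defer rows.Close()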

Tag releases

Is it possible to start tagging releases? We are looking into integrating a dep tool and having a version to refer to would be really beneficial.

Memory leak in case of an error from ClickHouse

In case of a ClickHouse error, we end up with unfreed memory because of the uncommitted transaction handle.

Reproduction example:

package main

import (
	"database/sql"
	"fmt"
	"time"

	_ "github.com/kshvakov/clickhouse"
)

var db *sql.DB

func main() {
	db, _ = sql.Open("clickhouse", "tcp://localhost:9000/?write_timeout=60")

	db.Exec("CREATE TABLE default.some_test_table ( dt_part Date, dt DateTime, name String ) ENGINE=MergeTree( dt_part, (dt), 8192 );")

	for i := 0; i < 100; i++ {
		send()
	}

	db.Exec("DROP TABLE default.some_test_table;")

	fmt.Println("done")

	// Here we got near 700 MB of used memory
	select {}
}

func send() {
	tx, _ := db.Begin()

	stmt, _ := tx.Prepare("INSERT INTO default.some_test_table VALUES(?,?,?);")

	// Here we insert incorrect value
	_, err := stmt.Exec(time.Now(), time.Now(), nil)

	// Got error
	if err != nil {
		// Do return and after that we have unclosed tx or stmt with not cleared memory
		return
	}
}

If we try to release tx by adding tx.Rollback() to this example, we get a runtime error.
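
For reference, this is the shape of the cleanup one would expect to be able to write here, a sketch assuming the rollback path worked; it is the same send() with explicit error handling:

func send() error {
	tx, err := db.Begin()
	if err != nil {
		return err
	}
	stmt, err := tx.Prepare("INSERT INTO default.some_test_table VALUES (?, ?, ?)")
	if err != nil {
		tx.Rollback()
		return err
	}
	defer stmt.Close()

	if _, err := stmt.Exec(time.Now(), time.Now(), nil); err != nil {
		// Rolling back should release the buffered block; per the report
		// above this currently produces a runtime error instead.
		tx.Rollback()
		return err
	}
	return tx.Commit()
}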

Insertion using FORMAT

Using clickhouse-client, I am able to do insert requests like:

INSERT INTO mydb.events FORMAT CSV 8cd3e02c-83b1-41f0-8e29-c1681f03feb0,1a9cbce7-0788-430e-b3e7-15bbbea2ac27,3,0,0,0.000010,0.200000,,2017-02-27,1488203857,4,3,3,4,0,0,4,0,,,Unknown,Linux Unknown,0,0,[],,,0,0,0,,127.0.0.1,0,0,0,,0,,0,0,0,,,,,,,,,,,,,[]

But when I try to perform this request using your driver, I get a panic.

My code

package main

import (
	"database/sql"
	"fmt"
	"github.com/kshvakov/clickhouse"
	"log"
)

func main() {
	connect, err := sql.Open("clickhouse", "tcp://127.0.0.1:9000?debug=true")
	if err != nil {
		log.Fatal(err)
	}
	if err := connect.Ping(); err != nil {
		if exception, ok := err.(*clickhouse.Exception); ok {
			fmt.Printf("[%d] %s \n%s\n", exception.Code, exception.Message, exception.StackTrace)
		} else {
			fmt.Println(err)
		}
		return
	}
	tx, err := connect.Begin()
	if err != nil {
		log.Fatal(err)
	}
	_, err = tx.Exec(`INSERT INTO mydb.events FORMAT CSV 8cd3e02c-83b1-41f0-8e29-c1681f03feb0,1a9cbce7-0788-430e-b3e7-15bbbea2ac27,3,0,0,0.000010,0.200000,,2017-02-27,1488203857,4,3,3,4,0,0,4,0,,,Unknown,Linux Unknown,0,0,[],,,0,0,0,,127.0.0.1,0,0,0,,0,,0,0,0,,,,,,,,,,,,,[]`)
	if err != nil {
		log.Fatal(err)
	}
	err = tx.Commit()
	if err != nil {
		log.Fatal(err)
	}
}

Panic:

[clickhouse]host(s)=127.0.0.1:9000, database=default, username=default
[clickhouse][hello] -> Golang SQLDriver 1.1.54126
[clickhouse][hello] <- ClickHouse 1.1.54164 (Europe/Kiev)
[clickhouse]-> ping
[clickhouse]<- pong
[clickhouse][begin] tx=false, data=false
[clickhouse][prepare] INSERT INTO trackmanic2.events FORMAT CSV 8cd3e02c-83b1-41f0-8e29-c1681f03feb0,1a9cbce7-0788-430e-b3e7-15bbbea2ac27,3,0,0,0.000010,0.200000,,2017-02-27,1488203857,4,3,3,4,0,0,4,0,,,Unknown,Linux Unknown,0,0,[],,,0,0,0,,127.0.0.1,0,0,0,,0,,0,0,0,,,,,,,,,,,,,[]
[clickhouse][send query] INSERT INTO trackmanic2.events FORMAT CSV 8cd3e02c-83b1-41f0-8e29-c1681f03feb0,1a9cbce7-0788-430e-b3e7-15bbbea2ac27,3,0,0,0.000010,0.200000,,2017-02-27,1488203857,4,3,3,4,0,0,4,0,,,Unknown,Linux Unknown,0,0,[],,,0,0,0,,127.0.0.1,0,0,0,,0,,0,0,0,,,,,,,,,,,,,[] VALUES 
[clickhouse][stmt] close
panic: runtime error: index out of range

goroutine 1 [running]:
github.com/kshvakov/clickhouse.(*block).append(0xc4200d6000, 0x661ec0, 0x0, 0x0, 0x6493c8, 0x561ec0)
        /home/dmitry/golang/src/github.com/kshvakov/clickhouse/block.go:203 +0x17c8
github.com/kshvakov/clickhouse.(*stmt).execContext(0xc4200a4180, 0x632300, 0xc420010270, 0x661ec0, 0x0, 0x0, 0x1000000005288bf, 0x10175ce911b, 0x2, 0x103)
        /home/dmitry/golang/src/github.com/kshvakov/clickhouse/stmt.go:31 +0x60
github.com/kshvakov/clickhouse.(*stmt).ExecContext(0xc4200a4180, 0x632300, 0xc420010270, 0x661ec0, 0x0, 0x0, 0xc400000008, 0xc4200b6040, 0xc42004fb88, 0x4b97c5)
        /home/dmitry/golang/src/github.com/kshvakov/clickhouse/clickhouse_go18.go:38 +0x1aa
database/sql.ctxDriverStmtExec(0x632300, 0xc420010270, 0x632400, 0xc4200a4180, 0x661ec0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
        /usr/local/go/src/database/sql/ctxutil.go:66 +0x244
database/sql.resultFromStatement(0x632300, 0xc420010270, 0xc42004fd38, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
        /usr/local/go/src/database/sql/sql.go:1841 +0x193
database/sql.(*Tx).ExecContext(0xc4200c0000, 0x632300, 0xc420010270, 0x59469f, 0x103, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
        /usr/local/go/src/database/sql/sql.go:1710 +0x31f
database/sql.(*Tx).Exec(0xc4200c0000, 0x59469f, 0x103, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x58c828)
        /usr/local/go/src/database/sql/sql.go:1716 +0x85
main.main()
        /home/dmitry/golang/src/main/main.go:27 +0x341

How can I work around this problem?
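
The working inserts elsewhere in these reports all go through placeholder binding rather than an inline FORMAT clause, so one likely workaround is to split the CSV row in Go and bind each field. A sketch, with a hypothetical reduced column list because the full events schema is not shown here:

tx, err := connect.Begin()
if err != nil {
	log.Fatal(err)
}
// hypothetical subset of the events columns, for illustration only
stmt, err := tx.Prepare("INSERT INTO mydb.events (event_id, session_id, type, created_date) VALUES (?, ?, ?, ?)")
if err != nil {
	log.Fatal(err)
}
if _, err := stmt.Exec(
	"8cd3e02c-83b1-41f0-8e29-c1681f03feb0",
	"1a9cbce7-0788-430e-b3e7-15bbbea2ac27",
	3,
	"2017-02-27",
); err != nil {
	log.Fatal(err)
}
if err := tx.Commit(); err != nil {
	log.Fatal(err)
}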

V1.2+ breaks support for Enum8

Steps to reproduce: run this code with any 1.2.x version of the library:

package main

import (
	"database/sql"
	"fmt"
	"log"
	"time"

	_ "github.com/kshvakov/clickhouse"
)

const (
	AA = iota + 1
	BB
	CC
	DD
)

func main() {
	connect, err := sql.Open("clickhouse", "tcp://127.0.0.1:9000?debug=true")
	if err != nil {
		log.Fatal(err)
	}

	_, err = connect.Exec(`
		CREATE TABLE IF NOT EXISTS example (
			type Enum8('aa' = 1, 'bb' = 2, 'cc' = 3, 'dd' = 4),
			action_day   Date,
			action_time  DateTime
		) engine=Memory
	`)

	if err != nil {
		log.Fatal(err)
	}
	var (
		tx, _   = connect.Begin()
		stmt, _ = tx.Prepare("INSERT INTO example (type, action_day, action_time) VALUES (?, ?, ?)")
	)

	for i := 0; i < 10; i++ {
		if _, err := stmt.Exec(
			AA,
			time.Now(),
			time.Now(),
		); err != nil {
			log.Fatal(err)
		}
	}

	if err := tx.Commit(); err != nil {
		log.Fatal(err)
	}

	fmt.Println("SUCCESS!")
}

the stmt.Exec returns this error:

type (Enum8('aa' = 1, 'bb' = 2, 'cc' = 3, 'dd' = 4)): unexpected type int64

The same code works fine on version 1.1
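
As a hedged, untested workaround (the error complains specifically about int64, i.e. the untyped constant), it may be worth trying either the enum's string label or an explicitly sized integer:

// untested guesses, not verified against v1.2:
_, err = stmt.Exec("aa", time.Now(), time.Now())      // textual enum value
_, err = stmt.Exec(int16(AA), time.Now(), time.Now()) // explicitly sized integer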

Many thanks

Client times out when a string is inserted into a UInt64

I have this field on a table: ClientVersion UInt64. When inserting into the table I made a mistake and passed the number as a Go string without converting it. With that, the test timed out with the following stack trace:

testing.startAlarm.func1()
	/usr/local/Cellar/go/1.9.3/libexec/src/testing/testing.go:1145 +0xf9
created by time.goFunc
	/usr/local/Cellar/go/1.9.3/libexec/src/time/sleep.go:170 +0x44

goroutine 1 [chan receive, 1 minutes]:
testing.(*T).Run(0xc42017c0f0, 0x44c46d4, 0x24, 0x44d2f08, 0x407cb06)
	/usr/local/Cellar/go/1.9.3/libexec/src/testing/testing.go:790 +0x2fc
testing.runTests.func1(0xc42017c0f0)
	/usr/local/Cellar/go/1.9.3/libexec/src/testing/testing.go:1004 +0x64
testing.tRunner(0xc42017c0f0, 0xc42010bde0)
	/usr/local/Cellar/go/1.9.3/libexec/src/testing/testing.go:746 +0xd0
testing.runTests(0xc420197060, 0x472aca0, 0xc, 0xc, 0x2)
	/usr/local/Cellar/go/1.9.3/libexec/src/testing/testing.go:1002 +0x2d8
testing.(*M).Run(0xc420055f18, 0xc42010bf70)
	/usr/local/Cellar/go/1.9.3/libexec/src/testing/testing.go:921 +0x111
main.main()
	wrkr-infix/subprocessor/_test/_testmain.go:66 +0xdb

goroutine 6 [chan receive, 1 minutes]:
database/sql.(*DB).connectionOpener(0xc42016f720)
	/usr/local/Cellar/go/1.9.3/libexec/src/database/sql/sql.go:871 +0x53
created by database/sql.Open
	/usr/local/Cellar/go/1.9.3/libexec/src/database/sql/sql.go:609 +0x1ee

goroutine 7 [IO wait]:
internal/poll.runtime_pollWait(0x4b9ef70, 0x72, 0x0)
	/usr/local/Cellar/go/1.9.3/libexec/src/runtime/netpoll.go:173 +0x57
internal/poll.(*pollDesc).wait(0xc420178118, 0x72, 0xffffffffffffff00, 0x46f5340, 0x46f15b8)
	/usr/local/Cellar/go/1.9.3/libexec/src/internal/poll/fd_poll_runtime.go:85 +0xae
internal/poll.(*pollDesc).waitRead(0xc420178118, 0xc420608000, 0x400000, 0x400000)
	/usr/local/Cellar/go/1.9.3/libexec/src/internal/poll/fd_poll_runtime.go:90 +0x3d
internal/poll.(*FD).Read(0xc420178100, 0xc420608000, 0x400000, 0x400000, 0x0, 0x0, 0x0)
	/usr/local/Cellar/go/1.9.3/libexec/src/internal/poll/fd_unix.go:126 +0x18a
net.(*netFD).Read(0xc420178100, 0xc420608000, 0x400000, 0x400000, 0x4070265, 0x4, 0x3)
	/usr/local/Cellar/go/1.9.3/libexec/src/net/fd_unix.go:202 +0x52
net.(*conn).Read(0xc4201bc000, 0xc420608000, 0x400000, 0x400000, 0x0, 0x0, 0x0)
	/usr/local/Cellar/go/1.9.3/libexec/src/net/net.go:176 +0x6d
bufio.(*Reader).Read(0xc42017a0c0, 0xc4200ec550, 0x1, 0xa, 0xc420106f58, 0x4170952, 0xc420178100)
	/usr/local/Cellar/go/1.9.3/libexec/src/bufio/bufio.go:213 +0x30b
wrkr-infix/vendor/github.com/kshvakov/clickhouse.(*connect).Read(0xc42013a060, 0xc4200ec550, 0x1, 0xa, 0x41815bd, 0xc420178100, 0xc4201d6000)
	/Users/jsanz/Projects/goWorkspace/src/wrkr-infix/vendor/github.com/kshvakov/clickhouse/connect.go:95 +0x8e
wrkr-infix/vendor/github.com/kshvakov/clickhouse/lib/binary.(*Decoder).ReadByte(0xc4200ec540, 0x0, 0x0, 0x438d041)
	/Users/jsanz/Projects/goWorkspace/src/wrkr-infix/vendor/github.com/kshvakov/clickhouse/lib/binary/decoder.go:143 +0x54
encoding/binary.ReadUvarint(0x46f4200, 0xc4200ec540, 0x108, 0x1000, 0x108)
	/usr/local/Cellar/go/1.9.3/libexec/src/encoding/binary/varint.go:110 +0x7b
wrkr-infix/vendor/github.com/kshvakov/clickhouse/lib/binary.(*Decoder).Uvarint(0xc4200ec540, 0x44d2cd8, 0xc420107060, 0x0)
	/Users/jsanz/Projects/goWorkspace/src/wrkr-infix/vendor/github.com/kshvakov/clickhouse/lib/binary/decoder.go:29 +0x37
wrkr-infix/vendor/github.com/kshvakov/clickhouse.(*clickhouse).process(0xc42016c090, 0x0, 0x0)
	/Users/jsanz/Projects/goWorkspace/src/wrkr-infix/vendor/github.com/kshvakov/clickhouse/clickhouse.go:227 +0x44
wrkr-infix/vendor/github.com/kshvakov/clickhouse.(*clickhouse).Commit(0xc42016c090, 0x0, 0x0)
	/Users/jsanz/Projects/goWorkspace/src/wrkr-infix/vendor/github.com/kshvakov/clickhouse/clickhouse.go:149 +0x241
database/sql.(*Tx).Commit.func1()
	/usr/local/Cellar/go/1.9.3/libexec/src/database/sql/sql.go:1788 +0x3c
database/sql.withLock(0x46f5fc0, 0xc420102000, 0xc420107218)
	/usr/local/Cellar/go/1.9.3/libexec/src/database/sql/sql.go:2873 +0x65
database/sql.(*Tx).Commit(0xc420178000, 0x9, 0xbe9779e0228bc108)
	/usr/local/Cellar/go/1.9.3/libexec/src/database/sql/sql.go:1787 +0x119
wrkr-infix/subprocessor.TestDataUsageEventPerPacketProcessor(0xc42017c1e0)
	/Users/jsanz/Projects/goWorkspace/src/wrkr-infix/subprocessor/dataUsageEvent_test.go:41 +0x98f
testing.tRunner(0xc42017c1e0, 0x44d2f08)
	/usr/local/Cellar/go/1.9.3/libexec/src/testing/testing.go:746 +0xd0
created by testing.(*T).Run
	/usr/local/Cellar/go/1.9.3/libexec/src/testing/testing.go:789 +0x2de

goroutine 11 [select, 1 minutes]:
database/sql.(*Rows).awaitDone(0xc4200a6000, 0x46f87c0, 0xc42015a0c0, 0x0, 0x0)
	/usr/local/Cellar/go/1.9.3/libexec/src/database/sql/sql.go:2432 +0x107
created by database/sql.(*Rows).initContextClose
	/usr/local/Cellar/go/1.9.3/libexec/src/database/sql/sql.go:2420 +0x9c

goroutine 34 [chan receive]:
database/sql.(*Tx).awaitDone(0xc420178000)
	/usr/local/Cellar/go/1.9.3/libexec/src/database/sql/sql.go:1695 +0x4b
created by database/sql.(*DB).beginDC
	/usr/local/Cellar/go/1.9.3/libexec/src/database/sql/sql.go:1451 +0x216
FAIL	wrkr-infix/subprocessor	60.042s

When I converted the string to uint64 in Go, everything went smoothly. I am using version v1.3.1-1-g95e4a6a.
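
For completeness, the fix is simply to parse the value before binding it. A self-contained sketch of that conversion (the field name is from the report; the value is made up):

package main

import (
	"fmt"
	"strconv"
)

func main() {
	clientVersion := "54213" // previously passed to Exec as a raw string
	v, err := strconv.ParseUint(clientVersion, 10, 64)
	if err != nil {
		fmt.Println("not a valid UInt64:", err)
		return
	}
	// pass v (a uint64) to stmt.Exec for the ClientVersion UInt64 column
	fmt.Println(v)
}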

Expose progress metadata

It would be very nice if your driver somehow could expose the progress metadata of a query. I saw in the code that it already handles the necessary part of the ClickHouse protocol (and logs the data when run in debug mode).

I am not sure how the API can expose the data but my first thought was to use a context.WithValue() to provide an unbuffered channel of progress structs to the query executing functions. The driver can then check if the context specified such a channel and attempt to send the progress update to it (but not block) whenever it receives an update from the server.

Example:

progressChan := make(chan clickhouse.Progress)

go func() {
    for progress := range progressChan {
        fmt.Printf("Progress: %v / %v rows, %v bytes.\n", progress.Rows, progress.TotalRows, progress.Bytes)
    }
}()

ctx := context.WithValue(context.Background(), clickhouse.ProgressKey, progressChan)
conn.QueryContext(ctx, "SELECT foo FROM bar")

How to insert UUID

How do I insert a UUID in the standard representation 00000000-0000-0000-0000-000000000000 into a column of type FixedString(16)?

I ran into the curious behaviour that UUIDStringToNum() does not work as expected in this query:

    INSERT INTO table (
        date, uuid
    ) VALUES (
        ?, UUIDStringToNum(?)
    );

The Go code is as follows:

    _, err := stmt.Exec(time.Now(), "5d272320-e688-410e-bfba-03d43bb3099c")

The error message is: column uuid (FixedString(16)): too large value.
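
One way to satisfy a FixedString(16) column is to do the conversion client-side and pass the 16 raw bytes, instead of wrapping the placeholder in UUIDStringToNum(). A sketch using github.com/google/uuid (assuming the column stores the binary form of the UUID):

import (
	"database/sql"
	"time"

	"github.com/google/uuid"
)

// insertUUID assumes stmt was prepared as
// "INSERT INTO table (date, uuid) VALUES (?, ?)" without UUIDStringToNum.
func insertUUID(stmt *sql.Stmt) error {
	u, err := uuid.Parse("5d272320-e688-410e-bfba-03d43bb3099c")
	if err != nil {
		return err
	}
	// u is a [16]byte; u[:] is the 16-byte value FixedString(16) expects.
	_, err = stmt.Exec(time.Now(), u[:])
	return err
}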

Weird errors while selecting data

Table definition:

CREATE TABLE IF NOT EXISTS flamegraph (
                        timestamp Int64,
                        graph_type String,
                        cluster String,
                        id UInt64,
                        name String,
                        total UInt64,
                        value UInt64,
                        children_ids Array(UInt64),
                        date Date
                ) engine=MergeTree(date, (timestamp, graph_type, cluster, value, date), 8192)

Then I fill it out with data (several million queries).

Then I try to select data from it in a Go program:

	rows, err := connect.Query("SELECT total FROM flamegraph WHERE timestamp=" + ts + " AND id = " + idQuery + " AND cluster='" + cluster + "'")
	if err != nil {
		log.Fatal(err)
	}
	total := uint64(0)
	for rows.Next() {
		err = rows.Scan(&total)
		if err != nil {
			log.Fatal(err)
		}
	}

	minValue := uint64(float64(total) * removeLowestPct)
	minValueQuery := strconv.FormatUint(minValue, 10)

	rows, err = connect.Query("SELECT timestamp, graph_type, cluster, id, name, total, value, children_ids FROM flamegraph WHERE timestamp=" + ts + " AND cluster='" + cluster + "' AND value > " + minValueQuery)
	if err != nil {
		log.Fatal(err)
	}

This results in the following queries:

SELECT timestamp, graph_type, cluster, id, name, total, value, children_ids FROM flamegraph WHERE timestamp=1490791477 AND cluster='example' AND value > 4778

But each time I do that, I get the following output:

[clickhouse][receive packet] <- data: columns=8, rows=0
[clickhouse][receive packet] err: unhandled type abnormal
[clickhouse][stmt] close
[clickhouse][receive packet] <- data: columns=8, rows=0
[clickhouse][receive packet] err: unhandled type all
[clickhouse][stmt] close

It seems that 'all', 'abnormal', etc. are values of the name column returned by the select.

So far I have been unable to create a minimal test case for this.

Connection doesn't reopen after a timeout during ping

Steps to reproduce:

git clone https://gist.github.com/d56062af1332ff6322f74100dbcc8e63.git ./clickhouse-driver-timeout
cd ./clickhouse-driver-timeout
vagrant up --provision

error output

    bad_clickhouse_driver: [clickhouse][connect=2][connect] write error: write tcp 127.0.0.1:54556->127.0.0.1:9000: i/o timeout
    bad_clickhouse_driver: {"time":"2018-01-28T13:27:40Z","level":"warn","clickhouseHost":"localhost:9000","error":"driver: bad connection","message":"some Clickhouse hosts is down
"}
    bad_clickhouse_driver: {"time":"2018-01-28T13:27:40Z","level":"fatal","message":"All clickhouse hosts is Down"}
    bad_clickhouse_driver: exit status 1

The error happens here:
https://gist.github.com/Slach/d56062af1332ff6322f74100dbcc8e63#file-bad_ch_driver_close_connection-go-L99

I think the error occurs when the read/write timeout is exceeded.

Memory optimisations

Our application that uses this driver spends a lot of CPU time on GC. Looking into pprof output, I see that most of the allocated objects are created in insert() code branch. It would be nice to reduce the number of allocated objects and to have some benchmarks for that in place.

Array support for ColumnWriter

Current ColumnWriter interface:

type ColumnWriter interface {
	WriteDate(c int, v time.Time) error
	WriteDateTime(c int, v time.Time) error
	WriteUInt8(c int, v uint8) error
	WriteUInt16(c int, v uint16) error
	WriteUInt32(c int, v uint32) error
	WriteUInt64(c int, v uint64) error
	WriteFloat32(c int, v float32) error
	WriteFloat64(c int, v float64) error
	WriteBytes(c int, v []byte) error
	WriteString(c int, v string) error
	WriteFixedString(c int, v []byte) error
}

I want to be able to write Array(T). So far I was able to force ClickHouse to segfault while attempting to do this myself: ClickHouse/ClickHouse#887
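
For discussion, one hypothetical shape such an extension could take (this is not an existing API; element typing and nested arrays are the hard part):

// hypothetical additions, sketched for discussion only
type ArrayColumnWriter interface {
	ColumnWriter
	// WriteArray writes one Array(T) value to column c; v must be a slice
	// whose element type matches the column's element type.
	WriteArray(c int, v interface{}) error
	// WriteArrayUInt64 is a typed convenience for Array(UInt64) columns.
	WriteArrayUInt64(c int, v []uint64) error
}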

cc @vavrusa

Error parsing prepared SQL "field BETWEEN ? AND ?"

Use the following code to reproduce:

q := "SELECT MAX(postback_date) AS dateSince FROM appmetrika_postbacks_local WHERE app_id=? AND postback_date BETWEEN ? AND ?"
// dateSince and dateUntil are time.Time variables
// log is "github.com/rs/zerolog/log"
rows, err := db.Query(q, appId, dateSince, dateUntil)
if err != nil {
	log.Fatal().Str("q", q).Time("dateSince", dateSince).Time("dateUntil", dateUntil).Int("appId", appId).Err(err).Msg("PrepareDateRange Query Error")
}

Expected result

The query runs.

Got result

[clickhouse][connect=3][prepare] SELECT MAX(postback_date) AS dateSince FROM appmetrika_postbacks_local WHERE app_id=? AND postback_date BETWEEN ? AND ?
[clickhouse][connect=3][stmt] close

{"time":"2018-01-04T14:46:10Z","level":"fatal","q":"SELECT MAX(postback_date) AS dateSince FROM appmetrika_postbacks_local WHERE app_id=? AND postback_date BETWEEN ? AND ?","dateSince":"2017-10-18T00:00:00Z","dateUntil":"2017-11-17T23:59:59Z","appId":120429,"error":"sql: statement expects 1 inputs; got 3","message":"PrepareDateRange Query Error"}

Invalid memory address or nil pointer dereference

Hi. I got the following error while saving data to a ClickHouse Buffer table (sorry for the long lines).
I'm writing batches of 10K entries to the buffer. I guess the batch size could be the cause of the problem, but I'm not sure.

[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x458667]

goroutine 51 [running]:
landings-crawler/vendor/github.com/kshvakov/clickhouse/lib/writebuffer.(*WriteBuffer).Write(0xc4a3a5a8e0, 0x0, 0x15, 0x15, 0x0, 0x0, 0x118b)
        /landings-crawler/vendor/github.com/kshvakov/clickhouse/lib/writebuffer/buffer.go:31 +0x2f0
landings-crawler/vendor/github.com/kshvakov/clickhouse/lib/binary.(*Encoder).String(0xc4a3a5a940, 0x0, 0x15, 0x1, 0x1)
        /landings-crawler/vendor/github.com/kshvakov/clickhouse/lib/binary/encoder.go:109 +0xa9
landings-crawler/vendor/github.com/kshvakov/clickhouse/lib/column.(*String).Write(0xc4b5b0f900, 0xc4a3a5a940, 0x767500, 0xc4aef0df90, 0x1, 0x0)
        /landings-crawler/vendor/github.com/kshvakov/clickhouse/lib/column/string.go:20 +0x147
landings-crawler/vendor/github.com/kshvakov/clickhouse/lib/column.(*Nullable).WriteNull(0xc48c61b540, 0xc4a3a5a920, 0xc4a3a5a940, 0x767500, 0xc4aef0df90, 0x0, 0x782980)
        /landings-crawler/vendor/github.com/kshvakov/clickhouse/lib/column/nullable.go:63 +0xcc
landings-crawler/vendor/github.com/kshvakov/clickhouse/lib/data.(*Block).AppendRow(0xc4a1254960, 0xc4a32ac780, 0xc, 0xc, 0x1, 0xc4a32ac780)
        /landings-crawler/vendor/github.com/kshvakov/clickhouse/lib/data/block.go:115 +0x290
landings-crawler/vendor/github.com/kshvakov/clickhouse.(*stmt).execContext(0xc490e646f0, 0xa40640, 0xc420016028, 0xc4a32ac780, 0xc, 0xc, 0x1016621b840, 0x0, 0xc4b68e5f60, 0x41143d)
        /landings-crawler/vendor/github.com/kshvakov/clickhouse/stmt.go:46 +0x6b
landings-crawler/vendor/github.com/kshvakov/clickhouse.(*stmt).ExecContext(0xc490e646f0, 0xa40640, 0xc420016028, 0xc4a32ae000, 0xc, 0xc, 0xc46621b858, 0xa3e4a8, 0x0, 0x65a0d6)
        /landings-crawler/vendor/github.com/kshvakov/clickhouse/stmt.go:40 +0x1ac
database/sql.ctxDriverStmtExec(0xa40640, 0xc420016028, 0xa40880, 0xc490e646f0, 0xc4a32ae000, 0xc, 0xc, 0xc, 0xc, 0x0, ...)
        /usr/local/go/src/database/sql/ctxutil.go:66 +0x22e
database/sql.resultFromStatement(0xa40640, 0xc420016028, 0xa3fec0, 0xc420434750, 0xc4b5b0fac0, 0xc4b68e63a0, 0xc, 0xc, 0x0, 0x0, ...)
        /usr/local/go/src/database/sql/sql.go:2109 +0x184
database/sql.(*Stmt).ExecContext(0xc454bf4480, 0xa40640, 0xc420016028, 0xc4b68e63a0, 0xc, 0xc, 0x0, 0x0, 0x0, 0x0)
        /usr/local/go/src/database/sql/sql.go:2085 +0x223
database/sql.(*Stmt).Exec(0xc454bf4480, 0xc4b68e63a0, 0xc, 0xc, 0xa3f2c0, 0xc4a6b85920, 0x0, 0x0)
        /usr/local/go/src/database/sql/sql.go:2097 +0x65
landings-crawler/internal/savers.(*BundleSaver).saveAdverts(0xc4204e0180, 0xc4b2ff6000, 0x2710, 0x2710, 0x0, 0x0)
        /landings-crawler/internal/savers/bundle_saver.go:308 +0x57b

Problem with dates

I have a table with a field named REQUEST_DATE, which is a Date.

When I query this table using clickhouse client:

SELECT REQUEST_DATE, COUNT()
FROM WBEDS_AVAIL_VIEW
GROUP BY REQUEST_DATE

┌─REQUEST_DATE─┬──COUNT()─┐
│ 2018-01-07 │ 76178366 │
│ 2018-01-08 │ 25202906 │
└──────────────┴──────────┘

When I try to get this data from Go:

var data time.Time
var count int

rows, err := db.Query("SELECT REQUEST_DATE, COUNT() FROM DM_TRPLAN.WBEDS_AVAIL_VIEW GROUP BY REQUEST_DATE")
if err != nil {
	return err
}
defer rows.Close()

for rows.Next() {
	if err := rows.Scan(&data, &count); err != nil {
		return err
	}
	log.Printf("data: %v, count: %v", data, count)
}

This is the output:
2018/01/05 00:55:13 data: 2018-01-06 16:00:00 -0800 PST, count: 76178366
2018/01/05 00:55:13 data: 2018-01-07 16:00:00 -0800 PST, count: 25202906

Since this server is located near San Francisco, the driver takes the date and converts it to Pacific time (PST).

Is there any way to get the date as it is stored in the database?
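
Until the driver exposes a way to control this, one workaround suggested by the output above is to convert the scanned value back to UTC before formatting it: 2018-01-06 16:00:00 -0800 PST is 2018-01-07 00:00:00 UTC, so the stored Date can be recovered like this (sketch, inside the same loop):

for rows.Next() {
	if err := rows.Scan(&data, &count); err != nil {
		return err
	}
	day := data.UTC().Format("2006-01-02") // "2018-01-07", as stored
	log.Printf("data: %v, count: %v", day, count)
}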

Counting '?' placeholders in an INSERT after Prepare()

Even after the client has received the response to Prepare() from the server and knows how many arguments are needed to insert one row, it still checks that the number of arguments passed to Exec() matches the number of '?' placeholders in the INSERT string. This is confusing: when the number of arguments passed matches the placeholders, it demands a count equal to the number of columns in one row,
Can't execute statment (INSERT INTO events (uid, pid, event, geodata, created_at) FORMAT VALUES (?,?,?,?,?),(?,?,?,?,?),(?,?,?,?,?),(?,?,?,?,?)): block: expected 5 arguments (columns: uid, pid, event, geodata, created_at), got 20
and when it does not match, it demands a count equal to the number of '?' in the query:
Can't execute statment (INSERT INTO events (uid, pid, event, geodata, created_at) FORMAT VALUES (?,?,?,?,?),(?,?,?,?,?),(?,?,?,?,?),(?,?,?,?,?)): sql: expected 20 arguments, got 4

Support for nested structures in tables

I have a table with a nested structure:

CREATE TABLE DM_AVAIL
(
ID UInt64,
REQUEST_DATE Date,
CLIENT_ID UInt32,
REQUEST_ELEMENTS Nested
(
ELEM_ID UInt32,
ELEM_TYPE Enum8 ('C'=0,'A'=1,'Z'=2,'P'=3,'T'=4,'H'=5,'R'=6)
)
) ENGINE = MergeTree PARTITION BY REQUEST_DATE ORDER BY CLIENT_ID

I want to insert data into this table by using a prepared statement:

st1, err1 := tx.Prepare("INSERT INTO DM_AVAIL VALUES (?,?,?, [????????])")

And then:
params := ………
if _, err := st1.Exec(params...); err != nil { return err }

We also need to be able to read nested data back in queries.

Concurrent SELECT using one DB instance results in panic

When using one DB instance from database/sql without transactions, concurrent selects result in a panic while retrieving data from rows. A test case is included below (if you insert only one row into the test table instead of 2, the problem won't reproduce).

package main

import (
	"database/sql"
	"os"
	"os/signal"
	"syscall"
	"time"

	_ "github.com/kshvakov/clickhouse"
)

func main() {
	conn, err := sql.Open("clickhouse", "tcp://localhost:9000?compress=true&debug=true")
	if err != nil {
		panic(err)
	}

	PrepareData(conn)
	go SelectUI64(conn)
	go SelectStr(conn)

	done := make(chan os.Signal, 1)
	signal.Notify(done, syscall.SIGINT, syscall.SIGTERM)
	<-done
}

func PrepareData(conn *sql.DB) {
	if _, err := conn.Exec("DROP TABLE IF EXISTS bug;"); err != nil {
		panic(err)
	}

	if _, err := conn.Exec("CREATE TABLE bug(part Date, ui64 UInt64, str String) ENGINE = MergeTree(part, (part), 8192);"); err != nil {
		panic(err)
	}

	tx, err := conn.Begin()
	if err != nil {
		panic(err)
	}

	stmt, err := tx.Prepare("INSERT INTO bug(part, ui64, str) VALUES (@part, @ui64, @str)")
	if err != nil {
		panic(err)
	}

	for i := 0; i < 2; i++ {
		if _, err = stmt.Exec(
			sql.Named("part", time.Now()),
			sql.Named("ui64", 42),
			sql.Named("str", "string"),
		); err != nil {
			panic(err)
		}
	}

	if err = tx.Commit(); err != nil {
		panic(err)
	}
}

func SelectUI64(conn *sql.DB) {
	for {
		stmt, err := conn.Prepare("SELECT ui64 FROM bug")
		if err != nil {
			panic(err)
		}

		rows, err := stmt.Query()
		if err != nil {
			panic(err)
		}

		if rows.Next() {
			var ui64 uint64
			if err = rows.Scan(&ui64); err != nil {
				panic(err)
			}
		}

		rows.Close()
	}
}

func SelectStr(conn *sql.DB) {
	for {
		stmt, err := conn.Prepare("SELECT str FROM bug")
		if err != nil {
			panic(err)
		}

		rows, err := stmt.Query()
		if err != nil {
			panic(err)
		}

		if rows.Next() {
			var str string
			if err = rows.Scan(&str); err != nil {
				panic(err)
			}
		}

		rows.Close()
	}
}

Prepare and Nullable types

Hello!
When I try to insert using tx.Prepare into a query with Nullable columns, I get this error:

code: 50, message: Unknown type

If you change the prepare in the clickhouse_nullable test to insert multiple rows, you will see this bug.

Thanks.

Error when running go get

....\kshvakov\clickhouse\clickhouse_read_block.go:12: ch.readTimeout undefined (type *clickhouse has no field or method readTimeout)
....\kshvakov\clickhouse\clickhouse_read_block.go:13: ch.writeTimeout undefined (type *clickhouse has no field or method writeTimeout)
....\kshvakov\clickhouse\clickhouse_read_block.go:15: ch.ServerInfo undefined (type *clickhouse has no field or method ServerInfo)
....\kshvakov\clickhouse\clickhouse_read_block.go:16: ch.decoder undefined (type *clickhouse has no field or method decoder)
....\kshvakov\clickhouse\clickhouse_read_block.go:20: ch.compress undefined (type *clickhouse has no field or method compress)
....\kshvakov\clickhouse\clickhouse_read_block.go:24: ch.ServerInfo undefined (type *clickhouse has no field or method ServerInfo)
....\kshvakov\clickhouse\clickhouse_read_block.go:24: ch.decoder undefined (type *clickhouse has no field or method decoder)
....\kshvakov\clickhouse\clickhouse_read_meta.go:12: ch.decoder undefined (type *clickhouse has no field or method decoder)
....\kshvakov\clickhouse\clickhouse_write_block.go:11: ch.Lock undefined (type *clickhouse has no field or method Lock)
....\kshvakov\clickhouse\clickhouse_write_block.go:12: ch.Unlock undefined (type *clickhouse has no field or method Unlock)
....\kshvakov\clickhouse\clickhouse_write_block.go:12: too many errors

Big uint64 in Go 1.8 and Go 1.9

In Go 1.8, ColumnConverter/ValueConverter is used to convert big uint64 into a slice of bytes: https://github.com/kshvakov/clickhouse/blob/d09d76760d7867fcbea3655d027de907017b5411/value_converter.go#L47-L53 I'm not entirely sure how it should work, but it does not work for me for query arguments. Do you have a test for that?

In Go 1.9, NamedValueChecker is used, and it works. I assume there is a bug in Go documentation? Because driver.Value by definition can't be uint64, and in CheckNamedValue you just pass it: https://github.com/kshvakov/clickhouse/blob/d09d76760d7867fcbea3655d027de907017b5411/clickhouse.go#L168-L169

The problem I see is that the code compiles with both Go 1.8 and Go 1.9, but then fails at runtime when compiled with 1.8. I propose either fixing it for 1.8 or requiring 1.9+. The latter option would also allow you to remove some deprecated optional interfaces.

uint64 support

Currently, when you try to insert a uint64 value you get:

"sql: converting Exec argument $2 type: uint64 values with high bit set are not supported"

There is a possible workaround for that: golang/go#9373 (comment)

This driver should implement a custom ConvertValue that can accept uint64 values.
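
A minimal sketch of the kind of converter meant here (assuming the driver may pass uint64 straight through to its own column writers; as the previous issue notes, uint64 is technically outside the documented driver.Value set, so that is the driver's call to make):

import "database/sql/driver"

type uint64Converter struct{}

func (uint64Converter) ConvertValue(v interface{}) (driver.Value, error) {
	if u, ok := v.(uint64); ok {
		// Pass the value through untouched so the full 64-bit range
		// survives without an int64 round-trip.
		return u, nil
	}
	return driver.DefaultParameterConverter.ConvertValue(v)
}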

Round Robin load-balancing not working

I just tried the round-robin load balancing and found out that it is not scheduling connections according to the round-robin algorithm.

With this example DSN:

tcp://clickhouse-000:9000?username=&debug=true&alt_hosts=clickhouse-001:9000,clickhouse-002:9000

  1. Concurrent requests should be balanced

I just tried sending many concurrent requests and all requests get sent to clickhouse-000.

  2. Unhealthy connections should not be retried after an error
  • send a query, it hits clickhouse-000
  • bring down clickhouse-000
  • send a query, it hits clickhouse-000 and fails
  • send a query, it hits clickhouse-000, fails again
    ...

statement expects 0 inputs; got 1 while using sql.NamedArgs

In Go 1.8, the database/sql package started to support named arguments (sql.Named).

I just tried to add

        rows, err = connect.Query("SELECT * FROM example WHERE os_id = @os_id", sql.Named("os_id", 10))
        if err != nil {
                log.Fatal(err)
        }

to the simple.go example and got the error

[clickhouse]host(s)=127.0.0.1:9000, database=default, username=default
[clickhouse][hello] -> Golang SQLDriver 1.1.54126
[clickhouse][hello] <- ClickHouse 1.1.54180 (Etc/UTC)
[clickhouse][prepare] SELECT * FROM example WHERE os_id = @os_id
[clickhouse][stmt] close
2017/03/22 15:27:22 sql: statement expects 0 inputs; got 1
$> go version
go version go1.8 linux/amd64

Question - Multiple Inserts within transaction

Hello @kshvakov, I was wondering if there is a way to do multiple inserts into separate collections/tables within the same transaction?

eg.

tx, err := connect.Begin()
...

stmt, err := tx.Prepare("INSERT INTO table1 ....")
_, err = stmt.Exec(...)
...

stmt, err := tx.Prepare("INSERT INTO table2 ....")
_, err = stmt.Exec(...)
...

err := tx.Commit()
...

on the second Prepare statement I get the error:

data request has already been prepared in transaction

thanks in advance for any help :)
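
Given that error, the workaround that fits the driver's one-insert-batch-per-transaction model is to use a separate transaction per table. A sketch with hypothetical single-column tables (v1 and v2 stand in for the row values):

insert := func(query string, args ...interface{}) error {
	tx, err := connect.Begin()
	if err != nil {
		return err
	}
	stmt, err := tx.Prepare(query)
	if err != nil {
		tx.Rollback()
		return err
	}
	if _, err := stmt.Exec(args...); err != nil {
		tx.Rollback()
		return err
	}
	return tx.Commit()
}

// one transaction (and therefore one insert batch) per table
if err := insert("INSERT INTO table1 (value) VALUES (?)", v1); err != nil {
	log.Fatal(err)
}
if err := insert("INSERT INTO table2 (value) VALUES (?)", v2); err != nil {
	log.Fatal(err)
}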

Rollback upon error leaves connection in invalid state

Repro:

package main

import (
  "database/sql"
  "github.com/kshvakov/clickhouse"
  "log"
  "time"
)

func main() {
  connect, err := sql.Open("clickhouse", "tcp://127.0.0.1:9000?debug=true")
  _, err = connect.Exec(`
    CREATE TABLE IF NOT EXISTS example (
      country_code FixedString(2),
      os_id        UInt8,
      browser_id   UInt8,
      categories   Array(Int16),
      action_day   Date,
      action_time  DateTime
    ) engine=Memory
  `)

  if err != nil {
    log.Fatal(err)
  }
  var (
    tx, _   = connect.Begin()
    stmt, _ = tx.Prepare("INSERT INTO example (country_code, os_id, browser_id, categories, action_day, action_time) VALUES (?, ?, ?, ?, ?, ?)")
  )

  _, err = stmt.Exec(
    "RU",
    1.1,
    100,
    clickhouse.Array([]int16{1, 2, 3}),
    time.Now(),
    time.Now(),
  )

  if err != nil {
    tx.Rollback()
    log.Printf("exepcted error: %v", err)
  } else {
    log.Panic("OK?")
  }

  tx, err = connect.Begin()
  if err != nil {
    log.Fatal(err)
  }

  stmt, err = tx.Prepare("INSERT INTO example (country_code, os_id, browser_id, categories, action_day, action_time) VALUES (?, ?, ?, ?, ?, ?)")
  if err != nil {
    log.Fatal(err)
  }
  for i := 0; i < 100; i++ {
    if _, err := stmt.Exec(
      "RU",
      10+i,
      100+i,
      clickhouse.Array([]int16{1, 2, 3}),
      time.Now(),
      time.Now(),
    ); err != nil {
      log.Fatal(err)
    }
  }

  if err := tx.Commit(); err != nil {
    log.Fatal(err)
  }
}

Running the above results in a completely broken connection:

[clickhouse]host(s)=127.0.0.1:9000, database=default, username=default                                                                
[clickhouse][dial] secure=false, skip_verify=true, strategy=random, ident=1, server=0 -> 127.0.0.1:9000                               
[clickhouse][connect=1][hello] -> Golang SQLDriver 1.1.54213       
[clickhouse][connect=1][hello] <- ClickHouse 1.1.54343 (America/Los_Angeles)                                                          
[clickhouse][connect=1][prepare]                                   
                CREATE TABLE IF NOT EXISTS example (               
                        country_code FixedString(2),               
                        os_id        UInt8,                        
                        browser_id   UInt8,                        
                        categories   Array(Int16),                 
                        action_day   Date,                         
                        action_time  DateTime                      
                ) engine=Memory  

[clickhouse][connect=1][send query]                                
                CREATE TABLE IF NOT EXISTS example (               
                        country_code FixedString(2),               
                        os_id        UInt8,                        
                        browser_id   UInt8,                        
                        categories   Array(Int16),                 
                        action_day   Date,                         
                        action_time  DateTime                      
                ) engine=Memory  

[clickhouse][connect=1][process] <- end of stream                  
[clickhouse][connect=1][stmt] close                                
[clickhouse][connect=1][begin] tx=false, data=false                
[clickhouse][connect=1][prepare] INSERT INTO example (country_code, os_id, browser_id, categories, action_day, action_time) VALUES (?, ?, ?, ?, ?, ?)
[clickhouse][connect=1][send query] INSERT INTO example (country_code, os_id, browser_id, categories, action_day, action_time) VALUES 
[clickhouse][connect=1][read meta] <- data: packet=1, columns=6, rows=0                                                               
[clickhouse][connect=1][rollback] tx=true, data=true               
[clickhouse][connect=1][stmt] close                                
2018/02/13 21:40:23 exepcted error: os_id (UInt8): unexpected type float64                                                            
[clickhouse][connect=1][begin] tx=false, data=false                
[clickhouse][connect=1][prepare] INSERT INTO example (country_code, os_id, browser_id, categories, action_day, action_time) VALUES (?, ?, ?, ?, ?, ?)
[clickhouse][connect=1][send query] INSERT INTO example (country_code, os_id, browser_id, categories, action_day, action_time) VALUES 
[clickhouse][connect=1][read meta] <- exception                    
2018/02/13 21:40:23 code: 101, message: Unexpected packet Query received from client                                                  
exit status 1

Problem with long-running insert queries

I noticed some strange behaviour with long-running insert queries.
For example, query INSERT INTO Table2 SELECT * FROM Table1
Table2's engine is ReplicatedCollapsingMergeTree, Table1's engine is MergeTree.
If Table1 is not large, everything is ok.
If Table1 (as in my real case) is large enough (over 20 million rows), I get duplicate inserts (some rows are inserted multiple times).
I found out that the problem is timeouts.
When I set the timeouts (both read and write) to 3600 seconds, the query works fine.
Maybe it would be a good idea not to use timeouts for queries like 'INSERT INTO ... SELECT FROM ...', because execution may take a long time.
I also noticed that if I set the parameters write_timeout=0 and read_timeout=0, the query doesn't work at all.
So it is impossible to set unlimited timeouts.
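
In the meantime the timeouts can simply be raised through the same DSN parameters (values are in seconds); a sketch:

db, err := sql.Open("clickhouse",
	"tcp://127.0.0.1:9000?read_timeout=3600&write_timeout=3600")
if err != nil {
	log.Fatal(err)
}
// long-running server-side copy; per the report above, the longer
// timeouts avoid the duplicate inserts
if _, err := db.Exec("INSERT INTO Table2 SELECT * FROM Table1"); err != nil {
	log.Fatal(err)
}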

numInput returns wrong number if query contains '@'.

I run the following statement:

INSERT INTO `dbr_people` (`id`,`name`,`email`) VALUES (258,'jonathan','[email protected]')

Expected result: it completes successfully.
Actual result: it fails with the error: sql: expected 1 arguments, got 0

It occurs because numInput counts @uservoice (part of the email address) as a named variable and returns 1 as the number of input parameters. :(

Support multiple insert

Statements like INSERT INTO table(a,b,c) VALUES (?,?,?), (?,?,?)... are currently not supported; I can try to fix it...

Panic when calling rs.ColumnTypes()

Hi,

A call to ColumnTypes() on a *sql.Rows returns this:

2017/12/27 23:01:15 http: panic serving 172.17.0.1:36840: runtime error: index out of range
goroutine 19 [running]:
net/http.(*conn).serve.func1(0xc42016e000)
/usr/local/go/src/net/http/server.go:1697 +0xd0
panic(0x702340, 0x919830)
/usr/local/go/src/runtime/panic.go:491 +0x283
github.com/kshvakov/clickhouse.(*rows).ColumnTypeScanType(0xc420178120, 0x0, 0xc420178120, 0x7f5e7c337398)
/go/src/github.com/kshvakov/clickhouse/rows.go:35 +0x71
database/sql.rowsColumnInfoSetup(0x8ee5e0, 0xc420178120, 0xc42016a130, 0x3, 0x3)
/usr/local/go/src/database/sql/sql.go:2635 +0x17b
database/sql.(*Rows).ColumnTypes(0xc42016a100, 0x0, 0x0, 0x0, 0x0, 0x0)
/usr/local/go/src/database/sql/sql.go:2563 +0x14f
query-service/utils.GetResultAsMap(0xc42016a100, 0x77b8b0, 0xc42016a100, 0x5, 0xc42016a100)
/go/src/query-service/utils/queries.go:187 +0x6e
query-service/utils.ExecuteQueryAndGetResultAsMap(0xc42016e3c0, 0xc420186000, 0x7664f9, 0x5, 0x0, 0x0, 0x0, 0x0)
/go/src/query-service/utils/queries.go:177 +0xc3

Joan.
