snowflake-jdbc's Issues

SnowflakeChunkDownloader.currentMemoryUsage should measure the memory used across the entire JVM

SnowflakeChunkDownloader.currentMemoryUsage is used to control the memory used by the chunk-downloading process. It is possible to have multiple instances of SnowflakeChunkDownloader working on different queries. Would it make sense for currentMemoryUsage to track the memory used by all instances combined, so that we can cap the memory consumption of the whole JVM?
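
For illustration, a minimal sketch of a JVM-wide budget; the class and method names are hypothetical and not part of the driver:

import java.util.concurrent.atomic.AtomicLong;

// Hypothetical helper: one static counter shared by all downloader instances,
// so the cap applies to the whole JVM rather than to a single query.
class SharedChunkMemoryBudget
{
  private static final AtomicLong CURRENT_USAGE = new AtomicLong(0);

  // Try to reserve the given number of bytes; return false if the JVM-wide cap would be exceeded.
  static boolean tryReserve(long bytes, long capBytes)
  {
    long prev;
    do
    {
      prev = CURRENT_USAGE.get();
      if (prev + bytes > capBytes)
      {
        return false; // caller waits and retries, as each downloader already does per instance
      }
    }
    while (!CURRENT_USAGE.compareAndSet(prev, prev + bytes));
    return true;
  }

  static void release(long bytes)
  {
    CURRENT_USAGE.addAndGet(-bytes);
  }
}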

JDBC memory leak?

My query is hanging after lots of queries have been run (via a db stress-testing utility). The Snowflake server seems ok, so I'm guessing it's on the JDBC driver end.

I'm seeing the following in my app's console:

T1 waiting for 0.108s: currentMemoryUsage in MB: 1517, needed: 32, nextD: 0, nextC: 0 
T1 waiting for 0.22s: currentMemoryUsage in MB: 1517, needed: 32, nextD: 0, nextC: 0 
T1 waiting for 0.444s: currentMemoryUsage in MB: 1517, needed: 32, nextD: 0, nextC: 0 
T1 waiting for 0.949s: currentMemoryUsage in MB: 1517, needed: 32, nextD: 0, nextC: 0 
T1 waiting for 2.034s: currentMemoryUsage in MB: 1517, needed: 32, nextD: 0, nextC: 0 
T1 waiting for 4.351s: currentMemoryUsage in MB: 1517, needed: 32, nextD: 0, nextC: 0 
T1 waiting for 9.23s: currentMemoryUsage in MB: 1517, needed: 32, nextD: 0, nextC: 0 
T1 waiting for 19.629s: currentMemoryUsage in MB: 1517, needed: 32, nextD: 0, nextC: 0 
T1 waiting for 30.856s: currentMemoryUsage in MB: 1517, needed: 32, nextD: 0, nextC: 0 

Any idea what this means and why it's occurring after running many queries?

I'm using version 3.6.27 of the driver, and rolled back to 3.6.19 and saw the same behavior.

Thanks.

How to set ResultSet to be ColumnCaseInsensitive

As SF treats all unquoted identifiers (e.g. Column) as case-insensitive, does it make sense to set SFSession.rsColumnCaseInsensitive to "true" by default?

This may be controversial, but the application should at least be able to change the default value.

I tried to change it using a connection parameter like:

props.setProperty("JDBC_RS_COLUMN_CASE_INSENSITIVE", "TRUE");
Connection conn = DriverManager.getConnection(jdbcUrl, props);

But it does not seem to work.

Looking at the code, SessionUtil.updateSfDriverParamValues takes in the connection parameters from the login response and sets the session properties accordingly. It completely ignores the connection parameters passed in from the getConnection() call. It would make more sense to merge the two sets of parameters and let a user-provided parameter override the same one from the login response (see the sketch below). Also, it is not clear to me how SFSession.sessionParametersMap is used.
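
A rough sketch of the merge order being suggested, in plain Java (loginResponseParams and userProvidedParams are hypothetical names for the two sources):

// Start from the server-provided session parameters from the login response,
// then let any parameter supplied to getConnection() override the same key.
Map<String, Object> effective = new java.util.HashMap<>(loginResponseParams);
effective.putAll(userProvidedParams); // user-provided values win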

-thanks

null Boolean parameter results in: Data type not supported for binding: -3.

snowflake-jdbc version: 3.6.28

The query uses an input parameter for an optional filter. The error only occurs when checking whether the optional filter parameter is null; if it is not checked against null, or the input is not null, everything works as expected. Example scenario:
select d from data_table d where ((:filter is null) or (:filter = true and name_field is not null) or (:filter = false and name_field is null))

Resulting error:

2019-03-21 08:44:05.290  WARN 10052 --- [qtp156855528-25] o.h.engine.jdbc.spi.SqlExceptionHelper   : SQL Error: 200018, SQLState: 0A000
2019-03-21 08:44:05.290 ERROR 10052 --- [qtp156855528-25] o.h.engine.jdbc.spi.SqlExceptionHelper   : Data type not supported for binding: -3.

Caused by: net.snowflake.client.jdbc.SnowflakeSQLException: Data type not supported for binding: -3.
	at net.snowflake.client.jdbc.SnowflakeType.javaTypeToSFType(SnowflakeType.java:250) ~[snowflake-jdbc-3.6.28.jar:3.6.28]
	at net.snowflake.client.jdbc.SnowflakeUtil.javaTypeToSFTypeString(SnowflakeUtil.java:262) ~[snowflake-jdbc-3.6.28.jar:3.6.28]
	at net.snowflake.client.jdbc.SnowflakePreparedStatementV1.setNull(SnowflakePreparedStatementV1.java:179) ~[snowflake-jdbc-3.6.28.jar:3.6.28]
	at com.zaxxer.hikari.pool.HikariProxyPreparedStatement.setNull(HikariProxyPreparedStatement.java) ~[HikariCP-3.2.0.jar:na]

The same code/query works without error against an H2 test DB.
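
For reference, -3 is java.sql.Types.VARBINARY. A standalone JDBC-level sketch that reproduces the same binding failure (hypothetical table/column names matching the query above; the ORM presumably resolves the null Boolean to this type code):

try (Connection conn = DriverManager.getConnection(jdbcUrl, props);
     PreparedStatement ps = conn.prepareStatement(
         "select * from data_table where (? is null) or (? = true and name_field is not null)"))
{
  ps.setNull(1, java.sql.Types.VARBINARY); // throws here: Data type not supported for binding: -3.
  ps.setBoolean(2, true);
  ps.executeQuery();
}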

Unable to use Boolean parameters

I get an error message when trying to use a Java boolean parameter in a JDBC query.

using clojure JDBC:
(jdbc/query snowflake-db ["select count(1) as num_true where true = ?" true])

Throws an exception with message:

net.snowflake.client.jdbc.SnowflakeSQLException: Data type not supported for binding: Object type: class java.lang.Boolean.

Looks like this function is missing a case to handle booleans:

public void setObject(int parameterIndex, Object x) throws SQLException

There is a setBoolean method; it's just not called from anywhere.
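
A possible workaround sketch at the plain-JDBC level, while setObject lacks a Boolean case:

PreparedStatement ps = conn.prepareStatement("select count(1) as num_true where true = ?");
// ps.setObject(1, Boolean.TRUE); // throws: Data type not supported for binding: Object type: class java.lang.Boolean.
ps.setBoolean(1, true);           // the type-specific setter binds the value as BOOLEAN
ResultSet rs = ps.executeQuery();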

Misleading error when the driver can't create or locate cache directory

We were trying to use the Snowflake JDBC JAR, version 3.6.17, on a CentOS machine.

The actual root cause of the error was:

java.lang.ExceptionInInitializerError
	at net.snowflake.client.jdbc.SnowflakeConnectionV1.initSessionProperties(SnowflakeConnectionV1.java:245)
	at net.snowflake.client.jdbc.SnowflakeConnectionV1.<init>(SnowflakeConnectionV1.java:144)
	at net.snowflake.client.jdbc.SnowflakeDriver.connect(SnowflakeDriver.java:331)
	at com.zaxxer.hikari.util.DriverDataSource.getConnection(DriverDataSource.java:120)
	at com.zaxxer.hikari.util.DriverDataSource.getConnection(DriverDataSource.java:126)
	at com.zaxxer.hikari.pool.PoolBase.newConnection(PoolBase.java:375)
	at com.zaxxer.hikari.pool.PoolBase.newPoolEntry(PoolBase.java:204)
	at com.zaxxer.hikari.pool.HikariPool.createPoolEntry(HikariPool.java:459)
	at com.zaxxer.hikari.pool.HikariPool.checkFailFast(HikariPool.java:533)
	at com.zaxxer.hikari.pool.HikariPool.<init>(HikariPool.java:114)
	at com.zaxxer.hikari.HikariDataSource.<init>(HikariDataSource.java:82)
	at com.snaplogic.snap.api.sql.JdbcDataSourceRegistryImpl$HikariDataSourceLoader.getHikariDataSource(JdbcDataSourceRegistryImpl.java:486)
	at com.snaplogic.snap.api.sql.JdbcDataSourceRegistryImpl$HikariDataSourceLoader.load(JdbcDataSourceRegistryImpl.java:421)
	at com.snaplogic.snap.api.sql.JdbcDataSourceRegistryImpl$HikariDataSourceLoader.load(JdbcDataSourceRegistryImpl.java:401)
	at com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3542)
	at com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2323)
	at com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2286)
	at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2201)
	at com.google.common.cache.LocalCache.get(LocalCache.java:3953)
	at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3957)
	at com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4875)
	at com.snaplogic.snap.api.sql.JdbcDataSourceRegistryImpl.getConnection(JdbcDataSourceRegistryImpl.java:148)
	at com.snaplogic.snap.api.sql.accounts.SnowflakeDatabaseAccount.connect(SnowflakeDatabaseAccount.java:181)
	at com.snaplogic.snap.api.sql.accounts.SnowflakeDatabaseAccount.validate(SnowflakeDatabaseAccount.java:162)
	at com.snaplogic.snap.api.sql.accounts.SnowflakeDatabaseAccount.validate(SnowflakeDatabaseAccount.java:50)
	at com.snaplogic.cc.util.AccountUtil.connect(AccountUtil.java:72)
	at com.snaplogic.cc.account.validate.AccountValidator.validate(AccountValidator.java:82)
	at com.snaplogic.cc.service.AccountServiceImpl.validate(AccountServiceImpl.java:98)
	at com.snaplogic.cc.jaxrs.resource.AccountResource.validate(AccountResource.java:82)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:497)
	at org.jboss.resteasy.core.MethodInjectorImpl.invoke(MethodInjectorImpl.java:139)
	at org.jboss.resteasy.core.ResourceMethodInvoker.invokeOnTarget(ResourceMethodInvoker.java:295)
	at org.jboss.resteasy.core.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:249)
	at org.jboss.resteasy.core.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:236)
	at org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:376)
	at org.jboss.resteasy.core.SynchronousDispatcher.invokePropagateNotFound(SynchronousDispatcher.java:237)
	at org.jboss.resteasy.plugins.server.servlet.ServletContainerDispatcher.service(ServletContainerDispatcher.java:225)
	at org.jboss.resteasy.plugins.server.servlet.FilterDispatcher.doFilter(FilterDispatcher.java:62)
	at com.snaplogic.common.web.RequestFilter.access$001(RequestFilter.java:69)
	at com.snaplogic.common.web.RequestFilter$1.call(RequestFilter.java:172)
	at com.snaplogic.common.web.RequestFilter$1.call(RequestFilter.java:166)
	at com.snaplogic.common.metrics.RequestMetrics.timedCallback(RequestMetrics.java:63)
	at com.snaplogic.common.metrics.MetricsManager.injectMetrics(MetricsManager.java:86)
	at com.snaplogic.common.web.RequestFilter.doFilter(RequestFilter.java:166)
	at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1621)
	at net.bull.javamelody.MonitoringFilter.doFilter(MonitoringFilter.java:198)
	at net.bull.javamelody.MonitoringFilter.doFilter(MonitoringFilter.java:176)
	at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1621)
	at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:541)
	at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
	at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:524)
	at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
	at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:190)
	at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1584)
	at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:188)
	at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1228)
	at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:168)
	at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:481)
	at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1553)
	at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:166)
	at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1130)
	at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
	at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:118)
	at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:524)
	at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
	at org.eclipse.jetty.server.Server.handle(Server.java:564)
	at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:318)
	at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
	at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:279)
	at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:112)
	at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:124)
	at org.eclipse.jetty.util.thread.Invocable.invokePreferred(Invocable.java:122)
	at org.eclipse.jetty.util.thread.strategy.ExecutingExecutionStrategy.invoke(ExecutingExecutionStrategy.java:58)
	at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:201)
	at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:133)
	at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:672)
	at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:590)
	at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: Failed to locate or create the cache directory: /home/snapuser/.cache/snowflake
	at net.snowflake.client.core.FileCacheManager.build(FileCacheManager.java:147)
	at net.snowflake.client.core.SessionUtil.<clinit>(SessionUtil.java:191)
	... 81 more

But the error we were actually getting in our logs was:

java.lang.NoClassDefFoundError: Could not initialize class net.snowflake.client.core.SessionUtil
	at net.snowflake.client.jdbc.SnowflakeConnectionV1.initSessionProperties(SnowflakeConnectionV1.java:245)
	at net.snowflake.client.jdbc.SnowflakeConnectionV1.<init>(SnowflakeConnectionV1.java:144)
	at net.snowflake.client.jdbc.SnowflakeDriver.connect(SnowflakeDriver.java:331)
	at com.zaxxer.hikari.util.DriverDataSource.getConnection(DriverDataSource.java:120)
	at com.zaxxer.hikari.util.DriverDataSource.getConnection(DriverDataSource.java:126)
	at com.zaxxer.hikari.pool.PoolBase.newConnection(PoolBase.java:375)
	at com.zaxxer.hikari.pool.PoolBase.newPoolEntry(PoolBase.java:204)
	at com.zaxxer.hikari.pool.HikariPool.createPoolEntry(HikariPool.java:459)
	at com.zaxxer.hikari.pool.HikariPool.checkFailFast(HikariPool.java:533)
	at com.zaxxer.hikari.pool.HikariPool.<init>(HikariPool.java:114)
	at com.zaxxer.hikari.HikariDataSource.<init>(HikariDataSource.java:82)
	at com.snaplogic.snap.api.sql.JdbcDataSourceRegistryImpl$HikariDataSourceLoader.getHikariDataSource(JdbcDataSourceRegistryImpl.java:486)
	at com.snaplogic.snap.api.sql.JdbcDataSourceRegistryImpl$HikariDataSourceLoader.load(JdbcDataSourceRegistryImpl.java:421)
	at com.snaplogic.snap.api.sql.JdbcDataSourceRegistryImpl$HikariDataSourceLoader.load(JdbcDataSourceRegistryImpl.java:401)
	at com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3542)
	at com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2323)
	at com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2286)
	at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2201)
	at com.google.common.cache.LocalCache.get(LocalCache.java:3953)
	at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3957)
	at com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4875)
	at com.snaplogic.snap.api.sql.JdbcDataSourceRegistryImpl.getConnection(JdbcDataSourceRegistryImpl.java:148)
	at com.snaplogic.snap.api.sql.JdbcDataSourceRegistryImpl.retryGetConnection(JdbcDataSourceRegistryImpl.java:239)
	at com.snaplogic.snap.api.sql.JdbcDataSourceRegistryImpl.getConnection(JdbcDataSourceRegistryImpl.java:194)
	at com.snaplogic.snap.api.sql.JdbcDataSourceRegistryImpl.retryGetConnection(JdbcDataSourceRegistryImpl.java:239)
	at com.snaplogic.snap.api.sql.JdbcDataSourceRegistryImpl.getConnection(JdbcDataSourceRegistryImpl.java:194)
	at com.snaplogic.snap.api.sql.accounts.SnowflakeDatabaseAccount.connect(SnowflakeDatabaseAccount.java:181)
	at com.snaplogic.snap.api.sql.accounts.SnowflakeDatabaseAccount.validate(SnowflakeDatabaseAccount.java:162)
	at com.snaplogic.snap.api.sql.accounts.SnowflakeDatabaseAccount.validate(SnowflakeDatabaseAccount.java:50)
	at com.snaplogic.cc.util.AccountUtil.connect(AccountUtil.java:72)
	at com.snaplogic.cc.account.validate.AccountValidator.validate(AccountValidator.java:82)
	at com.snaplogic.cc.service.AccountServiceImpl.validate(AccountServiceImpl.java:98)
	at com.snaplogic.cc.jaxrs.resource.AccountResource.validate(AccountResource.java:82)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:497)
	at org.jboss.resteasy.core.MethodInjectorImpl.invoke(MethodInjectorImpl.java:139)
	at org.jboss.resteasy.core.ResourceMethodInvoker.invokeOnTarget(ResourceMethodInvoker.java:295)
	at org.jboss.resteasy.core.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:249)
	at org.jboss.resteasy.core.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:236)
	at org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:376)
	at org.jboss.resteasy.core.SynchronousDispatcher.invokePropagateNotFound(SynchronousDispatcher.java:237)
	at org.jboss.resteasy.plugins.server.servlet.ServletContainerDispatcher.service(ServletContainerDispatcher.java:225)
	at org.jboss.resteasy.plugins.server.servlet.FilterDispatcher.doFilter(FilterDispatcher.java:62)
	at com.snaplogic.common.web.RequestFilter.access$001(RequestFilter.java:69)
	at com.snaplogic.common.web.RequestFilter$1.call(RequestFilter.java:172)
	at com.snaplogic.common.web.RequestFilter$1.call(RequestFilter.java:166)
	at com.snaplogic.common.metrics.RequestMetrics.timedCallback(RequestMetrics.java:63)
	at com.snaplogic.common.metrics.MetricsManager.injectMetrics(MetricsManager.java:86)
	at com.snaplogic.common.web.RequestFilter.doFilter(RequestFilter.java:166)
	at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1621)
	at net.bull.javamelody.MonitoringFilter.doFilter(MonitoringFilter.java:198)
	at net.bull.javamelody.MonitoringFilter.doFilter(MonitoringFilter.java:176)
	at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1621)
	at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:541)
	at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
	at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:524)
	at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
	at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:190)
	at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1584)
	at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:188)
	at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1228)
	at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:168)
	at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:481)
	at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1553)
	at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:166)
	at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1130)
	at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
	at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:118)
	at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:524)
	at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
	at org.eclipse.jetty.server.Server.handle(Server.java:564)
	at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:318)
	at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
	at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:279)
	at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:112)
	at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:124)
	at org.eclipse.jetty.util.thread.Invocable.invokePreferred(Invocable.java:122)
	at org.eclipse.jetty.util.thread.strategy.ExecutingExecutionStrategy.invoke(ExecutingExecutionStrategy.java:58)
	at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:201)
	at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:133)
	at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:672)
	at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:590)
	at java.lang.Thread.run(Thread.java:745)

I had to customize the Snowflake JDBC code to surface the actual root cause. I suggest the Snowflake JDBC code shouldn't suppress the underlying error.

Error encountered when downloading a result chunk

Hi,

I am using "snowflake-jdbc-3.6.8.jar" and when I try to send a "select *" query against Snowflake on a big table I am getting "net.snowflake.client.jdbc.SnowflakeSQLException: JDBC driver internal error: JDBC driver encountered communication error. Message: Error encountered when downloading a result chunk: HTTP status=404.., queue size before clearing it out: 1".

I understand this is a chunk download timeout. Can I somehow increase this timeout?

BTW, I have noticed that the .NET connector had the same issue:
#snowflakedb/snowflake-connector-net#9

Thanks,
Or.

Error converting Numeric(20,0) to Long

Hello,
I'm working with some tables that have a NUMERIC(20,0) field, and when trying to fetch a row containing a number greater than the maximum Long value I get the following error:

Cannot convert value in the driver from type:-5 to type:LONG, value=9999999999999999999. [SQL State=0A000, DB Errorcode=200038]

To create and query the data I'm currently using SQL Workbench and the Snowflake JDBC driver.
Here's an example of how to reproduce the aforementioned issue:

CREATE TABLE "Schema"."LONG_ISSUE" (
  "numeric" NUMERIC(20,0)
);

insert into "Schema"."LONG_ISSUE" ("numeric") values (9999999999999999999);

select * from "Schema"."LONG_ISSUE";
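
Until the driver handles this, a client-side workaround sketch is to read the column as BigDecimal rather than long, since 9999999999999999999 exceeds Long.MAX_VALUE (9223372036854775807):

try (Statement stmt = conn.createStatement();
     ResultSet rs = stmt.executeQuery("select \"numeric\" from \"Schema\".\"LONG_ISSUE\""))
{
  while (rs.next())
  {
    java.math.BigDecimal value = rs.getBigDecimal(1); // covers the full NUMBER(20,0) range
    // long asLong = rs.getLong(1);                   // overflows and triggers the 200038 error above
  }
}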

Multi statement execution

I see that the SnowflakeStatementV1 class has the required methods (addBatch, executeBatch) to provide multi-statement execution. However, this class has protected access. Is there any plan to make it public so that a SQL statement can be cast to SnowflakeStatementV1?

ArrayUtils not included in package

When trying to connect with the Snowflake JDBC driver, I get the following exception: java.lang.NoClassDefFoundError: net/snowflake/client/jdbc/internal/apache/commons/lang3/ArrayUtils

This can be fixed in pom.xml by also including ArrayUtils when bundling org.apache.commons:commons-lang3 into the self-contained JAR.

slf4j configuration is broken due to shade relocation

It seems slf4j logging is broken due to the shade relocation process.

When trying to use Logback with slf4j, I always get the following error:

    SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
    SLF4J: Defaulting to no-operation (NOP) logger implementation
    SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.

Looking into the cause: the shade relocation process rewrote this constant (around line 47 of the shaded net.snowflake.client.jdbc.internal.org.slf4j.LoggerFactory):

private static String STATIC_LOGGER_BINDER_PATH =
    "net/snowflake/client/jdbc/internal/org/slf4j/impl/StaticLoggerBinder.class";

This should be "org/slf4j/impl/StaticLoggerBinder.class" to match the slf4j's init process.

Configuration of on_error Copy option

According to the Snowflake docs, the COPY command supports the following list of values for the on_error option: https://docs.snowflake.net/manuals/sql-reference/sql/copy-into-table.html#copy-options

Currently on_error='continue' is hard-coded in the COPY command and cannot be configured:
"COPY INTO \"" + stage.getId() + "\" FROM '" + remoteStage + "' on_error='continue'"
https://github.com/snowflakedb/snowflake-jdbc/blob/master/src/main/java/net/snowflake/client/loader/ProcessQueue.java#L118

Please add support for passing this value as a parameter.
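
For illustration, the requested change could look roughly like this; onErrorOption is a hypothetical loader property, not an existing one:

// Instead of the hard-coded on_error='continue', take the value from loader configuration:
String copySql = "COPY INTO \"" + stage.getId() + "\" FROM '" + remoteStage
    + "' on_error='" + onErrorOption + "'";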

COPY INTO file format type configuration

Hello,

According to the documentation, there are several file format types available in Snowflake:
CSV | JSON | AVRO | ORC | PARQUET | XML
https://docs.snowflake.net/manuals/sql-reference/sql/create-file-format.html
In the COPY INTO command the format value is not included, so the default CSV is used: https://github.com/snowflakedb/snowflake-jdbc/blob/master/src/main/java/net/snowflake/client/loader/ProcessQueue.java#L115
Please provide the ability to configure the file format on the user side.
The problem case: when uploading a large JSON record to the table, the CSV format seems to have an internal size limitation and rejects the record with an error, although the JSON format works fine.

Turn off prepare() on certain prepared statements

We have observed that a lot of our queries cannot be "prepared". It would be nice to have a config parameter or an SF-specific query hint to turn off the prepare() call when we know a query will never be successfully prepared.

Named transactions

Is there a way to create a named transaction? E.g. what one can achieve by using:

BEGIN TRANSACTION name foo;

Cannot Execute `SHOW TRANSACTIONS` Statement

Hi,

I'm having some problems executing a SQL statement in the newest version of this driver. For some context, I'm running the SnowflakeDriver through Clojure's clojure.java.jdbc library. Despite my best attempts, all code paths lead me to an error - either because of a PreparedStatement or a lack of updates:

Before updating to version 3.3.2, I was using version 3.2.4 and this code path was working with clojure.java.jdbc/query. After upgrading, I can't seem to find a path that works.

Calling clojure.java.jdbc/query:

net.snowflake.client.jdbc.SnowflakeSQLException, "Statement provided can not be prepared." (SQLState 0A000, error code 7)
net.snowflake.client.jdbc.SnowflakeSQLException: Statement provided can not be prepared.
        at net.snowflake.client.jdbc.SnowflakeUtil.checkErrorAndThrowException(SnowflakeUtil.java:102)
        at net.snowflake.client.core.StmtUtil.execute(StmtUtil.java:410)
        at net.snowflake.client.core.SFStatement.executeHelper(SFStatement.java:371)
        at net.snowflake.client.core.SFStatement.executeQueryInternal(SFStatement.java:195)
        at net.snowflake.client.core.SFStatement.executeQuery(SFStatement.java:147)
        at net.snowflake.client.core.SFStatement.describe(SFStatement.java:160)
        at net.snowflake.client.jdbc.SnowflakePreparedStatementV1.<init>(SnowflakePreparedStatementV1.java:105)
        at net.snowflake.client.jdbc.SnowflakeConnectionV1.prepareStatement(SnowflakeConnectionV1.java:1104)
        at clojure.java.jdbc$prepare_statement.invoke(jdbc.clj:495)
        at clojure.java.jdbc$db_query_with_resultset.invoke(jdbc.clj:846)
        at clojure.java.jdbc$query.invoke(jdbc.clj:874)
        at clojure.java.jdbc$query.invoke(jdbc.clj:867)

Calling clojure.java.jdbc/execute!:

net.snowflake.client.jdbc.SnowflakeSQLException: Statement provided can not be prepared.
        at net.snowflake.client.jdbc.SnowflakeUtil.checkErrorAndThrowException(SnowflakeUtil.java:102)
        at net.snowflake.client.core.StmtUtil.execute(StmtUtil.java:410)
        at net.snowflake.client.core.SFStatement.executeHelper(SFStatement.java:371)
        at net.snowflake.client.core.SFStatement.executeQueryInternal(SFStatement.java:195)
        at net.snowflake.client.core.SFStatement.executeQuery(SFStatement.java:147)
        at net.snowflake.client.core.SFStatement.describe(SFStatement.java:160)
        at net.snowflake.client.jdbc.SnowflakePreparedStatementV1.<init>(SnowflakePreparedStatementV1.java:105)
        at net.snowflake.client.jdbc.SnowflakeConnectionV1.prepareStatement(SnowflakeConnectionV1.java:1104)
        at clojure.java.jdbc$prepare_statement.invoke(jdbc.clj:495)
        at clojure.java.jdbc$db_do_prepared.invoke(jdbc.clj:813)
        at clojure.java.jdbc$execute_BANG_$execute_helper__14426.invoke(jdbc.clj:961)
        at clojure.java.jdbc$execute_BANG_.invoke(jdbc.clj:963)
        at clojure.java.jdbc$execute_BANG_.invoke(jdbc.clj:954)

Calling clojure.java.jdbc/db-do-command:

java.sql.BatchUpdateException, "Statement 'SHOW TRANSACTIONS' cannot be executed using current API." (SQLState 0A000, error code 200042); net.snowflake.client.jdbc.SnowflakeSQLException
, "Statement 'SHOW TRANSACTIONS' cannot be executed using current API." (SQLState 0A000, error code 200042)
java.sql.BatchUpdateException: Statement 'SHOW TRANSACTIONS' cannot be executed using current API.
        at net.snowflake.client.jdbc.SnowflakeStatementV1.executeBatchInternal(SnowflakeStatementV1.java:331)
        at net.snowflake.client.jdbc.SnowflakeStatementV1.executeBatch(SnowflakeStatementV1.java:290)
        at clojure.java.jdbc$execute_batch.invoke(jdbc.clj:426)
        at clojure.java.jdbc$db_do_commands$fn__14366.invoke(jdbc.clj:721)
        at clojure.java.jdbc$db_transaction_STAR_.invoke(jdbc.clj:613)
        at clojure.java.jdbc$db_transaction_STAR_.invoke(jdbc.clj:598)
        at clojure.java.jdbc$db_do_commands.invoke(jdbc.clj:720)
        at clojure.java.jdbc$db_do_commands.invoke(jdbc.clj:711)

Can you help me figure out how I can upgrade my driver and execute this query?
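
For reference, a sketch of a path that avoids both the prepare() call and the batch API, assuming SHOW TRANSACTIONS is accepted through a plain, non-prepared Statement (untested):

try (Statement stmt = connection.createStatement();
     ResultSet rs = stmt.executeQuery("SHOW TRANSACTIONS"))
{
  while (rs.next())
  {
    System.out.println(rs.getString(1));
  }
}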

getTableTypes reports wrong table types

Snowflake has table types such as BASE TABLE, EXTERNAL TABLE and VIEW. The JDBC driver tells us otherwise.

public ResultSet getTableTypes() throws SQLException
  {
    logger.debug("public ResultSet getTableTypes()");

    Statement statement = connection.createStatement();

    // TODO: We should really get the list of table types from GS
    return new SnowflakeDatabaseMetaDataResultSet(
            Arrays.asList("TABLE_TYPE"),
            Arrays.asList("TEXT"),
            Arrays.asList(Types.VARCHAR),
            new Object[][]
            {
              {
                "TABLE"
              },
              {
                "VIEW"
              }
            }, statement);
  }

JDBC Database Metadata incorrect

Using Snowflake JDBC 3.2.7 and the Squirrel SQL client as an example of general JDBC metadata access.

Schemas, tables and columns are duplicated.

[Screenshot: snowflake jdbc squirrel-1]

[Screenshot: snowflake jdbc squirrel-2]

Cannot connect to snowflake from Elastic Beanstalk instance, but works locally.

I have a local setup that connects to Snowflake just fine from a local development machine, but once I've pushed the code to Elastic Beanstalk I get the following stack trace:

Exception in thread "C3P0PooledConnectionPoolManager[identityToken->1br2rud9y6t5ufxo841tq|198d594d]-HelperThread-#2" Exception in thread "C3P0PooledConnectionPoolManager[identityToken->1br2rud9y6t5ufxo841tq|198d594d]-HelperThread-#0" java.lang.NoClassDefFoundError: Could not initialize class net.snowflake.client.core.SFTrustManager
        at net.snowflake.client.core.HttpUtil.buildHttpClient(HttpUtil.java:102)
        at net.snowflake.client.core.HttpUtil.initHttpClient(HttpUtil.java:172)
        at net.snowflake.client.core.SFSession.open(SFSession.java:353)
        at net.snowflake.client.jdbc.SnowflakeConnectionV1.<init>(SnowflakeConnectionV1.java:143)
        at net.snowflake.client.jdbc.SnowflakeDriver.connect(SnowflakeDriver.java:333)
        at com.mchange.v2.c3p0.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:175)
        at com.mchange.v2.c3p0.WrapperConnectionPoolDataSource.getPooledConnection(WrapperConnectionPoolDataSource.java:220)
        at com.mchange.v2.c3p0.WrapperConnectionPoolDataSource.getPooledConnection(WrapperConnectionPoolDataSource.java:206)
        at com.mchange.v2.c3p0.impl.C3P0PooledConnectionPool$1PooledConnectionResourcePoolManager.acquireResource(C3P0PooledConnectionPool.java:203)
        at com.mchange.v2.resourcepool.BasicResourcePool.doAcquire(BasicResourcePool.java:1138)
        at com.mchange.v2.resourcepool.BasicResourcePool.doAcquireAndDecrementPendingAcquiresWithinLockOnSuccess(BasicResourcePool.java:1125)
        at com.mchange.v2.resourcepool.BasicResourcePool.access$700(BasicResourcePool.java:44)
        at com.mchange.v2.resourcepool.BasicResourcePool$ScatteredAcquireTask.run(BasicResourcePool.java:1870)
        at com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread.run(ThreadPoolAsynchronousRunner.java:696)
java.lang.ExceptionInInitializerError
        at net.snowflake.client.core.HttpUtil.buildHttpClient(HttpUtil.java:102)
        at net.snowflake.client.core.HttpUtil.initHttpClient(HttpUtil.java:172)
        at net.snowflake.client.core.SFSession.open(SFSession.java:353)
        at net.snowflake.client.jdbc.SnowflakeConnectionV1.<init>(SnowflakeConnectionV1.java:143)
        at net.snowflake.client.jdbc.SnowflakeDriver.connect(SnowflakeDriver.java:333)
        at com.mchange.v2.c3p0.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:175)
        at com.mchange.v2.c3p0.WrapperConnectionPoolDataSource.getPooledConnection(WrapperConnectionPoolDataSource.java:220)
        at com.mchange.v2.c3p0.WrapperConnectionPoolDataSource.getPooledConnection(WrapperConnectionPoolDataSource.java:206)
        at com.mchange.v2.c3p0.impl.C3P0PooledConnectionPool$1PooledConnectionResourcePoolManager.acquireResource(C3P0PooledConnectionPool.java:203)
        at com.mchange.v2.resourcepool.BasicResourcePool.doAcquire(BasicResourcePool.java:1138)
        at com.mchange.v2.resourcepool.BasicResourcePool.doAcquireAndDecrementPendingAcquiresWithinLockOnSuccess(BasicResourcePool.java:1125)
        at com.mchange.v2.resourcepool.BasicResourcePool.access$700(BasicResourcePool.java:44)
        at com.mchange.v2.resourcepool.BasicResourcePool$ScatteredAcquireTask.run(BasicResourcePool.java:1870)
        at com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread.run(ThreadPoolAsynchronousRunner.java:696)
Caused by: java.lang.RuntimeException: Failed to touch the cache file: /usr/share/tomcat8/.cache/snowflake/ocsp_response_cache.json
        at net.snowflake.client.core.FileCacheManager.build(FileCacheManager.java:168)
        at net.snowflake.client.core.SFTrustManager.<clinit>(SFTrustManager.java:163)
        ... 14 more
Exception in thread "C3P0PooledConnectionPoolManager[identityToken->1br2rud9y6t5ufxo841tq|198d594d]-HelperThread-#1" java.lang.NoClassDefFoundError: Could not initialize class net.snowflake.client.core.SFTrustManager
        at net.snowflake.client.core.HttpUtil.buildHttpClient(HttpUtil.java:102)
        at net.snowflake.client.core.HttpUtil.initHttpClient(HttpUtil.java:172)
        at net.snowflake.client.core.SFSession.open(SFSession.java:353)
        at net.snowflake.client.jdbc.SnowflakeConnectionV1.<init>(SnowflakeConnectionV1.java:143)
        at net.snowflake.client.jdbc.SnowflakeDriver.connect(SnowflakeDriver.java:333)
        at com.mchange.v2.c3p0.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:175)
        at com.mchange.v2.c3p0.WrapperConnectionPoolDataSource.getPooledConnection(WrapperConnectionPoolDataSource.java:220)
        at com.mchange.v2.c3p0.WrapperConnectionPoolDataSource.getPooledConnection(WrapperConnectionPoolDataSource.java:206)
        at com.mchange.v2.c3p0.impl.C3P0PooledConnectionPool$1PooledConnectionResourcePoolManager.acquireResource(C3P0PooledConnectionPool.java:203)
        at com.mchange.v2.resourcepool.BasicResourcePool.doAcquire(BasicResourcePool.java:1138)
        at com.mchange.v2.resourcepool.BasicResourcePool.doAcquireAndDecrementPendingAcquiresWithinLockOnSuccess(BasicResourcePool.java:1125)
        at com.mchange.v2.resourcepool.BasicResourcePool.access$700(BasicResourcePool.java:44)
        at com.mchange.v2.resourcepool.BasicResourcePool$ScatteredAcquireTask.run(BasicResourcePool.java:1870)
        at com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread.run(ThreadPoolAsynchronousRunner.java:696)

I've tried updating to the latest version of snowflake-jdbc (3.6.12), but it's still not working. Is there something I need to set up on Elastic Beanstalk to get this working?

Please advise on what needs to be done.

Is the Connection object thread-safe?

We'd like to:

  • start a transaction on a connection (connection.setAutoCommit(false))
  • issue several statements, each on its own thread, reusing the same java.sql.Connection
  • commit a transaction (connection.commit())

Can we do that and rely on it to be thread-safe?
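
Concretely, the pattern we have in mind looks roughly like this (sketch only; exception handling omitted and the statements collection is hypothetical):

connection.setAutoCommit(false);

ExecutorService pool = Executors.newFixedThreadPool(4);
List<Future<?>> pending = new ArrayList<>();
for (String sql : statements)
{
  pending.add(pool.submit(() -> {
    try (Statement s = connection.createStatement()) // every task shares the same Connection
    {
      s.execute(sql);
    }
    return null;
  }));
}
for (Future<?> f : pending)
{
  f.get(); // wait for every statement before committing
}
connection.commit();
pool.shutdown();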

[Loader]: Not dropping unselected columns after creating temp table.

https://github.com/snowflakedb/snowflake-jdbc/blob/master/src/main/java/net/snowflake/client/loader/ProcessQueue.java#L135

For the scenario below:

allColumns : MYSTRING, MYTEXT, MYDATETIME, MYTIMESTAMP, MYTIMESTAMP_LTZ,
[List]
selectedColumns: "MYSTRING","MYTEXT","MYTIMESTAMP_LTZ"
[String]

So selectedColumns.contains(col) will be true for the input MYTIMESTAMP, but that is not expected here (see the illustration below).

Expected: use a List instead of a String (getColumnsAsString) for the comparison as well.
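
A standalone illustration of why the string-based check misfires:

// MYTIMESTAMP is a substring of MYTIMESTAMP_LTZ, so contains() reports a false positive.
String selectedColumns = "\"MYSTRING\",\"MYTEXT\",\"MYTIMESTAMP_LTZ\"";
System.out.println(selectedColumns.contains("MYTIMESTAMP")); // true, although the column was not selected

// Comparing against a List of column names behaves as expected.
List<String> selected = Arrays.asList("MYSTRING", "MYTEXT", "MYTIMESTAMP_LTZ");
System.out.println(selected.contains("MYTIMESTAMP"));        // false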

Data type not supported for binding: 2005

Hello Team,

I am trying to copy data from Teradata to Snowflake using net.snowflake.client.jdbc.SnowflakeDriver. When I do, I get the error below:
Executor: Exception in task 0.0 in stage 1.0 (TID 1) net.snowflake.client.jdbc.SnowflakeSQLException: Data type not supported for binding: 2,005. at net.snowflake.client.jdbc.SnowflakeType.javaTypeToSFType(SnowflakeType.java:255)

When I debugged it, the column causing this issue is a VARCHAR column, and it is nullable.
outputDataset.write.mode(SaveMode.Append).jdbc(url = target.fullURL, table = tableName, connectionProperties = target.jdbcProperties)

Internally it calls the JdbcUtils.saveTable function, in which the following variable is set:
val nullTypes: Array[Int] = df.schema.fields.map { field => getJdbcType(field.dataType, dialect).jdbcNullType }
Here, all VARCHAR columns are considered Text and converted to java.sql.Types.CLOB:

def getCommonJDBCType(dt: DataType): Option[JdbcType] = {
  dt match {
    case IntegerType => Option(JdbcType("INTEGER", java.sql.Types.INTEGER))
    case LongType => Option(JdbcType("BIGINT", java.sql.Types.BIGINT))
    case DoubleType => Option(JdbcType("DOUBLE PRECISION", java.sql.Types.DOUBLE))
    case FloatType => Option(JdbcType("REAL", java.sql.Types.FLOAT))
    case ShortType => Option(JdbcType("INTEGER", java.sql.Types.SMALLINT))
    case ByteType => Option(JdbcType("BYTE", java.sql.Types.TINYINT))
    case BooleanType => Option(JdbcType("BIT(1)", java.sql.Types.BIT))
    case StringType => Option(JdbcType("TEXT", java.sql.Types.CLOB))
    case BinaryType => Option(JdbcType("BLOB", java.sql.Types.BLOB))
    case TimestampType => Option(JdbcType("TIMESTAMP", java.sql.Types.TIMESTAMP))
    case DateType => Option(JdbcType("DATE", java.sql.Types.DATE))
    case t: DecimalType => Option(JdbcType(s"DECIMAL(${t.precision},${t.scale})", java.sql.Types.DECIMAL))
    case _ => None
  }
}

The value of java.sql.Types.CLOB is 2005: public final static int CLOB = 2005;

But the following method in class SnowflakePreparedStatementV1 does not support the CLOB data type:

@Override
public void setNull(int parameterIndex, int sqlType) throws SQLException
{
  logger.log(Level.FINER,
             "setNull(int parameterIndex, int sqlType) throws SQLException");

  Map<String, Object> binding = new HashMap<String, Object>();
  binding.put("value", null);
  binding.put("type", SnowflakeUtil.javaTypeToSFType(sqlType));
  parameterBindings.put(String.valueOf(parameterIndex), binding);
}

public static SnowflakeType javaTypeToSFType(int javaType) throws SnowflakeSQLException
{
  switch (javaType)
  {
    case Types.INTEGER:
    case Types.BIGINT:
    case Types.DECIMAL:
    case Types.NUMERIC:
    case Types.SMALLINT:
    case Types.TINYINT:
      return FIXED;

    case Types.CHAR:
    case Types.VARCHAR:
      return TEXT;

    case Types.BINARY:
      return BINARY;

    case Types.FLOAT:
    case Types.DOUBLE:
      return REAL;

    case Types.DATE:
      return DATE;

    case Types.TIME:
      return TIME;

    case Types.TIMESTAMP:
      return TIMESTAMP;

    case Types.BOOLEAN:
      return BOOLEAN;

    case Types.NULL:
      return ANY;

    default:
      throw new SnowflakeSQLException(SqlState.FEATURE_NOT_SUPPORTED,
                                      ErrorCode.DATA_TYPE_NOT_SUPPORTED
                                          .getMessageCode(),
                                      javaType);
  }
}
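
Note that 2005 is exactly java.sql.Types.CLOB, which falls through to the default branch above. For hand-written JDBC code, a workaround sketch is to bind string-typed nulls with a supported type code (this does not help the Spark path, which chooses the code itself):

// ps.setNull(parameterIndex, java.sql.Types.CLOB);  // 2005, rejected as DATA_TYPE_NOT_SUPPORTED
ps.setNull(parameterIndex, java.sql.Types.VARCHAR);  // accepted; maps to TEXT in javaTypeToSFType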

JDBC driver does not close result sets when connection is closed

The JDBC documentation states that Connection.close()

Releases this Connection object's database and JDBC resources immediately instead of waiting for them to be automatically released.

See https://docs.oracle.com/javase/8/docs/api/java/sql/Connection.html#close--

The Snowflake JDBC driver does correctly close a connection's statements, but does NOT close their result sets.

To reproduce, take the Snowflake JDBC sample app and change the end of main() to read:

// resultSet.close();
// statement.close();
connection.close();

System.out.println("Is resultSet closed? " + resultSet.isClosed());
System.out.println("Is statement closed? " + statement.isClosed());

Output is:

Is resultSet closed? false
Is statement closed? true

Output should be:

Is resultSet closed? true
Is statement closed? true
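
Until the driver closes result sets on Connection.close(), closing them explicitly, e.g. with try-with-resources, avoids relying on that behavior (sketch with placeholder URL and query):

try (Connection connection = DriverManager.getConnection(url, props);
     Statement statement = connection.createStatement();
     ResultSet resultSet = statement.executeQuery("select 1"))
{
  while (resultSet.next())
  {
    // process rows
  }
} // resultSet, statement and connection are all closed here, in reverse order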

Can't parse '2018-10-01 06:00:00.0' as timestamp with format 'YYYY-MM-DD HH24:MI:SS.FF9 TZH:TZM'

After SNOW-61862 we cannot insert data into Snowflake using IBM DataStage.

Passing the JDBC attribute
TIMESTAMP_TYPE_MAPPING=TIMESTAMP_NTZ

and setting the session parameters
ALTER SESSION SET CLIENT_TIMESTAMP_TYPE_MAPPING = TIMESTAMP_NTZ;
ALTER SESSION SET TIMESTAMP_INPUT_FORMAT = 'YYYY-MM-DD HH24:MI:SS.FF9';

do not help.

The connector encountered a Java exception:
net.snowflake.client.jdbc.SnowflakeSQLException: Can't parse '2018-10-01 06:00:00.0' as timestamp with format 'YYYY-MM-DD HH24:MI:SS.FF9 TZH:TZM'
at net.snowflake.client.jdbc.SnowflakeUtil.checkErrorAndThrowExceptionSub(SnowflakeUtil.java:137)
at net.snowflake.client.jdbc.SnowflakeUtil.checkErrorAndThrowException(SnowflakeUtil.java:62)
at net.snowflake.client.core.StmtUtil.pollForOutput(StmtUtil.java:501)
at net.snowflake.client.core.StmtUtil.execute(StmtUtil.java:368)
at net.snowflake.client.core.SFStatement.executeHelper(SFStatement.java:478)
at net.snowflake.client.core.SFStatement.executeQueryInternal(SFStatement.java:234)
at net.snowflake.client.core.SFStatement.executeQuery(SFStatement.java:173)
at net.snowflake.client.core.SFStatement.execute(SFStatement.java:670)
at net.snowflake.client.jdbc.SnowflakeStatementV1.executeUpdateInternal(SnowflakeStatementV1.java:123)
at net.snowflake.client.jdbc.SnowflakePreparedStatementV1.executeBatch(SnowflakePreparedStatementV1.java:941)
at com.ibm.is.cc.snowflake.SnowflakeRecordDataSetConsumer.executeBatchStatements(SnowflakeRecordDataSetConsumer.java:3474)
at com.ibm.is.cc.snowflake.SnowflakeRecordDataSetConsumer.executeStatements(SnowflakeRecordDataSetConsumer.java:2823)
at com.ibm.is.cc.snowflake.SnowflakeBigBufferRecordDataSetConsumer.consumeBigBuffer(SnowflakeBigBufferRecordDataSetConsumer.java:713)

Connection timeout when fetching result sets

Select * from table: fetches the first chunk of data (600 records), and then I get "Connection timeout" while fetching the next chunk of data.

If I do Select * from table limit 1200, it works fine without any timeouts.
So I broke the whole thing down into 2 steps:
rowcount = select count(*) from table
Select * from table limit rowcount

Any idea why we are getting timeouts while fetching relatively small datasets?
I am using snowflake-jdbc version: 3.6.15

SF_OCSP_RESPONSE_CACHE_DIR cannot be set via connection string

According to the Snowflake documentation, any session parameter should be able to be set via connection string. However, a search for SF_OCSP_RESPONSE_CACHE_DIR in this repository shows that it is not parsed from the connection string. It is only read from the environment in SFTrustManager.java.

I would expect it'd be parsed in updateSfDriverParamValues in SessionUtil.java, where other session parameters are set (such as CLIENT_SESSION_KEEP_ALIVE). This would be an incredibly useful feature to have, and highly preferable over setting an environment variable. Thanks!
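
What the request amounts to, sketched below; the connection-property form is hypothetical and not currently supported, only the environment variable works:

Properties props = new Properties();
// Desired: pass the cache directory like any other session parameter.
props.put("SF_OCSP_RESPONSE_CACHE_DIR", "/var/cache/snowflake");
Connection conn = DriverManager.getConnection(jdbcUrl, props);

// Today the only mechanism is the environment variable read by SFTrustManager:
//   export SF_OCSP_RESPONSE_CACHE_DIR=/var/cache/snowflake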

Enable compression for the query results payload.

When I profile my application, it appears that almost 100% of my app time is spent downloading the rather large JSON files from blob storage that contain the result rows from my query. Do these need to be plain text? Could an option be added to compress them? JSON payloads are very bloated and benefit a lot from compression.

Even on my corporate LAN the query time might be 100 ms but my results might take a minute or more to be made available.

This could be problematic for streaming of course - perhaps you could batch them together a few thousand rows at a time?

Bug: Snowflake converts column names to upper case and the JDBC driver doesn't account for it

Hi,

In Snowflake, if I run any SQL, with or without an alias, the result column names are automatically converted to upper case.

Due to this capitalization, the method below throws an exception for a column name that is correct on the Java side:

@Override
public int findColumn(String columnLabel) throws SQLException
{
  logger.debug(
      "public int findColumn(String columnLabel)");

  int columnIndex = resultSetMetaData.getColumnIndex(columnLabel);

  if (columnIndex == -1)
  {
    throw new SQLException("Column not found: " + columnLabel);
  }
  else
  {
    return ++columnIndex;
  }
}
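
A small illustration of the mismatch, per the behavior described above (my_table is a hypothetical table; unquoted identifiers are folded to upper case by Snowflake):

ResultSet rs = stmt.executeQuery("select count(1) as num_true from my_table");
rs.next();
long a = rs.getLong("NUM_TRUE");  // found: the metadata holds the upper-cased label
long b = rs.getLong("num_true");  // fails in the findColumn above: "Column not found: num_true"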

HTTP Connection get closed after every query

It looks like HTTP connections are not reusable right now. According to the code, the driver calls the httpRequest.releaseConnection method for every query, which under the hood calls ConnectionHolder.abortConnection, which simply closes the connection even if it is marked as reusable. To avoid this, the HTTP request should be marked as completed, e.g. httpRequest.completed(), after the response has been consumed; this resets cancellableRef so the connection is not closed.

Attached is a pcap file of two queries run one after another; you can see that the connection was closed after each query.
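
In Apache HttpClient 4.x terms, the suggestion is roughly the following (a sketch based on the description above, not on the driver's actual code):

CloseableHttpResponse response = httpClient.execute(httpRequest);
try
{
  EntityUtils.consume(response.getEntity()); // fully consume the payload
}
finally
{
  response.close();
}
// httpRequest.releaseConnection(); // current behavior: aborts and closes the underlying connection
httpRequest.completed();            // proposed: clear the cancellable so the connection can go back to the pool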

Travis builds for PRs are failing with openssl error

Builds for PRs are failing with this error.

The command "openssl aes-256-cbc -k "$super_secret_password" -in parameters.json.enc -out parameters.json -d" failed and exited with 1 during .

As per the Travis docs:

Encrypted environment variables are not available to pull requests from forks due to the security risk of exposing such information to unknown code.

This means contributors cannot see whether their code passes the tests in CI, which makes it difficult for people to contribute.
