qihoo360 / xsql

Unified SQL Analytics Engine Based on SparkSQL

Home Page: https://qihoo360.github.io/XSQL/

License: Apache License 2.0

Topics: sql, spark, hive, datasource, elasticsearch, federation

xsql's Introduction

XSQL-logo

English | 中文

XSQL is a multi-datasource query engine designed to be easy to use and stable to run. First, XSQL provides a way to read data from NoSQL databases with standard SQL, so that big data engineers can concentrate on the data itself rather than on each data source's special API. Second, XSQL puts effort into optimizing the execution plan of every SQL statement and into monitoring its running status, which keeps users' jobs running healthily.

https://qihoo360.github.io/XSQL/

Features

  • XSQL currently supports eight built-in data sources: Hive, MySQL, Elasticsearch, MongoDB, Kafka, HBase, Redis, and Druid.
  • XSQL uses a 3-layer metadata architecture to organize data: datasource-database-table. This provides a unified view over many data sources, so business analytics across offline and online data are no longer difficult (see the SQL sketch after this list).
  • The main idea of XSQL is SQL Everything: SQL decouples programs from concrete data source APIs, so a DBA can upgrade or move data without worrying about how to migrate old tasks. More importantly, data analysts prefer SQL over specialized APIs.
  • XSQL only uses YARN cluster resources when necessary, which is useful in scenarios such as treating spark-xsql as a substitute for an RDBMS client. We call this Pushdown Query: it lets XSQL answer DDL statements and simple queries with millisecond-level latency while saving cluster resources as much as possible.
  • XSQL uses a different solution than routing, so it parses each SQL statement only once.
  • XSQL caches metadata at runtime but does not manage metadata itself, since metadata synchronization could cause unnecessary trouble. This makes XSQL easy to deploy and operate.
  • XSQL provides a whitelist/blacklist properties file to cover scenarios where access to metadata must be carefully authorized.
  • XSQL currently runs on Spark 2.3 and Spark 2.4. The XSQL jars live in an isolated directory, which means XSQL will not affect your existing Spark programs unless you use our tool bin/spark-xsql. So just try XSQL on your existing Spark distribution; everything will work as normal.
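
As a minimal sketch of the datasource-database-table naming, assuming a data source named default configured as in the installation section below (the table name some_table is hypothetical):

    spark-xsql> show datasources;
    spark-xsql> select * from default.real_database.some_table limit 10;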

Quick Start

Environment Requirements of Build

  • jdk 1.8+

Build XSQL:

  1. To get started with XSQL, you can build it yourself. For example:

    git clone https://github.com/Qihoo360/XSQL
    

    You can also get a pre-built XSQL from the Release Pages.

  2. To create an XSQL distribution from source that is similar to the release package on the Release Pages, use build-plugin.sh in the root directory of the project. For example:

    XSQL/build-plugin.sh
    

    This will produce a .tgz file named like xsql-[project.version]-plugin-spark-[spark.version].tgz in the root directory of the project.

    To create an XSQL distribution resembling a full Spark distribution, use build.sh in the root directory of the project. For example:

    XSQL/build.sh
    

    This will produce a .tgz file named like xsql-[project.version]-bin-spark-[spark.version].tgz in the root directory of the project.

Environment Requirements of Running

  • jdk 1.8+

  • hadoop 2.7.2+

  • spark 2.4.x

Installing XSQL:

  1. Build the XSQL tarball xsql-[project.version]-[plugin|bin]-spark-[spark.version].tgz following the steps above, or download it from the Release Pages.

  2. If Spark is already installed on your machine, use the plugin version, which is about 30 MB. Otherwise install the bin version, which is about 300 MB, far larger than the plugin version.

    Either version needs to be extracted into a target directory first. The bin version goes into your software directory. For example:

    tar xvf xsql-0.6.0-bin-spark-2.4.3.tgz -C "/path/of/software"

    The plugin version, by contrast, must be extracted into your existing Spark home:

    tar xvf xsql-0.6.0-plugin-spark-2.4.3.tgz -C "/path/of/sparkhome"
  3. XSQL needs to know the connection information (such as the URL and credentials) of each data source. You can configure these in xsql.conf under the conf directory. We provide a template file to help with configuration. For example:

    mv conf/xsql.conf.template conf/xsql.conf
    

    Here is an example MySQL configuration:

    spark.xsql.datasources                     default
    spark.xsql.default.database                real_database
    spark.xsql.datasource.default.type         mysql
    spark.xsql.datasource.default.url          jdbc:mysql://127.0.0.1:2336
    spark.xsql.datasource.default.user         real_username
    spark.xsql.datasource.default.password     real_password
    spark.xsql.datasource.default.version      5.6.19
    

Running XSQL:

  1. If you are familiar with spark-sql, we provide an improved bash tool, bin/spark-xsql. XSQL can be started in CLI mode with the following command:

    $SPARK_HOME/bin/spark-xsql

    Feel free to type any SQL/HiveQL at the prompt:

    spark-xsql> show datasources;
    
  2. If you are familiar with the Dataset API, starting from our Scala API is a good choice. For example:

    import org.apache.spark.sql.SparkSession

    // enableXSQLSupport() is added to SparkSession.Builder by the XSQL distribution
    val spark = SparkSession
      .builder()
      .enableXSQLSupport()
      .getOrCreate()
    spark.sql("show datasources").show()  // show() prints the result

FAQ

Connect to more datasources

Advanced Configuration

XSQL Specific Query Language

Contact Us

Mailing lists: for developers [email protected], for users [email protected]. Add yours by emailing it.

QQ group for Chinese users: No. 838910008

xsql's People

Contributors

beliefer, weiwenda, wenfang6, zhangbinzaifendou


xsql's Issues

Running spark-xsql after building the full (bin) distribution throws an exception

Environment:
CentOS Linux release 7.4.1708 (Core)

git branch

  * (detached from v0.6.1)
    master

java -version
java version "1.8.0_181"
Java(TM) SE Runtime Environment (build 1.8.0_181-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.181-b13, mixed mode)

echo $JAVA_HOME
/usr/java/jdk1.8.0_181-cloudera

Configuration:
vim ./conf/xsql.conf
spark.xsql.datasources default
spark.xsql.default.database test
spark.xsql.datasource.default.type mysql
spark.xsql.datasource.default.url jdbc:mysql://10.10.1.41:3306
spark.xsql.datasource.default.user test
spark.xsql.datasource.default.password test@123
spark.xsql.datasource.default.version 5.7.22

Execution:
./spark-xsql
20/01/08 21:20:10 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
20/01/08 21:20:11 INFO SparkContext: Running Spark version 2.4.3
20/01/08 21:20:11 INFO SparkContext: Submitted application: org.apache.spark.sql.xsql.shell.SparkXSQLShell
20/01/08 21:20:11 INFO SecurityManager: Changing view acls to: root
20/01/08 21:20:11 INFO SecurityManager: Changing modify acls to: root
20/01/08 21:20:11 INFO SecurityManager: Changing view acls groups to:
20/01/08 21:20:11 INFO SecurityManager: Changing modify acls groups to:
20/01/08 21:20:11 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); groups with view permissions: Set(); users with modify permissions: Set(root); groups with modify permissions: Set()
20/01/08 21:20:11 INFO Utils: Successfully started service 'sparkDriver' on port 38039.
20/01/08 21:20:11 INFO SparkEnv: Registering MapOutputTracker
20/01/08 21:20:11 INFO SparkEnv: Registering BlockManagerMaster
20/01/08 21:20:11 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
20/01/08 21:20:11 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
20/01/08 21:20:11 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-799b3235-6aa8-4068-b013-f5caa4614c90
20/01/08 21:20:11 INFO MemoryStore: MemoryStore started with capacity 2.5 GB
20/01/08 21:20:11 INFO SparkEnv: Registering OutputCommitCoordinator
20/01/08 21:20:11 INFO Utils: Successfully started service 'SparkUI' on port 4040.
20/01/08 21:20:11 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://hdp-client-02:4040
20/01/08 21:20:11 INFO SparkContext: Added JAR file:/data/tools/xsql-0.6.0/xsql-0.6.1-bin-spark-2.4.3/jars/xsql-shell_2.11-0.6.1-SNAPSHOT.jar at spark://hdp-client-02:38039/jars/xsql-shell_2.11-0.6.1-SNAPSHOT.jar with timestamp 1578489611959
20/01/08 21:20:12 INFO Executor: Starting executor ID driver on host localhost
20/01/08 21:20:12 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 45532.
20/01/08 21:20:12 INFO NettyBlockTransferService: Server created on hdp-client-02:45532
20/01/08 21:20:12 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
20/01/08 21:20:12 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, hdp-client-02, 45532, None)
20/01/08 21:20:12 INFO BlockManagerMasterEndpoint: Registering block manager hdp-client-02:45532 with 2.5 GB RAM, BlockManagerId(driver, hdp-client-02, 45532, None)
20/01/08 21:20:12 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, hdp-client-02, 45532, None)
20/01/08 21:20:12 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, hdp-client-02, 45532, None)
20/01/08 21:20:12 INFO SharedState: Setting hive.metastore.warehouse.dir ('null') to the value of spark.sql.warehouse.dir ('file:/data/tools/xsql-0.6.0/xsql-0.6.1-bin-spark-2.4.3/bin/spark-warehouse').
20/01/08 21:20:12 INFO SharedState: Warehouse path is 'file:/data/tools/xsql-0.6.0/xsql-0.6.1-bin-spark-2.4.3/bin/spark-warehouse'.
20/01/08 21:20:13 INFO StateStoreCoordinatorRef: Registered StateStoreCoordinator endpoint
20/01/08 21:20:13 INFO XSQLExternalCatalog: reading xsql configuration from /data/tools/xsql-0.6.0/xsql-0.6.1-bin-spark-2.4.3/conf/xsql.conf
20/01/08 21:20:13 INFO XSQLExternalCatalog: parse data source default
Exception in thread "main" java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.xsql.XSQLExternalCatalog':
at org.apache.spark.sql.internal.SharedState$.org$apache$spark$sql$internal$SharedState$$reflect(SharedState.scala:223)
at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:104)
at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:103)
at org.apache.spark.sql.xsql.XSQLSessionStateBuilder.externalCatalog(XSQLSessionStateBuilder.scala:60)
at org.apache.spark.sql.xsql.XSQLSessionStateBuilder.catalog$lzycompute(XSQLSessionStateBuilder.scala:73)
at org.apache.spark.sql.xsql.XSQLSessionStateBuilder.catalog(XSQLSessionStateBuilder.scala:71)
at org.apache.spark.sql.xsql.XSQLSessionStateBuilder.catalog(XSQLSessionStateBuilder.scala:57)
at org.apache.spark.sql.internal.BaseSessionStateBuilder$$anonfun$build$1.apply(BaseSessionStateBuilder.scala:291)
at org.apache.spark.sql.internal.BaseSessionStateBuilder$$anonfun$build$1.apply(BaseSessionStateBuilder.scala:291)
at org.apache.spark.sql.internal.SessionState.catalog$lzycompute(SessionState.scala:77)
at org.apache.spark.sql.internal.SessionState.catalog(SessionState.scala:77)
at org.apache.spark.sql.xsql.ResolveScanSingleTable.<init>(XSQLStrategies.scala:69)
at org.apache.spark.sql.xsql.shell.SparkXSQLShell$.main(SparkXSQLShell.scala:55)
at org.apache.spark.sql.xsql.shell.SparkXSQLShell.main(SparkXSQLShell.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:849)
at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:167)
at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:195)
at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:924)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:933)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure

The last packet successfully received from the server was 305 milliseconds ago. The last packet sent successfully to the server was 296 milliseconds ago.
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:425)
at com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:990)
at com.mysql.jdbc.ExportControlled.transformSocketToSSLSocket(ExportControlled.java:201)
at com.mysql.jdbc.MysqlIO.negotiateSSLConnection(MysqlIO.java:4912)
at com.mysql.jdbc.MysqlIO.proceedHandshakeWithPluggableAuthentication(MysqlIO.java:1663)
at com.mysql.jdbc.MysqlIO.doHandshake(MysqlIO.java:1224)
at com.mysql.jdbc.ConnectionImpl.coreConnect(ConnectionImpl.java:2190)
at com.mysql.jdbc.ConnectionImpl.connectOneTryOnly(ConnectionImpl.java:2221)
at com.mysql.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:2016)
at com.mysql.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:776)
at com.mysql.jdbc.JDBC4Connection.<init>(JDBC4Connection.java:47)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:425)
at com.mysql.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:386)
at com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:330)
at java.sql.DriverManager.getConnection(DriverManager.java:664)
at java.sql.DriverManager.getConnection(DriverManager.java:247)
at org.apache.spark.sql.xsql.manager.MysqlManager.getConnect(MysqlManager.scala:75)
at org.apache.spark.sql.xsql.manager.MysqlManager.cacheDatabase(MysqlManager.scala:152)
at org.apache.spark.sql.xsql.DataSourceManager$class.parse(DataSourceManager.scala:202)
at org.apache.spark.sql.xsql.manager.MysqlManager.parse(MysqlManager.scala:51)
at org.apache.spark.sql.xsql.XSQLExternalCatalog.addDataSource(XSQLExternalCatalog.scala:439)
at org.apache.spark.sql.xsql.XSQLExternalCatalog$$anonfun$setupAndInitMetadata$4.apply(XSQLExternalCatalog.scala:172)
at org.apache.spark.sql.xsql.XSQLExternalCatalog$$anonfun$setupAndInitMetadata$4.apply(XSQLExternalCatalog.scala:162)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at org.apache.spark.sql.xsql.XSQLExternalCatalog.setupAndInitMetadata(XSQLExternalCatalog.scala:162)
at org.apache.spark.sql.xsql.XSQLExternalCatalog.<init>(XSQLExternalCatalog.scala:119)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.spark.sql.internal.SharedState$.org$apache$spark$sql$internal$SharedState$$reflect(SharedState.scala:214)
... 25 more
Caused by: javax.net.ssl.SSLHandshakeException: java.security.cert.CertificateException: java.security.cert.CertPathValidatorException: Path does not chain with any of the trust anchors
at sun.security.ssl.Alerts.getSSLException(Alerts.java:192)
at sun.security.ssl.SSLSocketImpl.fatal(SSLSocketImpl.java:1964)
at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:328)
at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:322)
at sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1614)
at sun.security.ssl.ClientHandshaker.processMessage(ClientHandshaker.java:216)
at sun.security.ssl.Handshaker.processLoop(Handshaker.java:1052)
at sun.security.ssl.Handshaker.process_record(Handshaker.java:987)
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1072)
at sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1385)
at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1413)
at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1397)
at com.mysql.jdbc.ExportControlled.transformSocketToSSLSocket(ExportControlled.java:186)
... 58 more
Caused by: java.security.cert.CertificateException: java.security.cert.CertPathValidatorException: Path does not chain with any of the trust anchors
at com.mysql.jdbc.ExportControlled$X509TrustManagerWrapper.checkServerTrusted(ExportControlled.java:302)
at sun.security.ssl.AbstractTrustManagerWrapper.checkServerTrusted(SSLContextImpl.java:992)
at sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1596)
... 66 more
Caused by: java.security.cert.CertPathValidatorException: Path does not chain with any of the trust anchors
at sun.security.provider.certpath.PKIXCertPathValidator.validate(PKIXCertPathValidator.java:154)
at sun.security.provider.certpath.PKIXCertPathValidator.engineValidate(PKIXCertPathValidator.java:80)
at java.security.cert.CertPathValidator.validate(CertPathValidator.java:292)
at com.mysql.jdbc.ExportControlled$X509TrustManagerWrapper.checkServerTrusted(ExportControlled.java:295)
... 68 more
20/01/08 21:20:13 INFO SparkContext: Invoking stop() from shutdown hook
20/01/08 21:20:13 INFO SparkUI: Stopped Spark web UI at http://hdp-client-02:4040
20/01/08 21:20:13 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
20/01/08 21:20:13 INFO MemoryStore: MemoryStore cleared
20/01/08 21:20:13 INFO BlockManager: BlockManager stopped
20/01/08 21:20:13 INFO BlockManagerMaster: BlockManagerMaster stopped
20/01/08 21:20:14 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
20/01/08 21:20:14 INFO SparkContext: Successfully stopped SparkContext
20/01/08 21:20:14 INFO ShutdownHookManager: Shutdown hook called
20/01/08 21:20:14 INFO ShutdownHookManager: Deleting directory /tmp/spark-7f618cfa-d370-4a88-a6c5-8effc8f275e2
20/01/08 21:20:14 INFO ShutdownHookManager: Deleting directory /tmp/spark-a7925d7e-4b42-444d-bcc0-be1019ea63c7
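
A likely cause, judging from the SSLHandshakeException above: recent MySQL Connector/J drivers try to negotiate SSL by default, and here the server certificate does not chain to any trusted anchor. A hedged, untested workaround is to disable SSL in the JDBC URL in xsql.conf (alternatively, import the server certificate into the JVM truststore):

    # assumption: XSQL passes the URL through to the JDBC driver unchanged
    spark.xsql.datasource.default.url    jdbc:mysql://10.10.1.41:3306/?useSSL=false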

Case sensitivity of tables and columns at the data source level

The global setting set spark.sql.caseSensitive=true enables case sensitivity.

Could case sensitivity be configured per data source? For example, ClickHouse is case sensitive, while databases that are case insensitive could be configured as insensitive.

I am not sure whether this is already supported or would be easy to do.
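
For reference, the global switch mentioned above can be set per session in the spark-xsql CLI; whether it can also be honored per data source is exactly the open question here:

    spark-xsql> set spark.sql.caseSensitive=true;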

about subsql in XSQL

For example, I have a table named sys.sys_config:

desc sys.sys_config

variable string NULL
value string NULL
set_time timestamp NULL
set_by string NULL

If I run this SQL:
select approx_count_distinct(variable) count from sys.sys_config

I get the result: 6.

But if I run SQL like this:

select * from (select approx_count_distinct(variable) count from sys.sys_config) as t

I get an error:

org.apache.spark.sql.AnalysisException: org.apache.spark.SparkException: Error when execute select t.count from (select approx_count_distinct(sys.sys_config.variable) AS count from sys.sys_config ) as t, details:
FUNCTION sys.approx_count_distinct does not exist;
at org.apache.spark.sql.xsql.XSQLExternalCatalog.liftedTree2$1(XSQLExternalCatalog.scala:658)
at org.apache.spark.sql.xsql.XSQLExternalCatalog.withWorkingDSDB(XSQLExternalCatalog.scala:648)
at org.apache.spark.sql.xsql.XSQLExternalCatalog.scanTables(XSQLExternalCatalog.scala:380)
at org.apache.spark.sql.xsql.XSQLSessionCatalog$$anonfun$scanTables$1.apply(XSQLSessionCatalog.scala:625)
at org.apache.spark.sql.xsql.XSQLSessionCatalog$$anonfun$scanTables$1.apply(XSQLSessionCatalog.scala:625)
at org.apache.spark.sql.xsql.XSQLExternalCatalog.setWorkingDataSource(XSQLExternalCatalog.scala:611)
at org.apache.spark.sql.xsql.XSQLSessionCatalog.setWorkingDataSource(XSQLSessionCatalog.scala:151)
at org.apache.spark.sql.xsql.XSQLSessionCatalog.scanTables(XSQLSessionCatalog.scala:624)
at org.apache.spark.sql.xsql.execution.command.PushDownQueryCommand.run(tables.scala:729)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:71)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:69)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.unsafeResult$lzycompute(commands.scala:77)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.unsafeResult(commands.scala:74)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:88)
at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:194)
at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:194)
at org.apache.spark.sql.Dataset$$anonfun$53.apply(Dataset.scala:3364)
at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:78)
at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:125)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:73)
at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3363)
at org.apache.spark.sql.Dataset.<init>(Dataset.scala:194)
at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:79)
at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:665)
at org.apache.spark.sql.xsql.shell.SparkXSQLShell$$anonfun$org$apache$spark$sql$xsql$shell$SparkXSQLShell$$run$1$1.apply(SparkXSQLShell.scala:252)
at org.apache.spark.sql.xsql.shell.SparkXSQLShell$$anonfun$org$apache$spark$sql$xsql$shell$SparkXSQLShell$$run$1$1.apply(SparkXSQLShell.scala:166)
at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
at org.apache.spark.sql.xsql.shell.SparkXSQLShell$.org$apache$spark$sql$xsql$shell$SparkXSQLShell$$run$1(SparkXSQLShell.scala:166)
at org.apache.spark.sql.xsql.shell.SparkXSQLShell$.process$1(SparkXSQLShell.scala:310)
at org.apache.spark.sql.xsql.shell.SparkXSQLShell$.org$apache$spark$sql$xsql$shell$SparkXSQLShell$$loop$1(SparkXSQLShell.scala:350)
at org.apache.spark.sql.xsql.shell.SparkXSQLShell$$anonfun$main$2.apply(SparkXSQLShell.scala:94)
at org.apache.spark.sql.xsql.shell.SparkXSQLShell$$anonfun$main$2.apply(SparkXSQLShell.scala:76)
at scala.Option.map(Option.scala:146)
at org.apache.spark.sql.xsql.shell.SparkXSQLShell$.main(SparkXSQLShell.scala:76)
at org.apache.spark.sql.xsql.shell.SparkXSQLShell.main(SparkXSQLShell.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:849)
at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:167)
at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:195)
at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:924)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:933)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: org.apache.spark.SparkException: Error when execute select t.count from (select approx_count_distinct(sys.sys_config.variable) AS count from sys.sys_config ) as t, details:
FUNCTION sys.approx_count_distinct does not exist
at org.apache.spark.sql.xsql.manager.MysqlManager.scanXSQLTables(MysqlManager.scala:521)
at org.apache.spark.sql.xsql.XSQLExternalCatalog$$anonfun$scanTables$1.apply(XSQLExternalCatalog.scala:382)
at org.apache.spark.sql.xsql.XSQLExternalCatalog$$anonfun$scanTables$1.apply(XSQLExternalCatalog.scala:380)
at org.apache.spark.sql.xsql.XSQLExternalCatalog.liftedTree2$1(XSQLExternalCatalog.scala:649)
... 47 more

So I think XSQL may be pushing the function down to MySQL, where it does not exist.

But I tested another function, curtime:

select * from (select curtime()) as t;

and got another error:

19/10/31 20:54:54 INFO SparkXSQLShell: current SQL: select * from (select curtime()) as t silent: false
19/10/31 20:54:54 INFO SparkXSQLShell: excute to parsed
19/10/31 20:54:54 ERROR SparkXSQLShell: Failed: Error
org.apache.spark.sql.AnalysisException: java.lang.UnsupportedOperationException: Check MYSQL function exists not supported!;
at org.apache.spark.sql.xsql.XSQLExternalCatalog.liftedTree1$1(XSQLExternalCatalog.scala:635)
at org.apache.spark.sql.xsql.XSQLExternalCatalog.withWorkingDS(XSQLExternalCatalog.scala:625)
at org.apache.spark.sql.xsql.XSQLExternalCatalog.functionExists(XSQLExternalCatalog.scala:1074)
at org.apache.spark.sql.catalyst.catalog.ExternalCatalogWithListener.functionExists(ExternalCatalogWithListener.scala:292)
at org.apache.spark.sql.catalyst.catalog.SessionCatalog.isPersistentFunction(SessionCatalog.scala:1227)
at org.apache.spark.sql.hive.HiveSessionCatalog.isPersistentFunction(HiveSessionCatalog.scala:179)
at org.apache.spark.sql.xsql.XSQLSessionCatalog.org$apache$spark$sql$xsql$XSQLSessionCatalog$$super$isPersistentFunction(XSQLSessionCatalog.scala:793)
at org.apache.spark.sql.xsql.XSQLSessionCatalog$$anonfun$isPersistentFunction$1.apply$mcZ$sp(XSQLSessionCatalog.scala:793)
at org.apache.spark.sql.xsql.XSQLSessionCatalog$$anonfun$isPersistentFunction$1.apply(XSQLSessionCatalog.scala:793)
at org.apache.spark.sql.xsql.XSQLSessionCatalog$$anonfun$isPersistentFunction$1.apply(XSQLSessionCatalog.scala:793)
at org.apache.spark.sql.xsql.XSQLExternalCatalog.setWorkingDataSource(XSQLExternalCatalog.scala:611)
at org.apache.spark.sql.xsql.XSQLSessionCatalog.setWorkingDataSource(XSQLSessionCatalog.scala:151)
at org.apache.spark.sql.xsql.XSQLSessionCatalog.isPersistentFunction(XSQLSessionCatalog.scala:792)
at org.apache.spark.sql.catalyst.analysis.Analyzer$LookupFunctions$$anonfun$apply$15.applyOrElse(Analyzer.scala:1276)
at org.apache.spark.sql.catalyst.analysis.Analyzer$LookupFunctions$$anonfun$apply$15.applyOrElse(Analyzer.scala:1272)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$2.apply(TreeNode.scala:256)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$2.apply(TreeNode.scala:256)
at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:70)
at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:255)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformDown$1.apply(TreeNode.scala:261)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformDown$1.apply(TreeNode.scala:261)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:326)
at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:187)
at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:324)
at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:261)
at org.apache.spark.sql.catalyst.plans.QueryPlan$$anonfun$transformExpressionsDown$1.apply(QueryPlan.scala:83)
at org.apache.spark.sql.catalyst.plans.QueryPlan$$anonfun$transformExpressionsDown$1.apply(QueryPlan.scala:83)
at org.apache.spark.sql.catalyst.plans.QueryPlan$$anonfun$1.apply(QueryPlan.scala:105)
at org.apache.spark.sql.catalyst.plans.QueryPlan$$anonfun$1.apply(QueryPlan.scala:105)
at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:70)
at org.apache.spark.sql.catalyst.plans.QueryPlan.transformExpression$1(QueryPlan.scala:104)
at org.apache.spark.sql.catalyst.plans.QueryPlan.org$apache$spark$sql$catalyst$plans$QueryPlan$$recursiveTransform$1(QueryPlan.scala:116)
at org.apache.spark.sql.catalyst.plans.QueryPlan$$anonfun$org$apache$spark$sql$catalyst$plans$QueryPlan$$recursiveTransform$1$2.apply(QueryPlan.scala:121)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.immutable.List.foreach(List.scala:392)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
at scala.collection.immutable.List.map(List.scala:296)
at org.apache.spark.sql.catalyst.plans.QueryPlan.org$apache$spark$sql$catalyst$plans$QueryPlan$$recursiveTransform$1(QueryPlan.scala:121)
at org.apache.spark.sql.catalyst.plans.QueryPlan$$anonfun$2.apply(QueryPlan.scala:126)
at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:187)
at org.apache.spark.sql.catalyst.plans.QueryPlan.mapExpressions(QueryPlan.scala:126)
at org.apache.spark.sql.catalyst.plans.QueryPlan.transformExpressionsDown(QueryPlan.scala:83)
at org.apache.spark.sql.catalyst.plans.QueryPlan.transformExpressions(QueryPlan.scala:74)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveExpressions$1.applyOrElse(AnalysisHelper.scala:129)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveExpressions$1.applyOrElse(AnalysisHelper.scala:128)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsDown$1$$anonfun$2.apply(AnalysisHelper.scala:108)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsDown$1$$anonfun$2.apply(AnalysisHelper.scala:108)
at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:70)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsDown$1.apply(AnalysisHelper.scala:107)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsDown$1.apply(AnalysisHelper.scala:106)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:194)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$class.resolveOperatorsDown(AnalysisHelper.scala:106)
at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperatorsDown(LogicalPlan.scala:29)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsDown$1$$anonfun$apply$6.apply(AnalysisHelper.scala:113)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsDown$1$$anonfun$apply$6.apply(AnalysisHelper.scala:113)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:326)
at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:187)
at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:324)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsDown$1.apply(AnalysisHelper.scala:113)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsDown$1.apply(AnalysisHelper.scala:106)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:194)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$class.resolveOperatorsDown(AnalysisHelper.scala:106)
at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperatorsDown(LogicalPlan.scala:29)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsDown$1$$anonfun$apply$6.apply(AnalysisHelper.scala:113)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsDown$1$$anonfun$apply$6.apply(AnalysisHelper.scala:113)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:326)
at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:187)
at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:324)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsDown$1.apply(AnalysisHelper.scala:113)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsDown$1.apply(AnalysisHelper.scala:106)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:194)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$class.resolveOperatorsDown(AnalysisHelper.scala:106)
at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperatorsDown(LogicalPlan.scala:29)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$class.resolveOperators(AnalysisHelper.scala:73)
at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperators(LogicalPlan.scala:29)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$class.resolveExpressions(AnalysisHelper.scala:128)
at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveExpressions(LogicalPlan.scala:29)
at org.apache.spark.sql.catalyst.analysis.Analyzer$LookupFunctions$.apply(Analyzer.scala:1272)
at org.apache.spark.sql.catalyst.analysis.Analyzer$LookupFunctions$.apply(Analyzer.scala:1269)
at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:87)
at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:84)
at scala.collection.IndexedSeqOptimized$class.foldl(IndexedSeqOptimized.scala:57)
at scala.collection.IndexedSeqOptimized$class.foldLeft(IndexedSeqOptimized.scala:66)
at scala.collection.mutable.WrappedArray.foldLeft(WrappedArray.scala:35)
at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:84)
at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:76)
at scala.collection.immutable.List.foreach(List.scala:392)
at org.apache.spark.sql.catalyst.rules.RuleExecutor.execute(RuleExecutor.scala:76)
at org.apache.spark.sql.catalyst.analysis.Analyzer.org$apache$spark$sql$catalyst$analysis$Analyzer$$executeSameContext(Analyzer.scala:127)
at org.apache.spark.sql.catalyst.analysis.Analyzer.execute(Analyzer.scala:121)
at org.apache.spark.sql.catalyst.analysis.Analyzer$$anonfun$executeAndCheck$1.apply(Analyzer.scala:106)
at org.apache.spark.sql.catalyst.analysis.Analyzer$$anonfun$executeAndCheck$1.apply(Analyzer.scala:105)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.markInAnalyzer(AnalysisHelper.scala:201)
at org.apache.spark.sql.catalyst.analysis.Analyzer.executeAndCheck(Analyzer.scala:105)
at org.apache.spark.sql.xsql.shell.SparkXSQLShell$$anonfun$org$apache$spark$sql$xsql$shell$SparkXSQLShell$$run$1$1.apply(SparkXSQLShell.scala:241)
at org.apache.spark.sql.xsql.shell.SparkXSQLShell$$anonfun$org$apache$spark$sql$xsql$shell$SparkXSQLShell$$run$1$1.apply(SparkXSQLShell.scala:166)
at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
at org.apache.spark.sql.xsql.shell.SparkXSQLShell$.org$apache$spark$sql$xsql$shell$SparkXSQLShell$$run$1(SparkXSQLShell.scala:166)
at org.apache.spark.sql.xsql.shell.SparkXSQLShell$.process$1(SparkXSQLShell.scala:310)
at org.apache.spark.sql.xsql.shell.SparkXSQLShell$.org$apache$spark$sql$xsql$shell$SparkXSQLShell$$loop$1(SparkXSQLShell.scala:350)
at org.apache.spark.sql.xsql.shell.SparkXSQLShell$$anonfun$main$2.apply(SparkXSQLShell.scala:94)
at org.apache.spark.sql.xsql.shell.SparkXSQLShell$$anonfun$main$2.apply(SparkXSQLShell.scala:76)
at scala.Option.map(Option.scala:146)
at org.apache.spark.sql.xsql.shell.SparkXSQLShell$.main(SparkXSQLShell.scala:76)
at org.apache.spark.sql.xsql.shell.SparkXSQLShell.main(SparkXSQLShell.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:849)
at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:167)
at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:195)
at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:924)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:933)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.UnsupportedOperationException: Check MYSQL function exists not supported!
at org.apache.spark.sql.xsql.DataSourceManager$class.functionExists(DataSourceManager.scala:920)
at org.apache.spark.sql.xsql.manager.MysqlManager.functionExists(MysqlManager.scala:51)
at org.apache.spark.sql.xsql.XSQLExternalCatalog$$anonfun$functionExists$1.apply(XSQLExternalCatalog.scala:1076)
at org.apache.spark.sql.xsql.XSQLExternalCatalog$$anonfun$functionExists$1.apply(XSQLExternalCatalog.scala:1074)
at org.apache.spark.sql.xsql.XSQLExternalCatalog.liftedTree1$1(XSQLExternalCatalog.scala:626)
... 118 more
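
Judging from the two errors above, the whole subquery is pushed down to MySQL, so Spark-only functions such as approx_count_distinct cannot be resolved there. A hedged, untested workaround is to rewrite the query with a function MySQL itself understands, for example COUNT(DISTINCT ...), renaming the alias to cnt for clarity:

    select * from (select count(distinct variable) as cnt from sys.sys_config) as t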

Configuring multiple data sources in xsql.conf fails

1. With only the MySQL data source configured, it runs OK:

spark.xsql.datasources                     default
spark.xsql.default.database                linkis
spark.xsql.datasource.default.type         mysql
spark.xsql.datasource.default.url          jdbc:mysql://172.19.101.47:3306
spark.xsql.datasource.default.user         root
spark.xsql.datasource.default.password     Pwd@123456
spark.xsql.datasource.default.version      5.6.19

2. With only the HDP 3.1.0 Hive data source configured, it runs OK:

spark.xsql.datasources default
spark.xsql.datasource.default.type             hive
spark.xsql.datasource.default.metastore.url   thrift://hdfs02-dev.yingzi.com:9083
spark.xsql.datasource.default.user             test
spark.xsql.datasource.default.password        test
spark.xsql.datasource.default.version         3.1.0

3. With both the MySQL and Hive data sources configured, execution fails:

spark.xsql.datasources                     default
spark.xsql.default.database                linkis
spark.xsql.datasource.default.type         mysql
spark.xsql.datasource.default.url          jdbc:mysql://172.19.101.47:3306
spark.xsql.datasource.default.user         root
spark.xsql.datasource.default.password     Pwd@123456
spark.xsql.datasource.default.version      5.6.19

spark.xsql.datasources defaulthive
spark.xsql.datasource.defaulthive.type             hive
spark.xsql.datasource.defaulthive.metastore.url   thrift://hdfs02-dev.yingzi.com:9083
spark.xsql.datasource.defaulthive.user             test
spark.xsql.datasource.defaulthive.password        test
spark.xsql.datasource.defaulthive.version         3.1.0

The error is as follows:

19/10/28 18:18:33 INFO SharedState: Warehouse path is 'file:/data/bigdata/xsql/xsql/spark-warehouse'.
19/10/28 18:18:34 INFO StateStoreCoordinatorRef: Registered StateStoreCoordinator endpoint
19/10/28 18:18:34 INFO XSQLExternalCatalog: reading xsql configuration from /data/bigdata/xsql/xsql/conf/xsql.conf
Exception in thread "main" java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.xsql.XSQLExternalCatalog':
        at org.apache.spark.sql.internal.SharedState$.org$apache$spark$sql$internal$SharedState$$reflect(SharedState.scala:223)
        at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:104)
        at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:103)
        at org.apache.spark.sql.xsql.XSQLSessionStateBuilder.externalCatalog(XSQLSessionStateBuilder.scala:60)
        at org.apache.spark.sql.xsql.XSQLSessionStateBuilder.catalog$lzycompute(XSQLSessionStateBuilder.scala:73)
        at org.apache.spark.sql.xsql.XSQLSessionStateBuilder.catalog(XSQLSessionStateBuilder.scala:71)
        at org.apache.spark.sql.xsql.XSQLSessionStateBuilder.catalog(XSQLSessionStateBuilder.scala:57)
        at org.apache.spark.sql.internal.BaseSessionStateBuilder$$anonfun$build$1.apply(BaseSessionStateBuilder.scala:291)
        at org.apache.spark.sql.internal.BaseSessionStateBuilder$$anonfun$build$1.apply(BaseSessionStateBuilder.scala:291)
        at org.apache.spark.sql.internal.SessionState.catalog$lzycompute(SessionState.scala:77)
        at org.apache.spark.sql.internal.SessionState.catalog(SessionState.scala:77)
        at org.apache.spark.sql.xsql.ResolveScanSingleTable.<init>(XSQLStrategies.scala:69)
        at org.apache.spark.sql.xsql.shell.SparkXSQLShell$.main(SparkXSQLShell.scala:55)
        at org.apache.spark.sql.xsql.shell.SparkXSQLShell.main(SparkXSQLShell.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
        at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:849)
        at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:167)
        at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:195)
        at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
        at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:924)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:933)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: org.apache.spark.SparkException: default data source must configured!
        at org.apache.spark.sql.xsql.XSQLExternalCatalog.setupAndInitMetadata(XSQLExternalCatalog.scala:160)
        at org.apache.spark.sql.xsql.XSQLExternalCatalog.<init>(XSQLExternalCatalog.scala:119)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at org.apache.spark.sql.internal.SharedState$.org$apache$spark$sql$internal$SharedState$$reflect(SharedState.scala:214)
        ... 25 more
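
A plausible cause: spark.xsql.datasources is declared twice, so the second value (defaulthive) overrides the first and no data source named default remains, which triggers "default data source must configured!". A hedged fix, assuming the key accepts a comma-separated list, is to declare both sources on one line:

    spark.xsql.datasources                     default,defaulthive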

How to use jdbc to connect to XSQL

hi, all:
I have successfully deployed xsql-0.6.0-bin-spark-2.4.3 and tested it with the CLI, but after I started the thriftserver I could not query anything.
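
For reference, the thriftserver shipped with XSQL is presumably the standard Spark Thrift Server, which speaks the HiveServer2 protocol. A minimal hedged sketch of connecting with beeline (localhost and port 10000 are assumptions, the Spark defaults):

    $SPARK_HOME/bin/beeline -u jdbc:hive2://localhost:10000 -n your_username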
