
java-bigquery's Introduction

Google Cloud BigQuery Client for Java

Java idiomatic client for Cloud BigQuery.


Quickstart

If you are using Maven with the BOM, add this to your pom.xml file:

<!--  Using libraries-bom to manage versions.
See https://github.com/GoogleCloudPlatform/cloud-opensource-java/wiki/The-Google-Cloud-Platform-Libraries-BOM -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>com.google.cloud</groupId>
      <artifactId>libraries-bom</artifactId>
      <version>26.20.0</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>

<dependencies>
  <dependency>
    <groupId>com.google.cloud</groupId>
    <artifactId>google-cloud-bigquery</artifactId>
  </dependency>
</dependencies>

If you are using Maven without the BOM, add this to your dependencies:

<dependency>
  <groupId>com.google.cloud</groupId>
  <artifactId>google-cloud-bigquery</artifactId>
  <version>2.39.0</version>
</dependency>

If you are using Gradle 5.x or later, add this to your dependencies:

implementation platform('com.google.cloud:libraries-bom:26.37.0')

implementation 'com.google.cloud:google-cloud-bigquery'

If you are using Gradle without the BOM, add this to your dependencies:

implementation 'com.google.cloud:google-cloud-bigquery:2.39.0'

If you are using SBT, add this to your dependencies:

libraryDependencies += "com.google.cloud" % "google-cloud-bigquery" % "2.39.0"

Authentication

See the Authentication section in the base directory's README.

Authorization

The client application making API calls must be granted authorization scopes required for the desired Cloud BigQuery APIs, and the authenticated principal must have the IAM role(s) required to access GCP resources using the Cloud BigQuery API calls.

Getting Started

Prerequisites

You will need a Google Cloud Platform Console project with the Cloud BigQuery API enabled. You will need to enable billing to use Google Cloud BigQuery. Follow these instructions to get your project set up. You will also need to set up the local development environment by installing the Google Cloud Command Line Interface and running the following commands in command line: gcloud auth login and gcloud config set project [YOUR PROJECT ID].

Installation and setup

You'll need to obtain the google-cloud-bigquery library. See the Quickstart section to add google-cloud-bigquery as a dependency in your code.

About Cloud BigQuery

Cloud BigQuery is a fully managed, NoOps, low-cost data analytics service. Data can be streamed into BigQuery at millions of rows per second to enable real-time analysis. With BigQuery you can easily deploy petabyte-scale databases.

See the Cloud BigQuery client library docs to learn how to use this Cloud BigQuery Client Library.
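As a first orientation, here is a minimal sketch of running a query with this client. It assumes Application Default Credentials are configured for a billable project and uses a public dataset purely as an example; it is a sketch, not the canonical quickstart.

import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.FieldValueList;
import com.google.cloud.bigquery.QueryJobConfiguration;
import com.google.cloud.bigquery.TableResult;

public class QuickstartSketch {
  public static void main(String[] args) throws InterruptedException {
    // Uses Application Default Credentials and the default project.
    BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();

    QueryJobConfiguration queryConfig =
        QueryJobConfiguration.newBuilder(
                "SELECT name, SUM(number) AS total "
                    + "FROM `bigquery-public-data.usa_names.usa_1910_2013` "
                    + "GROUP BY name ORDER BY total DESC LIMIT 10")
            .setUseLegacySql(false)
            .build();

    // Runs the query as a job and waits for the results.
    TableResult result = bigquery.query(queryConfig);
    for (FieldValueList row : result.iterateAll()) {
      System.out.printf("%s: %d%n", row.get("name").getStringValue(), row.get("total").getLongValue());
    }
  }
}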

Samples

Samples are in the samples/ directory.

Sample Source Code Try it
Native Image Bigquery Sample source code Open in Cloud Shell
Add Column Load Append source code Open in Cloud Shell
Add Empty Column source code Open in Cloud Shell
Auth Drive Scope source code Open in Cloud Shell
Auth Snippets source code Open in Cloud Shell
Auth User Flow source code Open in Cloud Shell
Auth User Query source code Open in Cloud Shell
Authorize Dataset source code Open in Cloud Shell
Authorized View Tutorial source code Open in Cloud Shell
Browse Table source code Open in Cloud Shell
Cancel Job source code Open in Cloud Shell
Copy Multiple Tables source code Open in Cloud Shell
Copy Table source code Open in Cloud Shell
Copy Table Cmek source code Open in Cloud Shell
Create And Query Repeated Record Field source code Open in Cloud Shell
Create Clustered Table source code Open in Cloud Shell
Create Dataset source code Open in Cloud Shell
Create Dataset Aws source code Open in Cloud Shell
Create Dataset With Regional Endpoint source code Open in Cloud Shell
Create External Table Aws source code Open in Cloud Shell
Create Iam Policy source code Open in Cloud Shell
Create Job source code Open in Cloud Shell
Create Materialized View source code Open in Cloud Shell
Create Model source code Open in Cloud Shell
Create Partitioned Table source code Open in Cloud Shell
Create Range Partitioned Table source code Open in Cloud Shell
Create Routine source code Open in Cloud Shell
Create Routine Ddl source code Open in Cloud Shell
Create Table source code Open in Cloud Shell
Create Table Cmek source code Open in Cloud Shell
Create Table External Hive Partitioned source code Open in Cloud Shell
Create Table Without Schema source code Open in Cloud Shell
Create Tables With Primary And Foreign Keys source code Open in Cloud Shell
Create View source code Open in Cloud Shell
Dataset Exists source code Open in Cloud Shell
Ddl Create View source code Open in Cloud Shell
Delete Dataset source code Open in Cloud Shell
Delete Dataset And Contents source code Open in Cloud Shell
Delete Label Dataset source code Open in Cloud Shell
Delete Label Table source code Open in Cloud Shell
Delete Materialized View source code Open in Cloud Shell
Delete Model source code Open in Cloud Shell
Delete Routine source code Open in Cloud Shell
Delete Table source code Open in Cloud Shell
Export Query Results To S3 source code Open in Cloud Shell
Extract Model source code Open in Cloud Shell
Extract Table Compressed source code Open in Cloud Shell
Extract Table To Csv source code Open in Cloud Shell
Extract Table To Json source code Open in Cloud Shell
Get Dataset Info source code Open in Cloud Shell
Get Dataset Labels source code Open in Cloud Shell
Get Job source code Open in Cloud Shell
Get Model source code Open in Cloud Shell
Get Routine source code Open in Cloud Shell
Get Table source code Open in Cloud Shell
Get Table Labels source code Open in Cloud Shell
Get View source code Open in Cloud Shell
Grant View Access source code Open in Cloud Shell
Inserting Data Types source code Open in Cloud Shell
Label Dataset source code Open in Cloud Shell
Label Table source code Open in Cloud Shell
List Datasets source code Open in Cloud Shell
List Datasets By Label source code Open in Cloud Shell
List Jobs source code Open in Cloud Shell
List Models source code Open in Cloud Shell
List Routines source code Open in Cloud Shell
List Tables source code Open in Cloud Shell
Load Avro From Gcs source code Open in Cloud Shell
Load Avro From Gcs Truncate source code Open in Cloud Shell
Load Csv From Gcs source code Open in Cloud Shell
Load Csv From Gcs Autodetect source code Open in Cloud Shell
Load Csv From Gcs Truncate source code Open in Cloud Shell
Load Json From Gcs source code Open in Cloud Shell
Load Json From Gcs Autodetect source code Open in Cloud Shell
Load Json From Gcs Cmek source code Open in Cloud Shell
Load Json From Gcs Truncate source code Open in Cloud Shell
Load Local File source code Open in Cloud Shell
Load Local File In Session source code Open in Cloud Shell
Load Orc From Gcs source code Open in Cloud Shell
Load Orc From Gcs Truncate source code Open in Cloud Shell
Load Parquet source code Open in Cloud Shell
Load Parquet Replace Table source code Open in Cloud Shell
Load Partitioned Table source code Open in Cloud Shell
Load Table Clustered source code Open in Cloud Shell
Nested Repeated Schema source code Open in Cloud Shell
Query Batch source code Open in Cloud Shell
Query Clustered Table source code Open in Cloud Shell
Query Destination Table Cmek source code Open in Cloud Shell
Query Disable Cache source code Open in Cloud Shell
Query Dry Run source code Open in Cloud Shell
Query External Bigtable Perm source code Open in Cloud Shell
Query External Bigtable Temp source code Open in Cloud Shell
Query External Gcs Perm source code Open in Cloud Shell
Query External Gcs Temp source code Open in Cloud Shell
Query External Sheets Perm source code Open in Cloud Shell
Query External Sheets Temp source code Open in Cloud Shell
Query External Table Aws source code Open in Cloud Shell
Query Large Results source code Open in Cloud Shell
Query Materialized View source code Open in Cloud Shell
Query Pagination source code Open in Cloud Shell
Query Partitioned Table source code Open in Cloud Shell
Query Script source code Open in Cloud Shell
Query Total Rows source code Open in Cloud Shell
Query With Array Of Structs Named Parameters source code Open in Cloud Shell
Query With Array Parameters source code Open in Cloud Shell
Query With Named Parameters source code Open in Cloud Shell
Query With Named Types Parameters source code Open in Cloud Shell
Query With Positional Parameters source code Open in Cloud Shell
Query With Positional Types Parameters source code Open in Cloud Shell
Query With Structs Parameters source code Open in Cloud Shell
Query With Timestamp Parameters source code Open in Cloud Shell
Quickstart Sample source code Open in Cloud Shell
Relax Column Load Append source code Open in Cloud Shell
Relax Column Mode source code Open in Cloud Shell
Relax Table Query source code Open in Cloud Shell
Resource Clean Up source code Open in Cloud Shell
Run Legacy Query source code Open in Cloud Shell
Save Query To Table source code Open in Cloud Shell
Set User Agent source code Open in Cloud Shell
Simple App source code Open in Cloud Shell
Simple Query source code Open in Cloud Shell
Table Exists source code Open in Cloud Shell
Table Insert Rows source code Open in Cloud Shell
Table Insert Rows Without Row Ids source code Open in Cloud Shell
Undelete Table source code Open in Cloud Shell
Update Dataset Access source code Open in Cloud Shell
Update Dataset Description source code Open in Cloud Shell
Update Dataset Expiration source code Open in Cloud Shell
Update Dataset Partition Expiration source code Open in Cloud Shell
Update Iam Policy source code Open in Cloud Shell
Update Materialized View source code Open in Cloud Shell
Update Model Description source code Open in Cloud Shell
Update Routine source code Open in Cloud Shell
Update Table Cmek source code Open in Cloud Shell
Update Table Description source code Open in Cloud Shell
Update Table Dml source code Open in Cloud Shell
Update Table Expiration source code Open in Cloud Shell
Update Table Require Partition Filter source code Open in Cloud Shell
Update View Query source code Open in Cloud Shell

Troubleshooting

To get help, follow the instructions in the shared Troubleshooting document.

Supported Java Versions

Java 8 or above is required for using this client.

Google's Java client libraries, Google Cloud Client Libraries and Google Cloud API Libraries, follow the Oracle Java SE support roadmap (see the Oracle Java SE Product Releases section).

For new development

In general, new feature development occurs with support for the lowest Java LTS version covered by Oracle's Premier Support (which typically lasts 5 years from initial General Availability). If the minimum required JVM for a given library is changed, it is accompanied by a semver major release.

Java 11 and (in September 2021) Java 17 are the best choices for new development.

Keeping production systems current

Google tests its client libraries with all current LTS versions covered by Oracle's Extended Support (which typically lasts 8 years from initial General Availability).

Legacy support

Google's client libraries support legacy versions of Java runtimes with long-term stable libraries that don't receive feature updates, on a best-efforts basis, as it may not be possible to backport all patches.

Google provides updates on a best-efforts basis to apps that continue to use Java 7, though apps might need to upgrade to current versions of the library that support their JVM.

Where to find specific information

The latest versions and the supported Java versions are identified on the individual GitHub repository github.com/googleapis/java-SERVICENAME and on google-cloud-java.

Versioning

This library follows Semantic Versioning.

Contributing

Contributions to this library are always welcome and highly encouraged.

See CONTRIBUTING for more information on how to get started.

Please note that this project is released with a Contributor Code of Conduct. By participating in this project you agree to abide by its terms. See Code of Conduct for more information.

License

Apache 2.0 - See LICENSE for more information.

CI Status

Java Version Status
Java 8 Kokoro CI
Java 8 OSX Kokoro CI
Java 8 Windows Kokoro CI
Java 11 Kokoro CI

Java is a registered trademark of Oracle and/or its affiliates.

java-bigquery's People

Contributors

ajaaym, andreamlin, chingor13, dependabot[bot], ejdarrow, elharo, emkornfield, farhan0102, franklinwhaite, garrettjonesgoogle, gcf-owl-bot[bot], irvifa, jesselovelace, kolea2, mpeddada1, neenu1995, neozwu, obada-ab, phongchuong, pongad, prash-mi, release-please[bot], renovate-bot, shollyman, stephaniewang526, suztomo, tswast, vam-google, yihanzhen, yoshi-automation


java-bigquery's Issues

com.google.cloud.bigquery.BigQueryException: Read timed out while closing the TableDataWriteChannel

One of the customers using Striim's BigQueryConnector is experiencing the following exception:

2020-02-13 14:34:12,675 @S10_111_12_27 @NULL_APPNAME -ERROR tkp_goal.tkp_goal_bq-0 com.striim.bigquery.TransferCSVDataToBQ.transferToBQ (TransferCSVDataToBQ.java:139) Bigquery exception while loading
com.google.cloud.bigquery.BigQueryException: Read timed out
at com.google.cloud.bigquery.spi.v2.HttpBigQueryRpc.translate(HttpBigQueryRpc.java:99)
at com.google.cloud.bigquery.spi.v2.HttpBigQueryRpc.write(HttpBigQueryRpc.java:465)
at com.google.cloud.bigquery.TableDataWriteChannel$1.call(TableDataWriteChannel.java:56)
at com.google.cloud.bigquery.TableDataWriteChannel$1.call(TableDataWriteChannel.java:53)
at com.google.api.gax.retrying.DirectRetryingExecutor.submit(DirectRetryingExecutor.java:89)
at com.google.cloud.RetryHelper.run(RetryHelper.java:74)
at com.google.cloud.RetryHelper.runWithRetries(RetryHelper.java:51)
at com.google.cloud.bigquery.TableDataWriteChannel.flushBuffer(TableDataWriteChannel.java:52)
at com.google.cloud.BaseWriteChannel.close(BaseWriteChannel.java:161)
at java.nio.channels.Channels$1.close(Channels.java:178)
at com.striim.bigquery.TransferCSVDataToBQ.transferToBQ(TransferCSVDataToBQ.java:135)
at com.striim.bigquery.BigQueryIntegrationTask.transferData(BigQueryIntegrationTask.java:58)
at com.striim.bigquery.BigQueryIntegrationTask.execute(BigQueryIntegrationTask.java:88)
at com.striim.dwhwriter.integrator.IntegrationTask.call(IntegrationTask.java:46)
at com.striim.dwhwriter.integrator.IntegrationTask.call(IntegrationTask.java:23)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
at java.net.SocketInputStream.read(SocketInputStream.java:171)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at sun.security.ssl.InputRecord.readFully(InputRecord.java:465)
at sun.security.ssl.InputRecord.read(InputRecord.java:503)
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:975)
at sun.security.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:933)
at sun.security.ssl.AppInputStream.read(AppInputStream.java:105)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:735)
at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:678)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1593)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1498)
at java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:480)
at sun.net.www.protocol.https.HttpsURLConnectionImpl.getResponseCode(HttpsURLConnectionImpl.java:352)
at com.google.api.client.http.javanet.NetHttpResponse.<init>(NetHttpResponse.java:37)
at com.google.api.client.http.javanet.NetHttpRequest.execute(NetHttpRequest.java:105)
at com.google.api.client.http.HttpRequest.execute(HttpRequest.java:981)
... 17 more
This is happening while we try to upload a csv file to the BQ table.
Code snippet:

TableDataWriteChannel writeChannel = client.writer(jobId,writeChannelConfiguration);

try(OutputStream stream = Channels.newOutputStream(writeChannel)){ // The line number in the issue is when it tries to close "stream" which is a "TableDataWriteChannel"
    Files.copy(csvFilePath, stream);
}
catch (IOException ex){
    throw new AdapterException("Exception occurred during file upload: " + ex);
} catch (BigQueryException be) {
    logger.error("Bigquery exception while loading ", be);

This happened with client version 1.35.0

<dependency>
  <groupId>com.google.cloud</groupId>
  <artifactId>google-cloud-bigquery</artifactId>
  <version>1.35.0</version>
</dependency>

After the Files.copy is over, it tries to close the stream. As part of the close it calls com.google.cloud.bigquery.TableDataWriteChannel.flushBuffer(TableDataWriteChannel.java:52), where it tries to write any last bytes of data, and while writing there is an IOException at HttpBigQueryRpc.java:465, which is translated to a BigQueryException.

There is not enough documentation about when the exception can happen or how it can be resolved.

Best practice to send data to bigquery from multiple services

Hi,

I have a use case with thousands of microservices, and all of these services generate JSON data that needs to be stored in BigQuery. If I have to call BigQuery from each individual service to store the data, what is the recommended approach? Should I use streaming inserts or a load job? At most 100 services would be calling BigQuery in parallel to send their JSON data.

Thanks,
Mani

Synthesis failed for java-bigquery

Hello! Autosynth couldn't regenerate java-bigquery. 💔

Here's the output from running synth.py:

Cloning into 'working_repo'...
Switched to a new branch 'autosynth'
Cloning into '/tmpfs/tmp/tmpmr2nwz21/synthtool'...
Switched to branch 'autosynth-self'
Note: checking out '050e708606e662adc5aa5705bbf8715f3d1e3686'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:

  git checkout -b <new-branch-name>

HEAD is now at 050e708 changes without context (#301)
Note: checking out '716f741f2d307b48cbe8a5bc3bc883571212344a'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:

  git checkout -b <new-branch-name>

HEAD is now at 716f741 fix(python): adjust regex for fix_pb2_headers (#500)
Switched to a new branch 'autosynth-self-1'
2020-04-30 13:57:47 [INFO] Running synthtool
2020-04-30 13:57:47 [INFO] ['/tmpfs/src/github/synthtool/env/bin/python3', '-m', 'synthtool', '--metadata', 'synth.metadata', 'synth.py', '--']
Traceback (most recent call last):
  File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 183, in _run_module_as_main
    mod_name, mod_spec, code = _get_module_details(mod_name, _Error)
  File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 142, in _get_module_details
    return _get_module_details(pkg_main_name, error)
  File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 109, in _get_module_details
    __import__(pkg_name)
  File "/tmpfs/src/github/synthtool/synthtool/__init__.py", line 21, in <module>
    from synthtool import update_check
  File "/tmpfs/src/github/synthtool/synthtool/update_check.py", line 19, in <module>
    import packaging.version
ModuleNotFoundError: No module named 'packaging'
2020-04-30 13:57:47 [ERROR] Synthesis failed
HEAD is now at 050e708 changes without context (#301)
Switched to branch 'autosynth-self'
Note: checking out '050e708606e662adc5aa5705bbf8715f3d1e3686'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:

  git checkout -b <new-branch-name>

HEAD is now at 050e708 changes without context (#301)
Previous HEAD position was 716f741 fix(python): adjust regex for fix_pb2_headers (#500)
HEAD is now at 6b685a2 fix: synthtool path (#515)
Switched to a new branch 'autosynth-7'
2020-04-30 13:57:47 [INFO] Running synthtool
2020-04-30 13:57:47 [INFO] ['/tmpfs/src/github/synthtool/env/bin/python3', '-m', 'synthtool', '--metadata', 'synth.metadata', 'synth.py', '--']
Traceback (most recent call last):
  File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 183, in _run_module_as_main
    mod_name, mod_spec, code = _get_module_details(mod_name, _Error)
  File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 142, in _get_module_details
    return _get_module_details(pkg_main_name, error)
  File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 109, in _get_module_details
    __import__(pkg_name)
  File "/tmpfs/src/github/synthtool/synthtool/__init__.py", line 21, in <module>
    from synthtool import update_check
  File "/tmpfs/src/github/synthtool/synthtool/update_check.py", line 19, in <module>
    import packaging.version
ModuleNotFoundError: No module named 'packaging'
2020-04-30 13:57:47 [ERROR] Synthesis failed
HEAD is now at 050e708 changes without context (#301)
Switched to branch 'autosynth'
Traceback (most recent call last):
  File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 576, in <module>
    main()
  File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 457, in main
    return _inner_main(temp_dir)
  File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 566, in _inner_main
    commit_count = synthesize_loop(x, multiple_prs, change_pusher, synthesizer)
  File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 352, in synthesize_loop
    synthesize_inner_loop(toolbox, synthesizer)
  File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 362, in synthesize_inner_loop
    synthesizer, len(toolbox.versions) - 1
  File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 259, in synthesize_version_in_new_branch
    synthesizer.synthesize(self.environ)
  File "/tmpfs/src/github/synthtool/autosynth/synthesizer.py", line 115, in synthesize
    synth_proc.check_returncode()  # Raise an exception.
  File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/subprocess.py", line 389, in check_returncode
    self.stderr)
subprocess.CalledProcessError: Command '['/tmpfs/src/github/synthtool/env/bin/python3', '-m', 'synthtool', '--metadata', 'synth.metadata', 'synth.py', '--']' returned non-zero exit status 1.

Google internal developers can see the full log here.

Internal Deprecated method call

[WARNING] /home/elharo/java-bigquery/google-cloud-bigquery/src/main/java/com/google/cloud/bigquery/BigQueryImpl.java:[460,18] delete(java.lang.String,java.lang.String) in com.google.cloud.bigquery.BigQuery has been deprecated
[INFO] /home/elharo/java-bigquery/google-cloud-bigquery/src/main/java/com/google/cloud/bigquery/JobStatus.java: Some input files use unchecked or unsafe operations.

Remove ACL permissions for a dataset

Is there a way to remove ACL permission for a given Acl.User's email address from a BQ Dataset? There seems to be an example to add permissions, but not to remove them per Acl.User.

Seems like Entity has a Type but I'm unable to get the email address needed to be removed individually from the enum Type USER. Is there a workaround for this?

Thanks so much for your help.
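One possible workaround (not confirmed in this thread): read the dataset's ACL, filter out the entry whose Acl.User email matches, and update the dataset with the remaining entries. A hedged sketch, with the dataset name and email as placeholders:

import com.google.cloud.bigquery.Acl;
import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.Dataset;
import java.util.ArrayList;
import java.util.List;

public class RemoveDatasetAclSketch {
  public static void main(String[] args) {
    BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();
    String datasetName = "my_dataset";          // placeholder
    String emailToRemove = "user@example.com";  // placeholder

    Dataset dataset = bigquery.getDataset(datasetName);
    List<Acl> remaining = new ArrayList<>();
    for (Acl acl : dataset.getAcl()) {
      boolean isTargetUser =
          acl.getEntity() instanceof Acl.User
              && ((Acl.User) acl.getEntity()).getEmail().equals(emailToRemove);
      if (!isTargetUser) {
        remaining.add(acl); // keep every other entry unchanged
      }
    }
    // Updating the dataset with the filtered ACL list drops that user's access.
    bigquery.update(dataset.toBuilder().setAcl(remaining).build());
  }
}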

Loading multiple parquet files / appending throws error

Hi,

I have used both the UI as well as the CLI (haven't tried any of the APIs, but I bet it would be the same) to load about 6TB of Parquet files into BigQuery. The data works perfectly fine with Spark and on Azure; unfortunately, here I am getting an error:

CLI
bq load --replace --source_format=PARQUET Bmb.bmb_full gs://bmb/mde_to_gcp/*

Waiting on bqjob_r670ec74d17e6f1df_0000016eeca6a984_1 ... (8s) Current status: DONE
BigQuery error in load operation: Error processing job 'sap-crystal-ball-nxt:bqjob_r670ec74d17e6f1df_0000016eeca6a984_1': Error while reading data, error message: Provided schema is not compatible with
the file 'part-00114-fb8943dc-3fb6-4f1e-8832-d5eefc08114e-c000.snappy.parquet'. Field 'filename' is specified as NULLABLE in provided schema which does not match REQUIRED as specified in the file.

Unfortunately I can't get the stack trace back either:

bq --format=prettyjson show -j bqjob_r670ec74d17e6f1df_0000016eeca6a984_1

BigQuery error in show operation: Not found: Job sap-crystal-ball-nxt:bqjob_r670ec74d17e6f1df_0000016eeca6a984_1

Reads a bit like this one here:
googleapis/google-cloud-python#8305

Add test scope

    <dependency>
      <groupId>com.google.truth</groupId>
      <artifactId>truth</artifactId>
    </dependency>
    <dependency>
      <groupId>org.easymock</groupId>
      <artifactId>easymock</artifactId>
    </dependency>
    <dependency>
      <groupId>org.objenesis</groupId>
      <artifactId>objenesis</artifactId>
    </dependency>

BigQuery Java API: NPE on com.google.cloud.bigquery.StandardTableDefinition.fromPb(StandardTableDefinition.java:298)

Environment details

  1. Specify the API at the beginning of the title (for example, "BigQuery: ...") : BigQuery
    General, Core, and Other are also allowed as types
  2. OS type and version: Any OS
  3. Java version: 1.8
  4. google-cloud-java version(s): com.google.cloud:google-cloud-bigquery:1.110.0

Steps to reproduce

NOT ABLE TO REPRODUCE

Code example

Page<Table> tables = bigquery.listTables(datasetId, BigQuery.TableListOption.pageSize(1));
Iterable<Table> table_iterator = tables.iterateAll();
for (Table table : table_iterator) {
  if (table.getDefinition().getType().toString().equalsIgnoreCase("TABLE")) {
    System.out.println("table");
  }
}

Stack trace

Caused by: java.lang.NullPointerException: Null pointer - Got unexpected time partitioning {"field":"XXXXX"} in project XXXXXX in dataset XXXXX in table XXXXX java.lang.NullPointerException: Name is null
        at com.google.cloud.bigquery.StandardTableDefinition.fromPb(StandardTableDefinition.java:298)
        at com.google.cloud.bigquery.TableDefinition.fromPb(TableDefinition.java:151)
        at com.google.cloud.bigquery.TableInfo$BuilderImpl.<init>(TableInfo.java:188)
        at com.google.cloud.bigquery.Table.fromPb(Table.java:624)
        at com.google.cloud.bigquery.BigQueryImpl$21.apply(BigQueryImpl.java:847)
        at com.google.cloud.bigquery.BigQueryImpl$21.apply(BigQueryImpl.java:844)
        at com.google.common.collect.Iterators$6.transform(Iterators.java:786)
        at com.google.common.collect.TransformedIterator.next(TransformedIterator.java:47)
        at com.google.cloud.PageImpl$PageIterator.computeNext(PageImpl.java:72)
        at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:141)
        at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:136)

Any additional information below

This issue occurs on a few tables in Prod which I am not able to replicate in Dev.
The rpc call has the partition type as null but the TimePartitioning object in the response pb is not null which is leading to an NPE at this point.

builder.setTimePartitioning(TimePartitioning.fromPb(tablePb.getTimePartitioning()));

Support connectionProperties for job/query insertion

Once the discovery dependency is released at rev20200415 or later, we should include the ability to set connection properties as part of the job configuration for queries. They're key/value pairs, defined in discovery thusly:

"ConnectionProperty": {
"id": "ConnectionProperty",
"type": "object",
"properties": {
"value": {
"description": "[Required] Value of the connection property.",
"type": "string"
},
"key": {
"description": "[Required] Name of the connection property to set.",
"type": "string"
}
}
},

They're available in the discovery resources JobConfigurationQuery and QueryRequest as an array:

"connectionProperties": {
"type": "array",
"items": {
"$ref": "ConnectionProperty"
},
"description": "Connection properties."
},

Please include the necessary functionality in the manual layer to be able to set and interact with connection properties. Will follow up with more detail about the available properties.
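For reference, later releases of the client expose this through a ConnectionProperty class attached to QueryJobConfiguration. The sketch below assumes such a version and uses the session_id property purely as an illustrative key/value pair:

import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.ConnectionProperty;
import com.google.cloud.bigquery.QueryJobConfiguration;
import com.google.common.collect.ImmutableList;

public class ConnectionPropertySketch {
  public static void main(String[] args) throws InterruptedException {
    BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();

    // A key/value connection property; "session_id" is used here only as an example key.
    ConnectionProperty sessionProperty =
        ConnectionProperty.newBuilder().setKey("session_id").setValue("my-session-id").build();

    QueryJobConfiguration queryConfig =
        QueryJobConfiguration.newBuilder("SELECT 1")
            .setConnectionProperties(ImmutableList.of(sessionProperty))
            .build();

    bigquery.query(queryConfig);
  }
}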

HivePartitioningOptions does not have option to set require partition filter

In the BigQuery API I see four options (mode, uriPrefix, requirePartitionFilter, and fields), but in the Java client I only see the first two. Not sure if this was intentional.

https://cloud.google.com/bigquery/docs/reference/rest/v2/tables#hivepartitioningoptions

requirePartitionFilter (boolean)

Enabling requirePartitionFilter outside of HivePartitioningOptions throws the following error: Caused by: com.google.api.client.googleapis.json.GoogleJsonResponseException: 400 Bad Request
{
  "code" : 400,
  "errors" : [ {
    "domain" : "global",
    "message" : "require_partition_filter has been set for a hive-partitioned table without using the HivePartitioningOptions. To configure external, hive-partitioned tables -- including setting require_partition_filter -- please use the HivePartitioningOptions. Setting require_partition_filter on the top-level table definition, or via the TimePartitioning field, does not configure hive-partitioned tables.",
    "reason" : "invalid"
  } ],
  "message" : "require_partition_filter has been set for a hive-partitioned table without using the HivePartitioningOptions. To configure external, hive-partitioned tables -- including setting require_partition_filter -- please use the HivePartitioningOptions. Setting require_partition_filter on the top-level table definition, or via the TimePartitioning field, does not configure hive-partitioned tables.",
  "status" : "INVALID_ARGUMENT"
}
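For context, a hedged sketch of defining an external hive-partitioned table with the options the client exposes; the bucket paths and table names are placeholders, and the setRequirePartitionFilter call reflects the option requested in this issue, assuming a client version that has since added it:

import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.ExternalTableDefinition;
import com.google.cloud.bigquery.FormatOptions;
import com.google.cloud.bigquery.HivePartitioningOptions;
import com.google.cloud.bigquery.TableId;
import com.google.cloud.bigquery.TableInfo;

public class HivePartitioningSketch {
  public static void main(String[] args) {
    BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();
    String sourceUriPrefix = "gs://my-bucket/my-table/"; // placeholder
    String sourceUri = "gs://my-bucket/my-table/*";      // placeholder

    HivePartitioningOptions hiveOptions =
        HivePartitioningOptions.newBuilder()
            .setMode("AUTO")
            .setSourceUriPrefix(sourceUriPrefix)
            // The option this issue asks for; only present in newer client releases.
            .setRequirePartitionFilter(true)
            .build();

    ExternalTableDefinition definition =
        ExternalTableDefinition.newBuilder(sourceUri, FormatOptions.parquet())
            .setAutodetect(true)
            .setHivePartitioningOptions(hiveOptions)
            .build();

    bigquery.create(TableInfo.of(TableId.of("my_dataset", "my_hive_table"), definition));
  }
}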

Bigquery exception while loading com.google.cloud.bigquery.BigQueryException: Error writing request body to server

One of the customers using Striim's BigQueryConnector is experiencing the following exception:

2020-02-20 20:50:41,385 @S10_111_12_29 @toko_product.tokopedia_product -ERROR toko_product.bq_tokopedia_product-0 com.striim.bigquery.TransferCSVDataToBQ.transferToBQ (TransferCSVDataToBQ.java:139) Bigquery exception while loading
com.google.cloud.bigquery.BigQueryException: Error writing request body to server
at com.google.cloud.bigquery.spi.v2.HttpBigQueryRpc.translate(HttpBigQueryRpc.java:99)
at com.google.cloud.bigquery.spi.v2.HttpBigQueryRpc.write(HttpBigQueryRpc.java:465)
at com.google.cloud.bigquery.TableDataWriteChannel$1.call(TableDataWriteChannel.java:56)
at com.google.cloud.bigquery.TableDataWriteChannel$1.call(TableDataWriteChannel.java:53)
at com.google.api.gax.retrying.DirectRetryingExecutor.submit(DirectRetryingExecutor.java:89)
at com.google.cloud.RetryHelper.run(RetryHelper.java:74)
at com.google.cloud.RetryHelper.runWithRetries(RetryHelper.java:51)
at com.google.cloud.bigquery.TableDataWriteChannel.flushBuffer(TableDataWriteChannel.java:52)
at com.google.cloud.BaseWriteChannel.close(BaseWriteChannel.java:161)
at java.nio.channels.Channels$1.close(Channels.java:178)
at com.striim.bigquery.TransferCSVDataToBQ.transferToBQ(TransferCSVDataToBQ.java:135)
at com.striim.bigquery.BigQueryIntegrationTask.transferData(BigQueryIntegrationTask.java:58)
at com.striim.bigquery.BigQueryIntegrationTask.execute(BigQueryIntegrationTask.java:88)
at com.striim.dwhwriter.integrator.IntegrationTask.call(IntegrationTask.java:46)
at com.striim.dwhwriter.integrator.IntegrationTask.call(IntegrationTask.java:23)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: Error writing request body to server
at sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.checkError(HttpURLConnection.java:3597)
at sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.write(HttpURLConnection.java:3580)
at com.google.api.client.util.ByteStreams.copy(ByteStreams.java:55)
at com.google.api.client.util.IOUtils.copy(IOUtils.java:94)
at com.google.api.client.http.AbstractInputStreamContent.writeTo(AbstractInputStreamContent.java:72)
at com.google.api.client.http.javanet.NetHttpRequest.execute(NetHttpRequest.java:80)
at com.google.api.client.http.HttpRequest.execute(HttpRequest.java:981)
at com.google.cloud.bigquery.spi.v2.HttpBigQueryRpc.write(HttpBigQueryRpc.java:448)
... 17 more

This is happening while we try to upload a csv file to the BQ table.
Code snippet:

    // Describe the resulting table you are importing to:
    TableId tid = TableId.of(projectId,datasetId, tableId);

    Schema schema = client.getTable(tid).getDefinition().getSchema();
    //Loads of options available which can be exposed later
    CsvOptions csvOptions = CsvOptions
            .newBuilder()
            .setAllowQuotedNewLines(true)
            .setEncoding(encoding)
            // can be exposed but use case is very specific
            .setAllowJaggedRows(false)
            .setFieldDelimiter(columnDelimiter)
            .setQuote(quoteCharacter)
            .setSkipLeadingRows(0)
            .build();

    WriteChannelConfiguration writeChannelConfiguration = WriteChannelConfiguration
            .newBuilder(tid)
            .setFormatOptions(csvOptions)
            .setSchema(schema)
            .setMaxBadRecords(0)
            .setWriteDisposition(JobInfo.WriteDisposition.WRITE_APPEND)
            .setNullMarker(nullMarker)
            .build();

    Path csvFilePath = Paths.get(csvPath);

    JobId jobId = /* build job id */
    TableDataWriteChannel writeChannel = client.writer(jobId,writeChannelConfiguration);

    try(OutputStream stream = Channels.newOutputStream(writeChannel)){
        Files.copy(csvFilePath, stream);
    }
    catch (IOException ex){
        throw new AdapterException("Exception occurred during file upload: " + ex);
    } catch (BigQueryException be) {
        logger.error("Bigquery exception while loading ", be);

Could this be due to network issues? I saw a similar issue logged for GCS: googleapis/google-cloud-java#3410.

If it's a similar issue, has this been fixed for BigQuery already? This happened with client version 1.35.0:

   <dependency>
        <groupId>com.google.cloud</groupId>
        <artifactId>google-cloud-bigquery</artifactId>
        <version>1.35.0</version>
    </dependency>

Named parameters do not work with records/subfields

Named parameters do not work when executing a query with Record type attributes. The FieldValueList does not pass the schema to the FieldValue and then to the FieldValueList after FieldValue.getRecordValue() is called. This means the schema instance is no longer available and the FieldValueList.get(String name) method no longer works. Also, due to encapsulation, the schema cannot be provided to the FieldValueList once it has been created.

The expected behavior would be passing the schema to the FieldValueList when it is a record type so that named parameters continue to work.
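To make the behavior concrete, here is a hedged sketch of the pattern this issue describes: a named-parameter query returning a STRUCT, where the nested FieldValueList currently has to be read by position because the schema is not carried through getRecordValue(). The query and parameter values are illustrative only.

import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.FieldValueList;
import com.google.cloud.bigquery.QueryJobConfiguration;
import com.google.cloud.bigquery.QueryParameterValue;
import com.google.cloud.bigquery.TableResult;

public class RecordFieldSketch {
  public static void main(String[] args) throws InterruptedException {
    BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();

    // STRUCT(...) AS address yields a record column in the result.
    QueryJobConfiguration queryConfig =
        QueryJobConfiguration.newBuilder(
                "SELECT STRUCT('Main St' AS street, @city AS city) AS address")
            .addNamedParameter("city", QueryParameterValue.string("Springfield"))
            .setUseLegacySql(false)
            .build();

    TableResult result = bigquery.query(queryConfig);
    for (FieldValueList row : result.iterateAll()) {
      FieldValueList address = row.get("address").getRecordValue();
      // The nested FieldValueList carries no schema, so sub-fields must be read
      // by position rather than by name (the behavior this issue describes).
      String street = address.get(0).getStringValue();
      String city = address.get(1).getStringValue();
      System.out.println(street + ", " + city);
    }
  }
}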

More information required for exception "BigQueryError{reason=internalError, location=null, message=An internal error occurred and the request could not be completed. Error: 4663124"

The following execution error is seen while executing a job which tries to upload data from a local file to a BigQuery table:

BigQueryError{reason=internalError, location=null, message=An internal error occurred and the request could not be completed. Error: 4663124} Execution errors: [BigQueryError{reason=internalError, location=null, message=An internal error occurred and the request could not be completed. Error: 4663124}]

Following is the code snippet :

   WriteChannelConfiguration writeChannelConfiguration = WriteChannelConfiguration
            .newBuilder(tid)
            .setFormatOptions(csvOptions)
            .setSchema(schema)
            .setMaxBadRecords(0)
            .setWriteDisposition(JobInfo.WriteDisposition.WRITE_APPEND)
            .setNullMarker(nullMarker)
            .build();

    Path csvFilePath = Paths.get(csvPath);

    JobId jobId = JobId
            .newBuilder()
            .setJob(/* a unique job id*/)
            .setLocation(location)
            .setProject(client.getOptions().getProjectId())
            .build();

    TableDataWriteChannel writeChannel = client.writer(jobId,writeChannelConfiguration);

    try(OutputStream stream = Channels.newOutputStream(writeChannel)){
        Files.copy(csvFilePath /*file to be uploaded to the big query table */, stream);
    }
    catch (IOException ex){
        //  throw it
    }

    if(completedJob != null && completedJob.getStatus().getError() == null){
         // do something
    }
    else{
        /*throw exception*/
    }

Problem encountered:

Often we get the error status (this has been experienced by a couple of customers who use Striim):

BigQueryError{reason=internalError, location=null, message=An internal error occurred and the request could not be completed. Error: 4663124} Execution errors: [BigQueryError{reason=internalError, location=null, message=An internal error occurred and the request could not be completed. Error: 4663124}]

I couldn't find any explanation for the error code 4663124. When/why would this occur, and how can this error be fixed?

The BigQuery client version used is com.google.cloud:google-cloud-bigquery:1.35.0.

Region of table is US
Job ID : Striim_BigQueryWriter_raw_cdc_tkp_flight_insurance_fi_order_journey_203_csv_gz1579083287830

Error Log:

2020-01-15 10:15:13,486 @S10_111_12_21 @NULL_APPNAME -ERROR admin.tkp_flight_insurance_bq-0 com.striim.bigquery.BigQueryIntegrationTask.execute (BigQueryIntegrationTask.java:159) Failed to upload data from /opt/striim/.striim/admin/tkp_flight_insurance_bq/raw_cdc.tkp_flight_insurance_fi_order_journey/raw_cdc.tkp_flight_insurance_fi_order_journey_203.csv.gz
com.webaction.common.exc.AdapterException: WA-100 : Unexpected Exception. Cause: Upload job failed. Reason: BigQueryError{reason=internalError, location=null, message=An internal error occurred and the request could not be completed. Error: 4663124} Execution errors: [BigQueryError{reason=internalError, location=null, message=An internal error occurred and the request could not be completed. Error: 4663124}] Job: Striim_BigQueryWriter_raw_cdc_tkp_flight_insurance_fi_order_journey_203_csv_gz1579083287830 File: /opt/striim/.striim/admin/tkp_flight_insurance_bq/raw_cdc.tkp_flight_insurance_fi_order_journey/raw_cdc.tkp_flight_insurance_fi_order_journey_203.csv.gz Target Table: raw_cdc.tkp_flight_insurance_fi_order_journey

There is no response for the same ticket here - https://issuetracker.google.com/issues/148018440

Sync sample dependencies

The samples directory is behind the rest of the repo in dependency versions. Let's figure out how to share dependency versions between these.

Exception while describing dataset through SDK when allUsers access has been set

Initially reported:
https://issuetracker.google.com/issues/149630959

copied report from the issuetracker:

Through the console, we have created datasets and granted access to them as "allUsers".

We are currently using google-cloud-bigquery version 1.102.0, and we are facing an exception when we try to call the describe method (i.e., client.getDataset(datasetId)).

This is the exception which is being thrown:
Method threw 'com.google.cloud.bigquery.BigQueryException' exception. Cannot evaluate com.google.cloud.bigquery.Dataset.toString()

I believe this is because the class Group, which extends Acl.Entity, has only the following values defined:

private static final String PROJECT_OWNERS = "projectOwners";
private static final String PROJECT_READERS = "projectReaders";
private static final String PROJECT_WRITERS = "projectWriters";
private static final String ALL_AUTHENTICATED_USERS = "allAuthenticatedUsers";

We are not facing a problem when the ACL has been set to any of the other four values listed above, only when allUsers has been selected.

Any idea if this is a known bug, and if so in which version can we expect a fix for it?

Commentary from shollyman

We should verify how the java library is handling parsing of the access field in the dataset resource. This may be another instance of the same issue we saw in go:

googleapis/google-cloud-go#1658
https://code-review.googlesource.com/c/gocloud/+/48291

BigQuery.TableField does not contain RANGE_PARTITIONING

Any additional information below

Can't request tables.list(BigQuery.TableField.RANGE_PARTITIONING) because it's not in the Java API, despite being on the server side.

Following these steps guarantees the quickest resolution possible.

<3 but I think this one is pretty self-explanatory. See also #19 for why this is a painful bug.

BigQuery: Java - Implement Integer Range Partitioning

This is a tracking FR for implementing the Integer Range Partitioning functionality within the
BigQuery API.

Integer range partitioning supports partitioning a table based on the values in a specified integer column, and divides the table using start, stop, and interval values. A table can only support a single partitioning specification, so users can choose either integer partitioning or time-based partitioning, but not both. Clustering is independent of partitioning and works with both.

More implementation details about this API are shared directly with library
maintainers. Please contact shollyman for questions.
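For readers finding this later: the samples list above includes a Create Range Partitioned Table sample, so the feature is now exposed through a RangePartitioning table definition. A hedged sketch along those lines, with the field name, range bounds, and table IDs as placeholders:

import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.Field;
import com.google.cloud.bigquery.RangePartitioning;
import com.google.cloud.bigquery.Schema;
import com.google.cloud.bigquery.StandardSQLTypeName;
import com.google.cloud.bigquery.StandardTableDefinition;
import com.google.cloud.bigquery.TableId;
import com.google.cloud.bigquery.TableInfo;

public class RangePartitioningSketch {
  public static void main(String[] args) {
    BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();

    Schema schema =
        Schema.of(
            Field.of("integerField", StandardSQLTypeName.INT64),
            Field.of("name", StandardSQLTypeName.STRING));

    // Partition rows by integerField into buckets covering [0, 100) with a width of 10.
    RangePartitioning.Range range =
        RangePartitioning.Range.newBuilder().setStart(0L).setEnd(100L).setInterval(10L).build();
    RangePartitioning rangePartitioning =
        RangePartitioning.newBuilder().setField("integerField").setRange(range).build();

    StandardTableDefinition definition =
        StandardTableDefinition.newBuilder()
            .setSchema(schema)
            .setRangePartitioning(rangePartitioning)
            .build();

    bigquery.create(TableInfo.of(TableId.of("my_dataset", "my_range_table"), definition));
  }
}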

Bigquery Stream Insert Missing Data

I have been running tests for inserting data into Google BigQuery, and I have experienced missing data in my testing. Below are the details about this issue.
Testing scenario: Insert data into BigQuery using the latest (v1.88.0) streaming API.
Code:

public Map<Long, List<BigQueryError>> performWriteRequest(TableId tableId,
                                                          List<InsertAllRequest.RowToInsert> rows) {
    InsertAllRequest request = createInsertAllRequest(tableId, rows);
    InsertAllResponse writeResponse = bigQuery.insertAll(request);
    if (writeResponse.hasErrors()) {
        System.out.println("Error inserting into BQ"); 
        return writeResponse.getInsertErrors();
    } else {
        logger.debug("table insertion completed with no reported errors"); 
        return new HashMap<>();
    }
}

Testing information:

  • Each round the program calls Bigquery 10K - 20K times.
  • Each Bigquery.insertAll request only inserts 1 row (some rounds also insert 100 rows per request), without using rowid to do deduplication, so rows should not be filtered.
  • Each round takes about 5 minutes so it is not possible to hit any quota limitation (data is small).
  • Totally tested 30-40 rounds.
  • Some rounds use exactly the same data, others use different data.
  • I did not drop the table and recreate, always use the new table with a different name.
  • In the final round of the test, the program also pushed data to GCS and dump to a local file as well for comparison.

Observations:

  • In about 5-10 rounds I saw data missing. There is no pattern showing when and what data might be missing; it seems kind of random.
  • No error returned from the Bigquery.insertall, from the client-side, all requests were successfully executed.
  • There is a retry policy in the program, but it is never triggered since no error returned.
  • Important: For new tables, I observed that the estimated number of rows in the BigQuery stream buffer equals the number of rows that I inserted, and also equals the number of rows in the file pushed to GCS (last round of testing). But using SELECT COUNT(*), I get fewer rows (about 1 missing in every 10-20K requests). After a while, the stream buffer info will be gone and the number of rows shown in the table info is the same as SELECT COUNT(*), which is smaller than it is supposed to be.

I understand that the stream buffer only provides an estimated number, and we should not trust it, but every time after the data push the number is exactly the same as the number of rows being pushed. Maybe this suggests that BigQuery received the data but ultimately dropped it for some unknown reason? This might be a bug.

This issue might be related to #7433, #876, googleapis/google-cloud-java#3344, and #3822.
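Not a fix for the reported loss, but for comparison, a hedged sketch of attaching a client-generated insertId so the backend can deduplicate retried rows on a best-effort basis; the table and payload here are placeholders:

import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.InsertAllRequest;
import com.google.cloud.bigquery.InsertAllResponse;
import com.google.cloud.bigquery.TableId;
import com.google.common.collect.ImmutableMap;
import java.util.UUID;

public class InsertIdSketch {
  public static void main(String[] args) {
    BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();
    TableId tableId = TableId.of("my_dataset", "my_table"); // placeholders

    // A client-generated insertId lets the backend deduplicate retried rows on a
    // best-effort basis; it does not change the delivery guarantees of streaming inserts.
    String insertId = UUID.randomUUID().toString();
    InsertAllRequest request =
        InsertAllRequest.newBuilder(tableId)
            .addRow(insertId, ImmutableMap.of("name", "value"))
            .build();

    InsertAllResponse response = bigquery.insertAll(request);
    if (response.hasErrors()) {
      System.out.println("Insert errors: " + response.getInsertErrors());
    }
  }
}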

[Code sample feedback] The method getService() is undefined for the type BigQueryOptions

original ticket: b/143921331

copy from buganizer

User feedback report:
https://listnr.corp.google.com/report/86627433558

URL: https://cloud.google.com/bigquery/docs/quickstarts/quickstart-client-libraries#bigquery_simple_app_all-java
User Agent: Mozilla/5.0 (Windows NT 6.3; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.87 Safari/537.36
Platform: Win32
UI Language: en

Description:
The method getService() is undefined for the type BigQueryOptions

Getting this error. Have all the jars and correct imports.

comment from Steph

Please try to reproduce first.

JUnit none() is deprecated

[WARNING] /home/elharo/java-bigquery/google-cloud-bigquery/src/test/java/com/google/cloud/bigquery/OptionTest.java:[39,60] none() in org.junit.rules.ExpectedException has been deprecated
[WARNING] /home/elharo/java-bigquery/google-cloud-bigquery/src/test/java/com/google/cloud/bigquery/TableDataWriteChannelTest.java:[70,60] none() in org.junit.rules.ExpectedException has been deprecated
[WARNING] /home/elharo/java-bigquery/google-cloud-bigquery/src/test/java/com/google/cloud/bigquery/testing/RemoteBigQueryHelperTest.java:[68,60] none() in org.junit.rules.ExpectedException has been deprecated
[WARNING] /home/elharo/java-bigquery/google-cloud-bigquery/src/test/java/com/google/cloud/bigquery/BigQueryImplTest.java:[432,60] none() in org.junit.rules.ExpectedException has been deprecated
[WARNING] /home/elharo/java-bigquery/google-cloud-bigquery/src/test/java/com/google/cloud/bigquery/BigQueryOptionsTest.java:[26,60] none() in org.junit.rules.ExpectedException has been deprecated
[WARNING] /home/elharo/java-bigquery/google-cloud-bigquery/src/test/java/com/google/cloud/bigquery/TimePartitioningTest.java:[40,60] none() in org.junit.rules.ExpectedException has been deprecated
[WARNING] /home/elharo/java-bigquery/google-cloud-bigquery/src/test/java/com/google/cloud/bigquery/JobTest.java:[88,66] none() in org.junit.rules.ExpectedException has been deprecated

Reformat plugin is misbehaving with spaces

The autoformat plugin is formatting code like this:

    expect(
            bigqueryRpcMock.open(
                new com.google.api.services.bigquery.model.Job()
                    .setJobReference(JOB_INFO.getJobId().toPb())
                    .setConfiguration(LOAD_CONFIGURATION.toPb())))
        .andThrow(ex);

That is, it is indenting code eight spaces on continuation lines, possibly in concert with Eclipse. I'm not sure why.

BigQuery.listJobs is throwing NPE

Follow-on from #17 (which still exists): listJobs is ALSO throwing an NPE:

java.lang.NullPointerException: null
        at com.google.cloud.bigquery.JobStatistics.fromPb(JobStatistics.java:1183)
        at com.google.cloud.bigquery.JobInfo$BuilderImpl.<init>(JobInfo.java:154)
        at com.google.cloud.bigquery.Job.fromPb(Job.java:485)
        at com.google.cloud.bigquery.BigQueryImpl$32.apply(BigQueryImpl.java:1127)
        at com.google.cloud.bigquery.BigQueryImpl$32.apply(BigQueryImpl.java:1124)
        at com.google.common.collect.Iterators$6.transform(Iterators.java:783)
        at com.google.common.collect.TransformedIterator.next(TransformedIterator.java:47)

This is clearly dependent on the data the server is sending; in your tests you are not reproducing the server-side situation that we have; I don't know what that server side situation is, but I know it exists. In both this case and case #17, the client needs to be much more defensively programmed.

    JobConfiguration jobConfigPb = jobPb.getConfiguration(); // RETURNS NULL
    com.google.api.services.bigquery.model.JobStatistics statisticPb = jobPb.getStatistics();
    if (jobConfigPb.getLoad() != null) { // NPE

Create a backend wrapper to automatically update metadata when calling a load job

Customer Pain: The customer has to call the setMetadata() function every time they want to update a table's metadata.

Issue summary: Problem encountered:

When calling a load job using the Java Client Library for BigQuery, the metadata is only set at table creation, but not updated on recurring calls to the function. In order to update it, it is necessary to call the setMetadata() function.

What you expected to happen:

It would be great to have a backend wrapper that triggers setMetadata() when calling the load() function, to allow updating metadata once a BigQuery table has already been created.

Allow setting scopes to access datasets backed by Google Sheets

Is your feature request related to a problem? Please describe.
When using the tool StreamSets, we receive authorization errors when attempting to access a table in a dataset which is backed by a Google Sheets document in Google Drive. The authorization error indicates that the Google Drive scope was not requested, and that does not seem to be something that can be added based on the BigQueryOptions code. The target sheet has been shared with the service account user, but without the requested scope, we are unable to programmatically access the data.

Describe the solution you'd like
Ability to request additional scopes when building BigQueryOptions.

Additional context
Error text in StreamSets:

com.streamsets.pipeline.api.StageException: BIGQUERY_02 - Query Job execution error: 'BigQueryError{reason=invalid, location=1vvgwO1QfOdP0RODhx9li5pgHKTGtGVzFdXqiZbiTJLk, message=Error while reading table: sheets_integration.page_cat_clean, error message: Failed to read the spreadsheet. Errors: No OAuth token with Google Drive scope was found.}

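One workaround pattern (along the lines of the Auth Drive Scope sample listed above) is to build credentials that include the Drive scope and hand them to BigQueryOptions. A hedged sketch assuming Application Default Credentials are available:

import com.google.auth.oauth2.GoogleCredentials;
import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.common.collect.ImmutableList;
import java.io.IOException;

public class DriveScopeSketch {
  public static void main(String[] args) throws IOException {
    // Application Default Credentials re-scoped to include Google Drive,
    // which external tables backed by Sheets require.
    GoogleCredentials credentials =
        GoogleCredentials.getApplicationDefault()
            .createScoped(
                ImmutableList.of(
                    "https://www.googleapis.com/auth/bigquery",
                    "https://www.googleapis.com/auth/drive"));

    BigQuery bigquery =
        BigQueryOptions.newBuilder().setCredentials(credentials).build().getService();

    System.out.println("Client ready for project: " + bigquery.getOptions().getProjectId());
  }
}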

Mocking Bigquery Dataset

I have the following piece of code where I am first checking if the dataset exists or not and if the dataset doesn't exist, I create the dataset.

if (!bigquery.getDataset(tableID.getDataset()).exists()) {
  bigquery.create(DatasetInfo.of(tableID.getDataset()));
}

I am having difficulty in mocking the call bigquery.getDataset(tableID.getDataset()).
I am currently using Junit 4 and I am not able to mock the call.

If I do
when(bigquery.getDataset(tableId.getDataset())).thenReturn(new Dataset.Builder(bigquery, tableId.getDataset()));
then there is no way to create a Dataset in test. The Builder constructors are not public and we cannot build Dataset from DatasetInfo.

In this case, we cannot directly mock the call to getDataset.

Please guide as to how we can mock the call to getDataset.
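One way to do this with Mockito (not mentioned in the original question, which only names JUnit 4) is to mock BigQuery and, where a non-null result is needed, mock the Dataset class itself rather than trying to construct one. A hedged sketch, assuming Dataset remains mockable:

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.Dataset;
import org.junit.Test;

public class DatasetMockSketch {

  @Test
  public void mockGetDataset() {
    BigQuery bigquery = mock(BigQuery.class);

    // Case 1: the dataset does not exist. The real client returns null here,
    // so the code under test should null-check before calling exists().
    when(bigquery.getDataset("missing_dataset")).thenReturn(null);

    // Case 2: the dataset exists. Mockito can mock the Dataset class directly
    // instead of building one through the non-public builder.
    Dataset existing = mock(Dataset.class);
    when(existing.exists()).thenReturn(true);
    when(bigquery.getDataset("existing_dataset")).thenReturn(existing);

    // Exercise the code under test with the mocked BigQuery instance here.
  }
}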

Integration tests break mvn verify (without additional, undocumented setup)

With the inclusion of the samples into the repo, a simple clone and mvn verify no longer works. Undoubtedly there's some extra setup required. Ideally there shouldn't be, but if this is mandatory then the needed steps need to be clearly documented in the relevant files in the repo.

[ERROR] Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 1.002 s <<< FAILURE! - in com.google.cloud.bigquery.it.ITBigQueryTest
[ERROR] com.google.cloud.bigquery.it.ITBigQueryTest  Time elapsed: 1.001 s  <<< ERROR!
com.google.cloud.storage.StorageException: 401 Unauthorized
	at com.google.cloud.bigquery.it.ITBigQueryTest.beforeClass(ITBigQueryTest.java:290)
Caused by: com.google.api.client.googleapis.json.GoogleJsonResponseException: 401 Unauthorized
	at com.google.cloud.bigquery.it.ITBigQueryTest.beforeClass(ITBigQueryTest.java:290)

[ERROR] com.google.cloud.bigquery.it.ITBigQueryTest  Time elapsed: 1.002 s  <<< ERROR!
com.google.cloud.bigquery.BigQueryException: Request is missing required authentication credential. Expected OAuth 2 access token, login cookie or other valid authentication credential. See https://developers.google.com/identity/sign-in/web/devconsole-project.
	at com.google.cloud.bigquery.it.ITBigQueryTest.afterClass(ITBigQueryTest.java:323)
Caused by: com.google.api.client.googleapis.json.GoogleJsonResponseException: 
401 Unauthorized
{
  "code" : 401,
  "errors" : [ {
    "domain" : "global",
    "location" : "Authorization",
    "locationType" : "header",
    "message" : "Login Required.",
    "reason" : "required"
  } ],
  "message" : "Request is missing required authentication credential. Expected OAuth 2 access token, login cookie or other valid authentication credential. See https://developers.google.com/identity/sign-in/web/devconsole-project.",
  "status" : "UNAUTHENTICATED"
}
	at com.google.cloud.bigquery.it.ITBigQueryTest.afterClass(ITBigQueryTest.java:323)

[INFO] 
[INFO] Results:
[INFO] 
[ERROR] Errors: 
[ERROR] com.google.cloud.bigquery.it.ITBigQueryTest.com.google.cloud.bigquery.it.ITBigQueryTest
[ERROR]   Run 1: ITBigQueryTest.beforeClass:290 » Storage 401 Unauthorized
[ERROR]   Run 2: ITBigQueryTest.afterClass:323 » BigQuery Request is missing required authentic...
[INFO] 
[INFO] 
[ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0
[INFO] 

BigQuery: FieldValue.getTimestamp() use of Double causes precision loss in some cases.

Environment details

  1. Specify the API at the beginning of the title (for example, "BigQuery: ...")
    General, Core, and Other are also allowed as types
  2. OS type and version: any
  3. Java version: any
  4. google-cloud-java version(s): 1.96.0

Steps to reproduce

Create a row with this timestamp: 1337-09-02 11:43:21.622894 UTC, query and use fieldValue.getTimestamp(). Microseconds wouldn't be equal.

Code example

@Test
public void testFloatingPointPrecisionLoss() {
  FieldValue fieldValue = FieldValue.of(Attribute.PRIMITIVE, "-1.9954383398377106E10");
  long received = fieldValue.getTimestampValue();
  long expected = -19954383398377106L;
  assertEquals(expected, received);
}

Any additional information below

Cause: floating-point conversion causes it; need to use BigDecimal (or equivalent) to perform a lossless conversion.
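A hedged sketch of the suggested fix: parsing the decimal string with BigDecimal instead of going through double keeps every microsecond digit. The input value is taken from the test above; everything else is illustrative.

import java.math.BigDecimal;
import java.math.RoundingMode;

public class TimestampConversionSketch {
  public static void main(String[] args) {
    String raw = "-1.9954383398377106E10"; // seconds since epoch, as returned by the API

    // Double arithmetic can lose the last microsecond digits for values like this.
    long viaDouble = (long) (Double.parseDouble(raw) * 1_000_000);

    // BigDecimal keeps every digit of the decimal string; scaling by 10^6 converts
    // seconds to microseconds without going through binary floating point.
    long viaBigDecimal =
        new BigDecimal(raw).scaleByPowerOfTen(6).setScale(0, RoundingMode.HALF_UP).longValueExact();

    System.out.println(viaDouble);      // may differ in the last digits
    System.out.println(viaBigDecimal);  // -19954383398377106
  }
}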

BigQuery.JobListOption.parentJobId does not seem to work

Environment details

  1. OS type and version: Linux, Ubuntu 19.10
  2. Java version: 1.8.0_232-ea
  3. google-cloud-java version(s): 1.101.1

Steps to reproduce

Choose a job ID that has child jobs and run the code below.

The client lists ALL jobs, not just the (in my case, 3) children of the parent job. It looks as if JobListOption.parentJobId is ignored.

Code example

    @Test
    public void testListChildren() throws Exception {
        String jobId = // ...
        BigQuery bigQuery = BigQueryOptions.getDefaultInstance().getService();
        Iterable<? extends Job> jobChildren = bigQuery.listJobs(
                BigQuery.JobListOption.parentJobId(jobId)
        ).iterateAll();
        for (Job job : jobChildren) {
            job = job.reload(BigQuery.JobOption.fields(BigQuery.JobField.CONFIGURATION, BigQuery.JobField.ID, BigQuery.JobField.STATISTICS, BigQuery.JobField.STATUS));
            LOG.debug("Child job: " + job.getJobId());
            LOG.debug("Child's parent is " + job.getStatistics().getParentJobId());
        }
    }

Any additional information below

(Edited: calling reload() makes each job report its parent job ID, but the listing still includes jobs NOT under the given parent.)
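
A possible client-side workaround, sketched under that observation (not a fix for the server-side filter, and listing all jobs plus a reload per job is expensive); jobId and LOG are the same names used in the test above:

// Sketch: list jobs without the parentJobId option, reload each one so its statistics
// (including parentJobId) are populated, and filter locally.
for (Job job : bigQuery.listJobs().iterateAll()) {
    Job reloaded = job.reload(BigQuery.JobOption.fields(BigQuery.JobField.ID, BigQuery.JobField.STATISTICS));
    if (reloaded != null && jobId.equals(reloaded.getStatistics().getParentJobId())) {
        LOG.debug("Child job: " + reloaded.getJobId());
    }
}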

Synthesis failed for java-bigquery

Hello! Autosynth couldn't regenerate java-bigquery. 💔

Here's the output from running synth.py:

Cloning into 'working_repo'...
Switched to branch 'autosynth'
Running synthtool
['/tmpfs/src/git/autosynth/env/bin/python3', '-m', 'synthtool', 'synth.py', '--']
synthtool > Executing /tmpfs/src/git/autosynth/working_repo/synth.py.
On branch autosynth
nothing to commit, working tree clean
HEAD detached at FETCH_HEAD
nothing to commit, working tree clean
synthtool > Ensuring dependencies.
synthtool > Pulling artman image.
latest: Pulling from googleapis/artman
Digest: sha256:6aec9c34db0e4be221cdaf6faba27bdc07cfea846808b3d3b964dfce3a9a0f9b
Status: Image is up to date for googleapis/artman:latest
synthtool > Wrote metadata to synth.metadata.
Traceback (most recent call last):
  File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/__main__.py", line 102, in <module>
    main()
  File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/click/core.py", line 764, in __call__
    return self.main(*args, **kwargs)
  File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/click/core.py", line 717, in main
    rv = self.invoke(ctx)
  File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/click/core.py", line 956, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/click/core.py", line 555, in invoke
    return callback(*args, **kwargs)
  File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/__main__.py", line 94, in main
    spec.loader.exec_module(synth_module)  # type: ignore
  File "<frozen importlib._bootstrap_external>", line 678, in exec_module
  File "<frozen importlib._bootstrap>", line 205, in _call_with_frames_removed
  File "/tmpfs/src/git/autosynth/working_repo/synth.py", line 23, in <module>
    templates = common_templates.java_library()
  File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/gcp/common.py", line 75, in java_library
    return self._generic_library("java_library", **kwargs)
  File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/gcp/common.py", line 43, in _generic_library
    if not kwargs["metadata"]["samples"]:
KeyError: 'samples'

Synthesis failed

Google internal developers can see the full log here.

NPE when we tried to read the Dataobjects using BigQuery API


Environment details

  1. OS type and version: any
  2. Java version: 1.8
  3. google-cloud-java version(s): com.google.cloud:google-cloud-bigquery:1.110.0

Steps to reproduce

NOT able to reproduce

Code example

Page<Dataset> datasets =
    bigquery.listDatasets(project.toLowerCase(), BigQuery.DatasetListOption.pageSize(100));
for (Dataset dataset : datasets.iterateAll()) {
  System.out.println("Getting tables for DatasetId " + dataset.getDatasetId());
  Page<Table> tables =
      bigquery.listTables(dataset.getDatasetId(), BigQuery.TableListOption.pageSize(100));
  Iterator<Table> tableIterator = tables.iterateAll().iterator();
  while (tableIterator.hasNext()) {
    try {
      // The NPE in the stack trace below surfaces here, while the iterator converts
      // the next table from its API representation.
      Table table = tableIterator.next();
      System.out.println(table.toString());
      System.out.println(table.getTableId().getTable());
      System.out.println(table.getDefinition().getType());
      if (table.getDefinition().getType().toString().equals("TABLE")) {
        System.out.println("table");
      }
    } catch (Exception e) {
      e.printStackTrace();
    }
  }
}

Stack trace

java.lang.NullPointerException: Name is null
	at java.lang.Enum.valueOf(Enum.java:236)
	at com.google.cloud.bigquery.TimePartitioning$Type.valueOf(TimePartitioning.java:43)
	at com.google.cloud.bigquery.TimePartitioning.fromPb(TimePartitioning.java:145)
	at com.google.cloud.bigquery.StandardTableDefinition.fromPb(StandardTableDefinition.java:275)
	at com.google.cloud.bigquery.TableDefinition.fromPb(TableDefinition.java:151)
	at com.google.cloud.bigquery.TableInfo$BuilderImpl.<init>(TableInfo.java:188)
	at com.google.cloud.bigquery.Table.fromPb(Table.java:624)
	at com.google.cloud.bigquery.BigQueryImpl$21.apply(BigQueryImpl.java:847)
	at com.google.cloud.bigquery.BigQueryImpl$21.apply(BigQueryImpl.java:844)
	at com.google.common.collect.Iterators$6.transform(Iterators.java:820)
	at com.google.common.collect.TransformedIterator.next(TransformedIterator.java:48)
	at com.google.cloud.PageImpl$PageIterator.computeNext(PageImpl.java:72)
	at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:145)
	at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:140)
	at com.google.common.collect.AbstractIterator.next(AbstractIterator.java:156)
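
A minimal illustration of the suspected cause (an assumption based on the stack trace, not a confirmed root cause): Enum.valueOf throws "NullPointerException: Name is null" when the name is null, so a table whose time-partitioning type comes back unset or unrecognized would fail exactly like this.

// Assumption: the API response carried a partitioning type the client left null.
String typeFromApi = null;
com.google.cloud.bigquery.TimePartitioning.Type type =
    com.google.cloud.bigquery.TimePartitioning.Type.valueOf(typeFromApi); // NullPointerException: Name is null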


BigQuery: Page<Table>.iterateAll() iterator throws IllegalArgumentException

Environment details

  1. OS type and version: Linux (at customer site, version unknown)
  2. Java version: OpenJDK 8
  3. bigquery version(s): google-cloud-bigquery-1.102.0.jar

Steps to reproduce

  1. find a dataset containing Materialized Views
  2. attempt to iterate over the tables in the dataset

Code example

// Example: iterate the tables of a dataset that contains a materialized view.
Page<Table> tables = dataset.list(BigQuery.TableListOption.pageSize(100));
for (Table table : tables.iterateAll()) {
  // ...
}

Stack trace

java.lang.IllegalArgumentException: Format MATERIALIZED_VIEW is not supported
        at com.google.cloud.bigquery.TableDefinition.fromPb(TableDefinition.java:159)
        at com.google.cloud.bigquery.TableInfo$BuilderImpl.<init>(TableInfo.java:188)
        at com.google.cloud.bigquery.Table.fromPb(Table.java:624)
        at com.google.cloud.bigquery.BigQueryImpl$21.apply(BigQueryImpl.java:842)
        at com.google.cloud.bigquery.BigQueryImpl$21.apply(BigQueryImpl.java:839)
        at com.google.common.collect.Iterators$6.transform(Iterators.java:783)
        at com.google.common.collect.TransformedIterator.next(TransformedIterator.java:47)
        at com.google.cloud.PageImpl$PageIterator.computeNext(PageImpl.java:72)
        at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:141)
        at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:136)

Any additional information below

We understand Materialized Views are not in wide release, and are attempting to get them enabled for one of our projects so we can reproduce locally.

BigQuery: after some amount of time/write jobs, all future write jobs fail with socket closed

Environment details

  1. OS type and version: Mac OSX 10.14.6
  2. Java version: OpenJDK 1.8.0_232
  3. bigquery version(s): 1.110.0

Steps to reproduce

  1. Run a program which uploads periodically over a period of time.
  2. Eventually, the upload fails with Socket closed for all uploads after that point.

Code example

private val bigQuery = BigQueryOptions.newBuilder()
        .setCredentials(
                GoogleCredentials.fromStream(credentialConfig)
        )
        .build()
        .service

private val tableId = TableId.of("...", "...")
private val writeConfig =
        WriteChannelConfiguration.newBuilder(tableId)
                .setFormatOptions(FormatOptions.csv())
                .setNullMarker("null")
                .build()

private val bqUploadingContext = newFixedThreadPoolContext(4, "bq-uploading")

// outputCSV is a file path
suspend fun uploadToBigQuery(outputCSV: String): Boolean {
    val writer = bigQuery.writer(writeConfig)
    // Write data to writer
    Channels.newOutputStream(writer).use {
        Files.copy(File(outputCSV), it)
    }
    writer.close()

    // Get load job
    val job = withContext(bqUploadingContext) { writer.job.waitFor() }
    // check the Job statistics, see if there were any errors, print how long the job took...
    return success
}

Stack trace

com.google.cloud.bigquery.BigQueryException: Socket closed
	at com.google.cloud.bigquery.spi.v2.HttpBigQueryRpc.translate(HttpBigQueryRpc.java:106)
	at com.google.cloud.bigquery.spi.v2.HttpBigQueryRpc.open(HttpBigQueryRpc.java:604)
	at com.google.cloud.bigquery.TableDataWriteChannel$2.call(TableDataWriteChannel.java:87)
	at com.google.cloud.bigquery.TableDataWriteChannel$2.call(TableDataWriteChannel.java:82)
	at com.google.api.gax.retrying.DirectRetryingExecutor.submit(DirectRetryingExecutor.java:105)
	at com.google.cloud.RetryHelper.run(RetryHelper.java:76)
	at com.google.cloud.RetryHelper.runWithRetries(RetryHelper.java:50)
	at com.google.cloud.bigquery.TableDataWriteChannel.open(TableDataWriteChannel.java:81)
	at com.google.cloud.bigquery.TableDataWriteChannel.<init>(TableDataWriteChannel.java:41)
	at com.google.cloud.bigquery.BigQueryImpl.writer(BigQueryImpl.java:1249)
	at com.google.cloud.bigquery.BigQueryImpl.writer(BigQueryImpl.java:1240)
	at uploadToBigQuery // on the line: val writer = bigQuery.writer(writeConfig)

Any additional information below

I have a program, run from IntelliJ, which downloads some data, processes it into a CSV, and then uploads it to BigQuery; the source above is simplified to the relevant parts. bigQuery is initialized once at the start of the program and reused for the duration. Despite the context having 4 threads, the uploads currently happen serially. The CSVs being uploaded are ~9 MB and ~40k rows per upload.

The uploading code is mostly inspired by the Java example given here: https://cloud.google.com/bigquery/docs/loading-data-local#loading_data_from_a_local_data_source

However, if I run this program for a while, it eventually crashes with the Socket closed exception described above. If I catch the exception and retry, it fails with the same Socket closed exception. I also tried recreating the bigQuery object by re-instantiating it, but that failed with something to the effect of "Failed to refresh access token: Socket closed". On my latest run, the failure occurred after ~5300 seconds and 231 uploads, roughly evenly spread across the run time. This happens consistently whenever I run the program for an extended period.

Thanks for any help. Let me know if there is an issue with my uploading process that could help fix this issue or if there is any other information needed.

Update:

I ran the program two more times after updating to 1.110.1; both runs lasted ~5000 s and also completed 231 successful uploads before erroring as described above.

BigQuery: expose requirePartitionFilter field in Table.

  • Currently the requirePartitionFilter field is only available on TimePartitioning, but another partitioning type, RangePartitioning, is already implemented, so requirePartitionFilter should be exposed at the top level of the table.
  • The existing requirePartitionFilter field in TimePartitioning will be deprecated.

BigQuery listJobs throws NullPointerException

Environment details

  1. OS type and version: macOS Catalina 10.15.2
  2. Java version: jdk1.8.0_211
  3. google-cloud-java version(s): "com.google.cloud" % "google-cloud-bigquery" % "1.102.0"

Code example

bigquery.listJobs(JobListOption.allUsers()).iterateAll().asScala.foreach(job => println(job.toString))

Stack trace

Exception in thread "main" java.lang.NullPointerException
	at com.google.cloud.bigquery.JobStatistics.fromPb(JobStatistics.java:1183)
	at com.google.cloud.bigquery.JobInfo$BuilderImpl.<init>(JobInfo.java:154)
	at com.google.cloud.bigquery.Job.fromPb(Job.java:485)
	at com.google.cloud.bigquery.BigQueryImpl$32.apply(BigQueryImpl.java:1127)
	at com.google.cloud.bigquery.BigQueryImpl$32.apply(BigQueryImpl.java:1124)
	at com.google.common.collect.Iterators$6.transform(Iterators.java:786)
	at com.google.common.collect.TransformedIterator.next(TransformedIterator.java:47)
	at com.google.cloud.PageImpl$PageIterator.computeNext(PageImpl.java:72)
	at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:141)
	at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:136)
	at scala.collection.convert.Wrappers$JIteratorWrapper.hasNext(Wrappers.scala:39)

BigQuery: listJobs is throwing IAE


Environment details

  1. OS type and version: Ubuntu 19.04
  2. Java version: openjdk version "1.8.0_222"
  3. google-cloud-java version(s): com.google.cloud:google-cloud-bigquery:1.96.0

Steps to reproduce

Happens about 100 times while listing 65,000 jobs on a project.

Code example

Page<Job> page = bq.listJobs(/* ... */);
for (Job job : page.getValues()) { // Throws while a job is converted from its API representation.
}

Stack trace

java.lang.IllegalArgumentException: Provided dataset is null or empty
        at com.google.common.base.Preconditions.checkArgument(Preconditions.java:142)
        at com.google.cloud.bigquery.TableId.<init>(TableId.java:67)
        at com.google.cloud.bigquery.TableId.fromPb(TableId.java:109)
        at com.google.cloud.bigquery.QueryJobConfiguration$Builder.<init>(QueryJobConfiguration.java:178)
        at com.google.cloud.bigquery.QueryJobConfiguration$Builder.<init>(QueryJobConfiguration.java:89)
        at com.google.cloud.bigquery.QueryJobConfiguration.fromPb(QueryJobConfiguration.java:903)
        at com.google.cloud.bigquery.JobConfiguration.fromPb(JobConfiguration.java:128)
        at com.google.cloud.bigquery.JobInfo$BuilderImpl.<init>(JobInfo.java:158)
        at com.google.cloud.bigquery.Job.fromPb(Job.java:485)
        at com.google.cloud.bigquery.BigQueryImpl$32.apply(BigQueryImpl.java:1127)
        at com.google.cloud.bigquery.BigQueryImpl$32.apply(BigQueryImpl.java:1124)
        at com.google.common.collect.Iterators$6.transform(Iterators.java:786)
        at com.google.common.collect.TransformedIterator.next(TransformedIterator.java:47)


Synthesis failed for java-bigquery

Hello! Autosynth couldn't regenerate java-bigquery. 💔

Here's the output from running synth.py:

Cloning into 'working_repo'...
Switched to branch 'autosynth'
Traceback (most recent call last):
  File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/tmpfs/src/git/autosynth/autosynth/synth.py", line 256, in <module>
    main()
  File "/tmpfs/src/git/autosynth/autosynth/synth.py", line 196, in main
    last_synth_commit_hash = get_last_metadata_commit(args.metadata_path)
  File "/tmpfs/src/git/autosynth/autosynth/synth.py", line 149, in get_last_metadata_commit
    text=True,
  File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/subprocess.py", line 403, in run
    with Popen(*popenargs, **kwargs) as process:
TypeError: __init__() got an unexpected keyword argument 'text'

Google internal developers can see the full log here.

Warning when building samples

[WARNING]
[WARNING] Some problems were encountered while building the effective model for com.example.bigquery:bigquery-google-cloud-samples:jar:1.0.11
[WARNING] 'parent.relativePath' of POM com.example.bigquery:bigquery-google-cloud-samples:1.0.11 (/home/elharo/java-bigquery/samples/pom.xml) points at com.google.cloud:google-cloud-bigquery-parent instead of com.google.cloud.samples:shared-configuration, please verify your project structure @ line 26, column 11
[WARNING]
[WARNING] It is highly recommended to fix these problems because they threaten the stability of your build.
[WARNING]
[WARNING] For this reason, future Maven versions might no longer support building such malformed projects.
[WARNING]
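
For what it's worth, this class of warning is usually resolved by pointing the samples POM's parent at the repository rather than the neighbouring directory. A sketch of that change (an assumption about the intended fix, with the version elided):

<!-- samples/pom.xml (sketch) -->
<parent>
  <groupId>com.google.cloud.samples</groupId>
  <artifactId>shared-configuration</artifactId>
  <version>...</version> <!-- keep the version already declared in samples/pom.xml -->
  <!-- An empty relativePath tells Maven to resolve this parent from the repository,
       not from the local ../pom.xml (which is google-cloud-bigquery-parent). -->
  <relativePath/>
</parent>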
