Comments (5)
Hi,
I've just run into exactly the same problem (my job was hanging on the export). I found this issue, and as a workaround I tried to explicitly disable sharded export by doing:
hadoopConf.setBoolean(BigQueryConfiguration.ENABLE_SHARDED_EXPORT_KEY, false);
and to my surprise I got an error like this:
java.io.IOException: Cannot read and write in different locations: source: EU, destination: US
at com.google.cloud.hadoop.io.bigquery.BigQueryUtils.waitForJobCompletion(BigQueryUtils.java:97)
at com.google.cloud.hadoop.io.bigquery.UnshardedExportToCloudStorage.waitForUsableMapReduceInput(UnshardedExportToCloudStorage.java:87)
So what I did was fix my inputs and outputs to be in the same location, and then everything worked fine with sharded export enabled; no workaround needed.
This leads me to the conclusion that this behaviour only occurs with cross-location exports. It is obviously a bug in the connector, as it should throw an exception instead of hanging.
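In case it helps others, here is a minimal sketch of what "same location" meant in my setup. The project, dataset, and bucket names are placeholders, and I'm assuming the usual configuration keys exposed by BigQueryConfiguration in this connector:
import org.apache.hadoop.conf.Configuration;
import com.google.cloud.hadoop.io.bigquery.BigQueryConfiguration;

Configuration hadoopConf = new Configuration();
// Placeholder project/dataset/bucket names. The key point: the BigQuery
// dataset being read and the GCS path used for the export staging files
// must be in the same location (here, both in EU).
hadoopConf.set(BigQueryConfiguration.PROJECT_ID_KEY, "my-project");
hadoopConf.set(BigQueryConfiguration.INPUT_PROJECT_ID_KEY, "my-project");
hadoopConf.set(BigQueryConfiguration.INPUT_DATASET_ID_KEY, "my_eu_dataset");
hadoopConf.set(BigQueryConfiguration.INPUT_TABLE_ID_KEY, "my_table");
hadoopConf.set(BigQueryConfiguration.TEMP_GCS_PATH_KEY, "gs://my-eu-bucket/hadoop/tmp");
// With source and destination co-located, sharded export works as intended.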
from hadoop-connectors.
@rtshadow - thank you for bringing up that case. I've opened an internal bug to track it (have the DynamicRecordReader monitor export status).
from hadoop-connectors.
Any update on this issue? I am experiencing similar behavior resulting in the DynamicFileListRecordReader instance hanging indefinitely.
If I try to export an unpartitioned table with a few records and BigQueryConfiguration.ENABLE_SHARDED_EXPORT_KEY is set to true, the 0-record file is never created and the record reader continues to wait for another resource.
Moreover, the DynamicFileListRecordReader exhibits the same behavior when exporting data from partitioned tables, even when using wildcards in the URI. So, AFAICT, this is a bug and it does not behave as advertised.
10:26:47.403 [LocalJobRunner Map Task Executor #0] INFO c.g.c.h.i.b.DynamicFileListRecordReader - Initializing DynamicFileListRecordReader with split 'InputSplit:: length:5 locations: [] toString(): gs://<elided>/shard-0/data-*.json[5 estimated records]', task context 'TaskAttemptContext:: TaskAttemptID:attempt_local949659226_0001_m_000000_0 Status:'
The only way I can successfully use the BigQuery input format is if BigQueryConfiguration.ENABLE_SHARDED_EXPORT_KEY is set to false, which is undesirable.
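For reference, a minimal sketch of the job setup that reproduces the hang for me. The project, dataset, and bucket names are placeholders; I'm using the connector's GsonBigQueryInputFormat and its configureBigQueryInput helper:
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;
import com.google.cloud.hadoop.io.bigquery.BigQueryConfiguration;
import com.google.cloud.hadoop.io.bigquery.GsonBigQueryInputFormat;

Configuration conf = new Configuration();
// Placeholder table reference and staging path.
BigQueryConfiguration.configureBigQueryInput(conf, "my-project:my_dataset.small_table");
conf.set(BigQueryConfiguration.TEMP_GCS_PATH_KEY, "gs://my-bucket/bq-tmp");
// Sharded export enabled: the reader polls for a 0-record marker file
// that never appears for this table, so the map task waits indefinitely.
conf.setBoolean(BigQueryConfiguration.ENABLE_SHARDED_EXPORT_KEY, true);

Job job = Job.getInstance(conf, "bq-hang-repro");
job.setInputFormatClass(GsonBigQueryInputFormat.class);
// Mapper/output setup elided. With the flag above, the task hangs in
// DynamicFileListRecordReader; with it set to false, the job completes.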
from hadoop-connectors.
Hi, I use the spotify spark-bigquery connector (https://github.com/spotify/spark-bigquery) in version 0.2.2-s_2.11, which uses bigdata-interop in version 0.10.2-hadoop2. First I load metadata from dataset.__TABLES__ and then the actual data. For a few days now I have been facing the same problem mentioned here: the job hangs in DynamicFileListRecordReader when I want to load the (pretty small) metadata table from BigQuery. Loading the actual "data" tables works as before.
Since I haven't changed any library versions or code in my application, it seems that something has changed in BigQuery or Cloud Storage itself?
from hadoop-connectors.
I think the issue is that BigQuery does not create the 0-record file when using a single wildcard, which is why there is MIN_SHARDS_FOR_SHARDED_EXPORT = 2.
However, mapreduce.job.maps=1 (as in local mode) used to overrule that before ea2bb01.
Closing, since the minimum should now be enforced and BigQuery should always create the 0-byte files.
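To make the fix concrete, this is roughly the clamping behaviour described above. This is a hypothetical sketch, not the connector's actual code; the implementation in ea2bb01 may differ in names and details:
// Hypothetical sketch of the shard-count clamping described above.
static final int MIN_SHARDS_FOR_SHARDED_EXPORT = 2;

static int computeShardCount(org.apache.hadoop.conf.Configuration conf) {
  int desiredMaps = conf.getInt("mapreduce.job.maps", MIN_SHARDS_FOR_SHARDED_EXPORT);
  // Before ea2bb01, mapreduce.job.maps=1 (e.g. local mode) could drive the
  // shard count down to 1, producing a single wildcard URI for which
  // BigQuery never writes the 0-record end-marker file, so the
  // DynamicFileListRecordReader waited forever. Enforcing the minimum
  // keeps at least two shards, each of which gets its own marker file.
  return Math.max(desiredMaps, MIN_SHARDS_FOR_SHARDED_EXPORT);
}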
from hadoop-connectors.