Comments (4)
Hello Sašo,
yeah, in ODBC there is a distinction between no result set and an empty result set. Typically you would only get no result set if the type of SQL statement does not generate results (e.g. INSERT). Ultimately, driver implementations differ here.
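The same distinction shows up in Python's DB-API, where a statement that produces no result set leaves `cursor.description` as `None`, while an empty result set still carries column metadata. A minimal sketch using the stdlib `sqlite3` module (pyodbc against an ODBC driver behaves analogously, driver quirks aside):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t (id INTEGER, name TEXT)")

# INSERT produces *no* result set: there is no column metadata at all.
cur.execute("INSERT INTO t VALUES (1, 'a')")
print(cur.description)  # None

# A SELECT that matches nothing produces an *empty* result set:
# column metadata is present, but there are zero rows.
cur.execute("SELECT id, name FROM t WHERE id = -1")
print(cur.description is not None)  # True
print(cur.fetchall())  # []
```

This is why a tool consuming query results has to decide explicitly what to do with "schema but no rows".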
> Is there a way to prevent the output file from being generated if the returned number of rows from a query is 0?
Currently there is not. Seems reasonable to me, though. I wonder if this should even be the default. Could you also help me to understand the context better? Do you use odbc2parquet as part of an automated data pipeline? Why are empty files bad for you?
Best regards,
Markus
from odbc2parquet.
odbc2parquet 0.15.0 has been released, featuring the `--no-empty-file` flag.
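An invocation might look like the following. This is a sketch only: the connection string, output file, and query are placeholders, and the exact argument order should be confirmed with `odbc2parquet query --help`.

```shell
# Export query results to parquet; with --no-empty-file, no output
# file is written when the query returns zero rows.
# Connection string, file name, and query are placeholders.
odbc2parquet query \
  --connection-string "Driver={ODBC Driver 17 for SQL Server};Server=myserver;Database=mydb;Trusted_Connection=yes;" \
  --no-empty-file \
  delta_orders.parquet \
  "SELECT * FROM dbo.orders WHERE modified_at > ?"
```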
Hi Markus,
> Currently there is not. Seems reasonable to me, though. I wonder if this should even be the default. Could you also help me to understand the context better? Do you use odbc2parquet as part of an automated data pipeline? Why are empty files bad for you?
I plan to use odbc2parquet in a data pipeline export process. The data from an on-premise ERP system will be exported to parquet files, uploaded to a data lake, and imported into a cloud data warehousing solution. We are using the "change data capture" functionality of SQL Server to export new/modified data at quite a high frequency (every 30 minutes). CDC creates specific functions/views for each table that return only the delta records - in case of no changes in the table, an empty result set is returned (schema, but no rows).
Every 30 minutes we get new data in around 50 tables - other tables change infrequently. The total number of tables that we export is around 300. So, at the moment, ca. 250 empty files and 50 files with data are generated every 30 minutes. Basically, the solution works, but it's not very practical to get all those empty files (unnecessary uploads, unneeded processing, etc.). I could do an additional check before upload and delete the empty files - but that's an additional step that I'd like to avoid.
So, I've seen you've already released a new version with the additional option flag added. Amazing! Thank you very much!
Have a nice Christmas!
Regards,
Sašo
Hello Sašo,
thanks for your detailed description of the use case. This truly helps to make odbc2parquet better. Something that is sometimes helpful when running odbc2parquet in a pipeline is that it can also stream its output to stdout. This allows parts of your data pipeline to be expressed with a pipe in the console. It does not sound like you would have use for it right now, but things always change in our line of work.
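Streaming to stdout can turn the export-then-upload sequence into a single pipeline stage. A sketch, assuming odbc2parquet accepts `-` as the output name for stdout (confirm with `--help`) and that an `aws` CLI is available; the bucket, query, and `CONNECTION_STRING` variable are placeholders.

```shell
# Stream the parquet output straight into object storage,
# without a temporary file on disk. All names are placeholders.
odbc2parquet query \
  --connection-string "$CONNECTION_STRING" \
  - \
  "SELECT * FROM dbo.orders" \
  | aws s3 cp - s3://my-data-lake/exports/orders.parquet
```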
> I could do an additional check before upload and delete the empty files - but that's an additional step that I'd like to avoid.
I guess that step exists now anyway, just within odbc2parquet.
> So, I've seen you've already released a new version with the additional option flag added. Amazing! Thank you very much!
You are welcome.
Have a nice Christmas!
And a nice Christmas to you, too!
Regards,
Markus