Comments (4)
Replacing a table as a single operation would replace all metadata.
Does this have to be true?
If CTAS C at t0 (C_t0) is a deterministic operation against a snapshot of a dataset D at t0 (D_t0), then isn't it possible to preserve metadata whenever the materialization program P_C (e.g., a version of a SQL/Pig query or Spark program) does not change, propagating only the changes from any upstream dataset?
For example, if D_t1's metadata differs in a way that doesn't change the data (e.g., a column is renamed), C_t1 can be considered to use the same materialization program as C_t0, specifically because its transformations reference column IDs rather than names.
In other words, if
PC_t0 = PC_t1
holds, then the table schema metadata for the Iceberg table representing C can be preserved.
In addition, I think it is at minimum plausible that
PC_t0 = PC_t1
holds even in the case of certain data-affecting metadata changes, for example a type promotion from short to long integer (but not a type demotion, which might make PC_t0 unsafe to apply to D_t1).
Seemingly, the only time the metadata could not be preserved is when the relational reference (the equivalent of a FROM clause) changes; that is, metadata would be replaced only when the data lineage cannot be considered equivalent from one version of the materialization program to the next.
As for what I'm getting at: while metadata replacement is a sensible default for CTAS, I think it would be useful to leave room for CTAS to be treated as a materialized view, and therefore for metadata replacement to be one of several reasonable behaviors for table-replacing operations.
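The preservation rule argued above could be sketched roughly as follows. This is a hypothetical illustration: the function, the change categories, and their classification as safe or unsafe are my assumptions for this comment, not anything Iceberg implements.

```python
# Hypothetical sketch of the rule above: preserve the derived table's
# metadata when the materialization program is unchanged and any upstream
# schema change is non-breaking. All names here are illustrative
# assumptions, not Iceberg APIs.

# Changes that do not invalidate the program, because transformations
# reference column IDs rather than names, and type promotions widen
# rather than narrow.
SAFE_CHANGES = {"rename_column", "promote_short_to_long"}

# Changes that may make the old program unsafe to re-apply to D_t1.
UNSAFE_CHANGES = {"demote_long_to_short", "drop_column", "change_from_clause"}

def can_preserve_metadata(program_t0: str, program_t1: str,
                          upstream_changes: set) -> bool:
    """Return True if C_t1 may keep C_t0's table metadata."""
    if program_t0 != program_t1:
        return False  # PC_t0 != PC_t1: lineage not equivalent
    return not (upstream_changes & UNSAFE_CHANGES)

# An upstream column rename would not force metadata replacement:
print(can_preserve_metadata("SELECT a, b FROM d", "SELECT a, b FROM d",
                            {"rename_column"}))        # True
# A type demotion would:
print(can_preserve_metadata("SELECT a, b FROM d", "SELECT a, b FROM d",
                            {"demote_long_to_short"})) # False
```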
from iceberg.
@mike-weinberg: no, it doesn't have to be true. That's why there are multiple operations: insert (does not change table metadata), replace/overwrite data (overwrite some or all data, then insert, as a single operation), CTAS (create a table and insert in one operation), and RTAS (drop, create, and insert in one operation).
Those cover different use cases. If you want to preserve existing metadata, you should insert or replace data. You would only replace a table if you specifically do not care about compatibility with the previous schema. For example, if you create a report table every day and you want to update the query that produces it, you probably don't want to make DDL changes and then run a job that replaces the contents. You just want to replace whatever is there with a new version, using the new version's schema.
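The split between these operations can be shown with a toy model. This is a deliberate simplification of the semantics described above, not Iceberg's actual API; the class and function names are made up for illustration.

```python
# Toy model of how the four operations treat metadata vs. data.
# Purely illustrative; not Iceberg's implementation.

class Table:
    def __init__(self, schema, rows=None):
        self.schema = schema          # table metadata
        self.rows = list(rows or [])  # table data

    def insert(self, rows):
        # insert: append data; metadata untouched
        self.rows.extend(rows)

    def overwrite(self, rows):
        # replace/overwrite data: swap data; metadata untouched
        self.rows = list(rows)

def rtas(schema, rows):
    # RTAS: drop, create, and insert in one operation;
    # all previous metadata AND data are replaced
    return Table(schema, rows)

t = Table(schema=["id: int", "name: string"], rows=[(1, "a")])
t.insert([(2, "b")])
assert t.schema == ["id: int", "name: string"]   # metadata preserved
t.overwrite([(3, "c")])
assert t.schema == ["id: int", "name: string"]   # still preserved
t = rtas(["id: long"], [(4,)])
assert t.schema == ["id: long"]                  # metadata replaced
```

CTAS behaves like `rtas` without a pre-existing table: the schema comes entirely from the new query, which is why metadata replacement is the default.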
For more context, check out the logical plans we are proposing for Spark in the Standardize Logical Plans SPIP.
@rdblue thanks for the added context. Metadata history and lineage have been on my mind for a while, so I guess my thinking and preferences are biased, but I see what you mean.
This was added in d2bf002.