

PyArrowExpressionCastToolkit

A small cast toolkit class derived from _ParquetDatasetV2 to support cast in the filters argument


View Code with Demo

Table of Contents

About The Project

PyArrow 2.0.0 introduces Expression into its dataset API. Expressions can be used to configure the filters argument of ParquetDataset and perform pre-filtering at the partition level before reading. The actual expression construction for ParquetDataset can be found in filters_to_expression in parquet.py; it uses field (an Expression) as the formal representation. Expression has a cast method, so a straightforward inference about what should be supported is that one can compare a field with an object by casting the field to the object's type. For example, we can compare with a pandas datetime:

                ("time_two_pos", ">", pd.to_datetime("1970-01-01 00:24:01.200000001"))

by cast "time_two_pos" field into timestamp, when use time_two_pos as partition key. But it seems not work in the 2.0.0 version.

This project overrides the expression construction process of the _ParquetDatasetV2 class. It applies a type-level transformation from partition strings to cast objects, compares each cast object with the corresponding filter value in pandas space, filters the matching cast objects, and transforms them back into partition expressions (reduced by boolean combination) to build a final_expression. That expression is then applied to the _ParquetDatasetV2 filters argument to configure it correctly.
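The core idea can be sketched directly with the dataset API (a minimal sketch, assuming PyArrow >= 2.0.0 and using the partition key from the Usage section below; variable names are illustrative):

import pandas as pd
import pyarrow as pa
import pyarrow.dataset as ds

### cast the string-typed partition field to a timestamp, then compare in that type
cast_field = ds.field("time_two_pos").cast(pa.timestamp("ms"))
final_expression = cast_field > pd.to_datetime("1970-01-01 00:24:01.200000001")
### this is the kind of expression the toolkit assembles and applies as the dataset filter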

Built With

Getting Started

Installation

  • pip
pip install pyarrow==2.0.0
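Because the behaviour described above is version specific, it can be worth confirming which version is actually imported (a minimal check, not part of the toolkit):

import pyarrow
print(pyarrow.__version__)   ### expected: 2.0.0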

Usage

  1. Download the example data from Kaggle: https://www.kaggle.com/retailrocket/ecommerce-dataset?select=events.csv
import os
import numpy as np
import pandas as pd
import pyarrow as pa
import pyarrow.parquet as pq
### _ParquetDatasetV2PartitionCastWellFormatted is the class defined by this repository;
### import it from wherever the project code is placed

events_df = pd.read_csv("/home/svjack/temp_dir/events.csv")

events_df["time"] = pd.to_datetime(events_df["timestamp"]).astype(str)
events_df["time_two_pos"] = events_df["time"].map(lambda x: str(x)[:str(x).find(".") + 2] + "0" * (28 - len(str(x)[:str(x).find(".") + 2])) + "1")

event_table = pa.Table.from_pandas(events_df)

write_path = os.path.join("/home/svjack/temp_dir" ,"event_log")
### write the table to a local partitioned dataset
pq.write_to_dataset(event_table, write_path, partition_cols=["event", "time_two_pos"])
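### With partition_cols=["event", "time_two_pos"], write_to_dataset produces a Hive-style
### directory layout, roughly like the following (names are illustrative, not exact):
###   event_log/
###     event=addtocart/
###       time_two_pos=1970-01-01 00:24:01.200000001/
###         <part file>.parquet
###     event=transaction/
###       ...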

### read by condition
filters_objects = [[("event", "in", ["transaction", "addtocart"]), ("time_two_pos", ">", pd.to_datetime("1970-01-01 00:24:01.200000001"))]]
filters_types = [[("event", "str"), ("time_two_pos", "timestamp[ms]")]]

p_path = write_path

Timestamp = pd.Timestamp
exp_cast_ds = _ParquetDatasetV2PartitionCastWellFormatted(
        p_path,
        filters_objects = filters_objects,
        filters_types = filters_types)

all_filtered_df_in_upper = exp_cast_ds.read().to_pandas()

assert np.all(pd.to_datetime(all_filtered_df_in_upper["time_two_pos"]) > pd.to_datetime("1970-01-01 00:24:01.200000001"))

filters_objects = [[("event", "in", ["transaction", "addtocart"]), ("time_two_pos", "<=", pd.to_datetime("1970-01-01 00:24:01.200000001"))]]
filters_types = [[("event", "str"), ("time_two_pos", "timestamp[ms]")]]

Timestamp = pd.Timestamp
exp_cast_ds = _ParquetDatasetV2PartitionCastWellFormatted(
        p_path,
        filters_objects = filters_objects,
        filters_types = filters_types)

all_filtered_df_in_lower = exp_cast_ds.read().to_pandas()

assert np.all(pd.to_datetime(all_filtered_df_in_lower["time_two_pos"]) <= pd.to_datetime("1970-01-01 00:24:01.200000001"))

filters_objects = [[("event", "in", ["transaction", "addtocart"]),]]
filters_types = [[("event", "str"),]]


exp_cast_ds = _ParquetDatasetV2PartitionCastWellFormatted(
        p_path,
        filters_objects = filters_objects,
        filters_types = filters_types)

all_filtered_df_in = exp_cast_ds.read().to_pandas()

assert all_filtered_df_in.shape[0] == all_filtered_df_in_lower.shape[0] + all_filtered_df_in_upper.shape[0]

"filters_objects" can be seen as original "filters" in PyArrow, "filters_types" is the cast type that can retrieve from type_aliases in PyArrow.

Code with Demo

Roadmap

Contributing

License

Distributed under the MIT License. See LICENSE for more information.

Contact

svjack - [email protected]

Project Link: https://github.com/svjack/PyArrowExpressionCastToolkit

Acknowledgements
