mfcabrera / hooqu

hooqu is a library built on top of Pandas-like Dataframes for defining "unit tests for data". This is a spiritual port of Apache Deequ to Python.

License: Other
Hey Miguel,
Great work, I think this could be very useful for many people.
I have a question:
"Unit test" to me implies that this is part of a CI suite: as a dev, I make a change to ETL code and, before it gets merged, my changes are tested against data using hooqu. But wouldn't it make sense to use this for runtime checks as well? Maybe that's the intent; if so, it didn't come across clearly to me.
I could imagine this being used like this:
```python
verification_suite = VerificationSuite().add_check(
    Check(CheckLevel.ERROR, "Basic Check")
    .has_size(lambda sz: sz == 5)  # we expect 5 rows
    .is_complete("id")             # should never be None/Null
    .is_complete("productName")    # should never be None/Null
    .has_mean("numViews", lambda mean: mean <= 10)
)

@verification_suite.check_input(lambda df, *args, **kwargs: df)
def my_fun(df, foo, bar=123):
    df = ...
    return df
```
The idea would be that at runtime, when `my_fun` is called, the verification suite is run on the input df (the same idea could apply to the output df). Through the `CheckLevel`, you could control whether this raises an error or just produces an error log, for example. I know this would need a bit of redesign of the API, since at the moment it seems that `VerificationSuite` needs a reference to the data to be tested via `add_data` (but I think this isn't necessary and could prove problematic down the line).

This way, it's less of a "unit test" and more of a runtime test for data. It would catch errors that stem not only from changes in the code, but also from changes in the data. Again, maybe that's the intent; if so, you could make it more explicit in the README.
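To make the suggestion concrete, here is a minimal sketch of what such a decorator could look like, written independently of hooqu's actual API (the `check_input` helper, its `selector` argument, and the `basic_check` function are all hypothetical names for this example; a real integration would delegate to a `VerificationSuite` instead of a plain check function):

```python
# Hypothetical sketch: a decorator that validates an input DataFrame at
# call time, either raising or logging depending on `raise_on_error`
# (standing in for what CheckLevel could control).
import functools
import logging

import pandas as pd

def check_input(check_fn, selector=lambda df, *a, **kw: df, raise_on_error=True):
    """Run `check_fn` on the DataFrame picked out by `selector`.

    `check_fn` returns a list of error messages; an empty list means
    the data passed all checks.
    """
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            df = selector(*args, **kwargs)
            errors = check_fn(df)
            if errors:
                if raise_on_error:
                    raise ValueError(f"data checks failed: {errors}")
                logging.error("data checks failed: %s", errors)
            return fn(*args, **kwargs)
        return wrapper
    return decorator

# Example check mirroring the suite above: 5 rows, no nulls in "id".
def basic_check(df):
    errors = []
    if len(df) != 5:
        errors.append(f"expected 5 rows, got {len(df)}")
    if df["id"].isna().any():
        errors.append("column 'id' contains nulls")
    return errors

@check_input(basic_check)
def my_fun(df, foo=None, bar=123):
    return df
```

With this shape, the check configuration is decoupled from the data, so the same suite can be reused across many function calls, which is the point about `add_data` above.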
Some minor comments:
- Typo: "dupliucatees" in the README.

First, I want to say thanks! This project is really amazing; please don't stop updating it. It would be nice if the result of the testing, I mean the result of the run() method, had an option to return the result as a pandas DataFrame. Something like:
```python
verification_suite.on_data(df_toy).add_checks(list_checks).run(as_dataframe=True)
```
I have seen that AWS Deequ has this option; it would be nice to implement it in this repo because we could then save the results to a CSV file or a database table more easily.
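As a sketch of what that could look like: assuming the run result can be flattened into one record per constraint (the `result_as_dataframe` helper and the dict keys below are assumptions for illustration, loosely modeled on Deequ's `checkResultsAsDataFrame`), the conversion is straightforward:

```python
# Hypothetical sketch: flatten per-constraint check results into a pandas
# DataFrame so they can be written to CSV or a database table.
import pandas as pd

def result_as_dataframe(check_results):
    """`check_results` is assumed to be a list of dicts, one per constraint."""
    rows = [
        {
            "check": r["check"],
            "check_level": r["level"],
            "constraint": r["constraint"],
            "status": r["status"],
            "message": r.get("message", ""),
        }
        for r in check_results
    ]
    return pd.DataFrame(rows)

# Example with made-up results for the "Basic Check" suite:
results = [
    {"check": "Basic Check", "level": "ERROR",
     "constraint": "SizeConstraint(size == 5)", "status": "Success"},
    {"check": "Basic Check", "level": "ERROR",
     "constraint": "CompletenessConstraint(id)", "status": "Failure",
     "message": "column 'id' contains nulls"},
]
df_results = result_as_dataframe(results)
```

From there, `df_results.to_csv(...)` or `df_results.to_sql(...)` covers both the CSV and database use cases.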