trane-dev / trane

An open source python library for automated prediction engineering

Home Page: https://www.trane.dev

License: MIT License

Python 99.04% Makefile 0.96%
automated-machine-learning automl data-science machine-learning prediction-engineering python automated-prediction-engineering auto-labeling autolabeling

trane's Introduction

Trane Logo



Trane is a software package that automatically generates problems for temporal datasets and produces labels for supervised learning. Its goal is to streamline the machine learning problem-solving process.

Install

Install Trane using pip:

python -m pip install trane
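To verify the installation, you can import the package (this assumes Trane exposes a __version__ attribute, as most PyPI packages do):

import trane
print(trane.__version__)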

Usage

Here's a quick demonstration of Trane in action:

import trane

data, metadata = trane.load_airbnb()
problem_generator = trane.ProblemGenerator(
    metadata=metadata,
    entity_columns=["location"],
)
problems = problem_generator.generate()

for problem in problems[:5]:
    print(problem)

A few of the generated problems:

==================================================
Generated 40 total problems
--------------------------------------------------
Classification problems: 5
Regression problems: 35
==================================================
For each <location> predict if there exists a record
For each <location> predict if there exists a record with <location> equal to <str>
For each <location> predict if there exists a record with <location> not equal to <str>
For each <location> predict if there exists a record with <rating> equal to <str>
For each <location> predict if there exists a record with <rating> not equal to <str>

With Trane's LLM add-on (pip install "trane[llm]"), we can determine the relevant problems with OpenAI:

from trane.llm import analyze

instructions = "determine 5 most relevant problems about user's booking preferences. Do not include 'predict the first/last X' problems"
context = "Airbnb data listings in major cities, including information about hosts, pricing, location, and room type, along with over 5 million historical reviews."
relevant_problems = analyze(
    problems=problems,
    instructions=instructions,
    context=context,
    model="gpt-3.5-turbo-16k"
)
for problem in relevant_problems:
    print(problem)
    print(f'Reasoning: {problem.get_reasoning()}\n')

Output

For each <location> predict if there exists a record
Reasoning: This problem can help identify locations with missing data or locations that have not been booked at all.

For each <location> predict the first <location> in all related records
Reasoning: Predicting the first location in all related records can provide insights into the most frequently booked locations for each city.

For each <location> predict the first <rating> in all related records
Reasoning: Predicting the first rating in all related records can provide insights into the average satisfaction level of guests for each location.

For each <location> predict the last <location> in all related records
Reasoning: Predicting the last location in all related records can provide insights into the most recent bookings for each city.

For each <location> predict the last <rating> in all related records
Reasoning: Predicting the last rating in all related records can provide insights into the recent satisfaction level of guests for each location.

Community

Cite Trane

If you find Trane beneficial, consider citing our paper:

Ben Schreck, Kalyan Veeramachaneni. What Would a Data Scientist Ask? Automatically Formulating and Solving Predictive Problems. IEEE DSAA 2016, 440-451.

BibTeX entry:

@inproceedings{schreck2016would,
  title={What Would a Data Scientist Ask? Automatically Formulating and Solving Predictive Problems},
  author={Schreck, Benjamin and Veeramachaneni, Kalyan},
  booktitle={Data Science and Advanced Analytics (DSAA), 2016 IEEE International Conference on},
  pages={440--451},
  year={2016},
  organization={IEEE}
}

trane's People

Contributors

dailab-bot, frances-h, gsheni, kveerama, leix28, patrikdurdevic, rogertangos, sarapido, trane-bot


trane's Issues

Experiment Branch Improvement

  • Fix all docstrings.
    • Evaluator and FeatureToolsWrapper
    • Give a clear explanation about
  • Fix Load and Store for prediction problems
    • It used to store executable code for the cutoff time; now the only option is a repeating cutoff time. We should add a more structured way to store the cutoff time in the JSON file.
  • Improve the Featuretools wrapper.
    • Currently, it only supports one table (the event-driven table used for problem generation); we should add APIs to import other tables in the database to assist prediction.
  • Add a new walkthrough using event-driven data.
  • Add unit testing.
  • After reviewing the code, merge it into dev.

Add ability to add custom operations (external plugin operations)

  • We need an easier way to add custom operations. Currently, external plugin operations are not supported. The bottleneck is that we need to maintain a list of operations so that we can save, load, and iterate over them, and it's not easy to add an external operation to that list (a rough registry sketch follows below).
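A purely hypothetical sketch of what such a registry could look like; the decorator, the registry dict, and MedianAggregationOp are illustrative names, not part of Trane's current API:

OPERATION_REGISTRY = {}

def register_operation(cls):
    # Record an externally defined operation so it can be saved, loaded,
    # and iterated over alongside the built-in operations.
    OPERATION_REGISTRY[cls.__name__] = cls
    return cls

@register_operation
class MedianAggregationOp:
    def __init__(self, column_name):
        self.column_name = column_name

    def label_function(self, df):
        return df[self.column_name].median()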

Generate prediction problems without Entity_ID

Add functionality to generate problems for datasets without specifying an entity_id. For example, in the Instacart dataset, some of the prediction problems could be:

Predict the number of <order_id> in the next 2w

operations = [AllFilterOp(None), CountAggregationOp(None)]

or

Predict the number of <order_id> where average <product_price> is greater than 100 in the next 2w

operations = [GreaterFilterOp("product_price"), AvgAggregationOp("product_price"), CountAggregationOp(None)]
operations[0].set_parameters(threshold=100)  # set the threshold on the filter op in the list

Name for operation value settings

We need to come up with an accurate and understandable name for the values that operations use during execution. For instance, a greater-than row operation requires a value to act as a threshold. We want a good name for describing the values these operations need. Currently, we call them hyperparameter settings, but that's not a great term because "hyperparameter" already means a parameter of a prior distribution.

Filter op sequence

After adding #140, once an order-by is applied, only the first and last operations make sense. Add restrictions on which ops can precede/follow which, so that we don't create redundant problems.

Example of duplicate problems we are generating

  • Predict the average house price sorted by sqft where location = NYC
  • Predict the average house price where location = NYC

PredictionProblemGenerator make window_size optional

For the turbofan degradation example, the prediction problem should be:

  • For each <unit_number> predict the number of records

With the current Trane, we have to specify window_size, so we can only get prediction problems like:

  • For each <unit_number> predict the number of records in next <window_size> days
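A hypothetical sketch of the requested behaviour; the keyword arguments are borrowed from the README's ProblemGenerator example, and the real PredictionProblemGenerator signature may differ:

problem_generator = trane.PredictionProblemGenerator(
    metadata=metadata,
    entity_columns=["unit_number"],
    window_size=None,  # hypothetical: None would mean "use all records", yielding
                       # "For each <unit_number> predict the number of records"
)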

Add having in prediction problem generation

In the store dataset, the target entity is orderlines.

Currently, we can generate a problem like

  • Predict the number of records with <products.price> greater than 22.99 in next 1m days
  • SELECT count(*) from orderlines inner join products where products.price > 22.99

This basically predicts how many individual products with a price > 22.99 will be ordered in a month.

With a having operation, we would be able to generate problems like

  • Predict the number of records having average <products.price> greater than 22.99 in the next 1m days
  • SELECT count(orderlines.orderid) from orderlines inner join products group by orderlines.orderid having avg(products.price) > 22.99

This predicts how many orders there will be in a month such that the average product price within each order is > 22.99.

Lazy Load DF rows in entity_to_data and entity_to_cutoff dicts

trane.utils.df_group_by_entity_id returns a dictionary where the entity id is the key, and the value is a dataframe containing only the rows for that entity id.

This dict is then used by trane.utils.CutoffTimeBase.generate_cutoffs to create a similar dictionary with the entity id as the key and a tuple as the value: (DF_OF_ROWS, entity_training_cutoff, entity_label_cutoff).

These methods are currently the most significant bottlenecks in running Trane. A significant portion of the slowdown comes from copying the DF_OF_ROWS into the dictionary. There's no reason for this. Trane should instead index the dataframe by entity_id and use the key to access the dataframe. Then the cutoff_dict can be a dictionary in the format {entity_id_1: (entity_training_cutoff, entity_label_cutoff), entity_id_2: ...} and won't have to hold a dataframe.

The trane.utils.df_group_by_entity_id method could then be eliminated entirely.

This will require searching through the code to refactor everything making use of the entity_to_data dict and entity_to_data_cutoff_dict. Then, there will need to be some mechanism for making sure that the DF is indexed by entity_id.
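A rough sketch of the proposed refactor; the column names and cutoff values below are illustrative, not taken from the codebase:

import pandas as pd

df = pd.DataFrame({
    "entity_id": [1, 1, 2],
    "value": [10, 20, 30],
})

# Index the dataframe by entity_id once instead of copying rows per entity.
df = df.set_index("entity_id").sort_index()

# The cutoff dict then only holds cutoff times, not dataframes.
cutoff_dict = {
    1: (pd.Timestamp("2016-01-01"), pd.Timestamp("2016-02-01")),
    2: (pd.Timestamp("2016-01-01"), pd.Timestamp("2016-02-01")),
}

# Rows for one entity are fetched lazily through the index when needed.
rows_for_entity_1 = df.loc[[1]]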

Independent NL system

Right now, Trane's natural language system is part of PredictionProblem.__str__. It may be better to have it live entirely outside of Trane and to use descriptions of the operations.

Add prediction problem generation with aggregation and filter within first aggregation

Example problem:

For each <user_id> predict the number of <order_id> having average <product_price> greater than 100 in the next 2w

The idea is that, for each user_id, we want to predict how many orders there are with minimum product price > 100. (e.g. if user_id=1 has 10 orders, but 6 out of these 10 orders have products cheaper than $100, then we want to predict 4)
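By analogy with the operation lists in the entity-free issue above, a hypothetical operation sequence for this problem might look like the following; whether Trane can compose a per-order aggregation inside the outer count is exactly what this issue asks for, so this is a sketch, not working code:

operations = [GreaterFilterOp("product_price"), AvgAggregationOp("product_price"), CountAggregationOp(None)]
operations[0].set_parameters(threshold=100)  # hypothetical threshold for the per-order "having" step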

csv_to_df fails to merge multiple tables.

reduce((lambda left_frame, right_frame: pd.merge(left_frame, right_frame, how = 'outer')), dataframes)

requires the input filenames to be in a specific order to produce the correct result.
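A minimal, self-contained illustration of the order dependence (the frames below are made up): pd.merge with no explicit on= joins on whichever columns the two frames happen to share, so reordering the inputs can change the join keys or fail outright when adjacent frames share no columns.

from functools import reduce
import pandas as pd

orders = pd.DataFrame({"order_id": [1, 2], "user_id": [10, 10]})
users = pd.DataFrame({"user_id": [10], "city": ["NYC"]})
products = pd.DataFrame({"order_id": [1, 2], "price": [5.0, 7.5]})

# Works: each successive pair of frames shares a key column.
merged = reduce(lambda left, right: pd.merge(left, right, how="outer"), [orders, users, products])

# Fails: users and products share no columns, so pandas raises MergeError.
try:
    reduce(lambda left, right: pd.merge(left, right, how="outer"), [users, products, orders])
except pd.errors.MergeError as exc:
    print("merge failed:", exc)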

Wildly different computation times for prediction problems

I have two sets of data, one is derived from the other. Both datasets have the same number of rows and columns, and are roughly the same filesize. Generating prediction problems on set A takes about a minute. Set B takes > 5 hours.

Why are there wildly different generation times? Any clues?

Set A:

PatientId AppointmentID Gender ScheduledDay AppointmentDay Age Neighbourhood Scholarship Hipertension Diabetes Alcoholism Handcap SMS_received No-show
29872499824296.0 5642903 F 2016-04-29 18:38:08+00:00 2016-04-29 00:00:00+00:00 62 JARDIM DA PENHA 0 1 0 0 0 0 No
558997776694438.0 5642503 M 2016-04-29 16:08:27+00:00 2016-04-29 00:00:00+00:00 56 JARDIM DA PENHA 0 0 0 0 0 0 No

metadata:

{"tables":[
    {"fields":[
        {"name": "PatientId", "type": "text"},
        {"name": "AppointmentID", "type": "text"},
        {"name": "Female", "type": "text"},
        {"name": "ScheduledDay", "type": "datetime"},
        {"name": "AppointmentDay", "type": "datetime"},
        {"name": "Age", "type": "number", "subtype": "integer"},
        {"name": "Neighbourhood", "type": "categorical", "subtype": "categorical"},
        {"name": "Scholarship", "type": "categorical", "subtype": "boolean"},
        {"name": "Hipertension", "type": "categorical", "subtype": "boolean"},
        {"name": "Diabetes", "type": "categorical", "subtype": "boolean"},
        {"name": "Alcoholism", "type": "categorical", "subtype": "boolean"},
        {"name": "Handcap", "type": "categorical", "subtype": "boolean"},
        {"name": "SMS_received", "type": "categorical", "subtype": "boolean"},
        {"name": "No-show", "type": "text"}
    ]
    }
]}

Set B:

PatientId AppointmentID Female ScheduledDay AppointmentDay Age Neighbourhood Scholarship Hipertension Diabetes Alcoholism Handcap SMS_received No-show
29872499824296.0 5642903 1 2016-04-29 18:38:08+00:00 2016-04-29 00:00:00+00:00 62 JARDIM DA PENHA 0 1 0 0 0 0 0.0
558997776694438.0 5642503 0 2016-04-29 16:08:27+00:00 2016-04-29 00:00:00+00:00 56 JARDIM DA PENHA 0 0 0 0 0 0 0.0

metadata:

{"tables":[
    {"fields":[
        {"name": "PatientId", "type": "text"},
        {"name": "AppointmentID", "type": "text"},
        {"name": "Gender", "type": "categorical", "subtype": "boolean"},
        {"name": "ScheduledDay", "type": "datetime"},
        {"name": "AppointmentDay", "type": "datetime"},
        {"name": "Age", "type": "number", "subtype": "integer"},
        {"name": "Neighbourhood", "type": "categorical", "subtype": "categorical"},
        {"name": "Scholarship", "type": "categorical", "subtype": "boolean"},
        {"name": "Hipertension", "type": "categorical", "subtype": "boolean"},
        {"name": "Diabetes", "type": "categorical", "subtype": "boolean"},
        {"name": "Alcoholism", "type": "categorical", "subtype": "boolean"},
        {"name": "Handcap", "type": "categorical", "subtype": "boolean"},
        {"name": "SMS_received", "type": "categorical", "subtype": "boolean"},
        {"name": "No-show", "type": "number", "subtype": "float"}
    ]
    }
]}

Add code coverage analysis from Codecov

      - name: Upload coverage to Codecov
        uses: codecov/codecov-action@v3
        with:
          token: ${{ secrets.CODECOV_TOKEN }}
          fail_ci_if_error: true
          files: ${{ github.workspace }}/coverage.xml
          verbose: true

Investigate threshold bug with label time generated

  • When we perform the labeling, we do not know which thresholds were used because the thresholds are not associated with the prediction problem.
  • So if we generate a prediction problem and want to label it using a certain threshold, we cannot.

Allow for multiple filter columns in prediction problems

PredictionProblem.is_valid_prediction_problem assumes that there is only one filter column, that only one filter operation exists, and that it is the first operation.

utils.generate_nl_description assumes that only one filter operation exists.

These assumptions hold for generated prediction problems, but they don't have to hold for all problems.

The PredictionProblem.execute method (I think) already works without this first-filter assumption.

Add threshold function that uses entropy to maximize uncertainty

    def find_threshold_to_maximize_uncertainty(
        self,
        df,
        label_col,
        entity_col,
        max_num_unique_values=10,
        max_number_of_rows=2000,
        random_state=None,
    ):
        original_threshold = self.threshold

        unique_vals = sample_unique_values(
            df[label_col],
            max_num_unique_values,
            random_state,
        )

        # if len(df) > max_number_of_rows:
        #     df = df.sample(max_number_of_rows, random_state=random_state)

        best_entropy = 0
        best_parameter_value = 0

        # return the one that results in the most entropy (contains the most randomness)
        # more entropy means more unpredictability
        # goal of ML is to reduce uncertainty
        # so we want to output the dataframe with the most entropy
        for unique_val in unique_vals:
            self.set_parameters(threshold=unique_val)

            output_df = df.groupby(entity_col).apply(self.label_function)
            current_entropy = entropy_of_list(output_df[label_col])

            if current_entropy > best_entropy:
                best_entropy = current_entropy
                best_parameter_value = unique_val

        self.set_parameters(threshold=original_threshold)
        self.set_parameters(threshold=original_threshold)
        return best_parameter_value

def test_find_threshold_to_maximize_uncertainty(df):
    op = GreaterFilterOp("col")
    op.set_parameters(threshold=30.0)
    best_parameter_value = op.find_threshold_to_maximize_uncertainty(
        df,
        label_col="col",
        entity_col="id",
        random_state=0,
        max_num_unique_values=2,
    )
    # 10 will keep most of the values in col and maximize unpredictability
    # 10 is the lowest number
    assert best_parameter_value == 10
    assert op.threshold == 30.0
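For reference, a minimal sketch of what the entropy_of_list helper used above presumably computes (Shannon entropy of the empirical label distribution); the actual implementation in Trane may differ:

from collections import Counter
import math

def entropy_of_list(values):
    # Shannon entropy (in bits) of the empirical distribution of values.
    counts = Counter(values)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())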

Add IDENTITY_ORDER_BY operation from paper

To generate problems like:

  • Which products will have the most sales next month?
  • What products will have the most profit next month?
  • Which customers will make the most purchases next quarter?
  • Which products will receive the most complaints next month?

We need the IDENTITY_ORDER_BY operation from the paper.
