Comments (7)

ADBond avatar ADBond commented on June 25, 2024 2

Perhaps we could ORDER BY the table we sample from within the method, if a seed is provided?
Something roughly like (in estimate_u.py)

if seed is not None:
    # wrap in parentheses so it can be used as a derived table in FROM
    table_to_sample_from = "(SELECT * FROM __splink__df_concat_with_tf ORDER BY unique_id)"
else:
    table_to_sample_from = "__splink__df_concat_with_tf"
...
sql = f"""
SELECT *
FROM {table_to_sample_from}
{sampling_sql_string}
"""

I realise there is a computational cost to this, but it is potentially not too bad, and as this is an (optional) part of training it might be a reasonable trade-off to offer users
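As a rough illustration of why the ORDER BY helps (a plain-Python sketch using only the stdlib, not Splink's actual sampler): pinning down the scan order before a seeded Bernoulli-style sample makes the result independent of how the rows were physically laid out.

```python
import random

def bernoulli_sample(rows, p, seed):
    # Seeded per-row coin flip, analogous to SQL Bernoulli sampling:
    # whether the k-th scanned row is kept depends only on the seed.
    rng = random.Random(seed)
    return [row for row in rows if rng.random() < p]

rows = [f"id_{i}" for i in range(20)]
reordered = rows[::-1]  # same rows, different physical order

# Sorting first (the analogue of ORDER BY unique_id) fixes the scan
# order, so the seeded sample no longer depends on physical layout.
sample_a = bernoulli_sample(sorted(rows), 0.3, seed=2)
sample_b = bernoulli_sample(sorted(reordered), 0.3, seed=2)
print(sample_a == sample_b)  # True
```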

ADBond avatar ADBond commented on June 25, 2024 2

I'm happy to give it a whirl and then revisit if it turns out to not be just a quick few lines

RobinL avatar RobinL commented on June 25, 2024

Thanks for the report

What do you get if you run this?

from splink.datasets import splink_datasets
from splink.duckdb.linker import DuckDBLinker
import altair as alt
from splink.duckdb.comparison_library import exact_match

import pandas as pd 
pd.options.display.max_rows = 1000
df = splink_datasets.historical_50k

from splink.duckdb.blocking_rule_library import block_on

# Simple settings dictionary will be used for exploratory analysis
settings = {
    "link_type": "dedupe_only",
    "blocking_rules_to_generate_predictions": [
        block_on(["first_name", "surname"]),
        block_on(["surname", "dob"]),
        block_on(["first_name", "dob"]),
        block_on(["postcode_fake", "first_name"]),
    ],
    "comparisons": [
        exact_match("first_name"),
        exact_match("surname"),
        
 
    ],
    "retain_matching_columns": True,
    "retain_intermediate_calculation_columns": True,
    "max_iterations": 10,
    "em_convergence": 0.01
}
for i in range(10):
    linker = DuckDBLinker(df, settings)

    linker.estimate_u_using_random_sampling(1e5, seed=2)
    print(linker.save_settings_to_json()["comparisons"][0]["comparison_levels"][1]["u_probability"])

I get 0.011881562201718104 every time

James-Osmond avatar James-Osmond commented on June 25, 2024

Thanks for such a quick response @RobinL! That's very interesting - you're right, I get that exact value every time. However, if I use a more complex comparisons list, I no longer get consistent values with the following:

from splink.datasets import splink_datasets
from splink.duckdb.linker import DuckDBLinker
import altair as alt
from splink.duckdb.comparison_library import exact_match
import splink.duckdb.comparison_template_library as ctl
import splink.duckdb.comparison_library as cl


import pandas as pd 
pd.options.display.max_rows = 1000
df = splink_datasets.historical_50k

from splink.duckdb.blocking_rule_library import block_on

# Simple settings dictionary will be used for exploratory analysis
settings = {
    "link_type": "dedupe_only",
    "blocking_rules_to_generate_predictions": [
        block_on(["first_name", "surname"]),
        block_on(["surname", "dob"]),
        block_on(["first_name", "dob"]),
        block_on(["postcode_fake", "first_name"]),
    ],
    "comparisons": [
        ctl.name_comparison("first_name", term_frequency_adjustments=True),
        ctl.name_comparison("surname", term_frequency_adjustments=True),
        ctl.date_comparison("dob", cast_strings_to_date=True, invalid_dates_as_null=True),
        ctl.postcode_comparison("postcode_fake"),
        cl.exact_match("birth_place", term_frequency_adjustments=True),
        cl.exact_match("occupation",  term_frequency_adjustments=True),
    ],
    "retain_matching_columns": True,
    "retain_intermediate_calculation_columns": True,
    "max_iterations": 10,
    "em_convergence": 0.01
}

for i in range(10):
    linker = DuckDBLinker(df, settings)

    linker.estimate_u_using_random_sampling(1e5, seed=2)
    print(linker.save_settings_to_json()["comparisons"][0]["comparison_levels"][1]["u_probability"])

ADBond avatar ADBond commented on June 25, 2024

Haven't looked in great detail yet, but it looks like the table __splink__df_concat_with_tf is not consistent between iterations. Then when we Bernoulli sample from it, because the rows are in a different order, we get a different __splink__df_concat_with_tf_sample table, which ultimately means a different u-probability.
This might be driven by the tf tables themselves, which are pretty variable in order - I guess this then affects how the join ends up working
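The order-dependence described here can be reproduced outside SQL with a small stdlib-only Python sketch (not Splink's actual sampler): a seeded Bernoulli-style sampler keeps the same positions on every run, but if the rows arrive in a different physical order, different rows occupy those positions.

```python
import random

def bernoulli_sample(rows, p, seed):
    # Seeded per-row coin flip: the k-th row scanned is kept iff the
    # k-th draw from the seeded RNG falls below p. The draws depend
    # only on the seed, not on which row happens to be scanned k-th.
    rng = random.Random(seed)
    return [row for row in rows if rng.random() < p]

rows = [f"id_{i}" for i in range(20)]
reordered = rows[::-1]  # same rows, different physical order

sample_a = bernoulli_sample(rows, 0.3, seed=42)
sample_b = bernoulli_sample(reordered, 0.3, seed=42)

# Same seed => the same positions are kept, so the sample sizes match...
print(len(sample_a) == len(sample_b))        # True
# ...but different rows sat at those positions, so the samples differ.
print(sorted(sample_a) == sorted(sample_b))  # False
```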

RobinL avatar RobinL commented on June 25, 2024

Right - nice spot - so it's a consequence of tables (results) in SQL being inherently unordered (unless an ORDER BY is specified) as opposed to anything to do with the random number generator itself. That would make sense.

Whilst this would theoretically be fixable by ensuring we put an unambiguous ORDER BY in all our results, it would add too much complexity to the codebase (and be a big piece of work) so we might just need to drop support for seed altogether in backends that produce inconsistent results. @RossKen what do you think?

RobinL avatar RobinL commented on June 25, 2024

Yes - it hadn't occurred to me the solution might be that simple - but if it is, that sounds like a sensible solution to me
