
SparkER: an Entity Resolution framework for Apache Spark

License: GNU General Public License v3.0


SparkER

An Entity Resolution framework developed in Scala and Python for Apache Spark.

Please note that the Scala version is no longer maintained; only the Python version is kept up to date.


If you use this library, please cite:

@inproceedings{sparker,
  author    = {Luca Gagliardelli and
               Giovanni Simonini and
               Domenico Beneventano and
               Sonia Bergamaschi},
  title     = {SparkER: Scaling Entity Resolution in Spark},
  booktitle = {Advances in Database Technology - 22nd International Conference on
               Extending Database Technology, {EDBT} 2019, Lisbon, Portugal, March
               26-29, 2019},
  pages     = {602--605},
  publisher = {OpenProceedings.org},
  year      = {2019},
  doi       = {10.5441/002/edbt.2019.66}
}

News

  • 2022-05-18: we added the Generalized Supervised Meta-Blocking described in our paper [6]; an example of its usage is available.

Entity Resolution

Entity Resolution (ER) is the task of identifying different records (a.k.a. entity profiles) that refer to the same real-world entity. Comparing all possible pairs of records in a data set is very inefficient (quadratic complexity), in particular in the context of Big Data, e.g., when hundreds of millions of records have to be compared. To reduce this complexity, ER typically employs blocking techniques (e.g., token blocking, n-grams, etc.) to group profiles into clusters called blocks. The goal of this process is to reduce the global number of comparisons, because only the records that appear in the same block are compared.
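As a toy illustration of how blocking cuts down the number of comparisons (plain Python, not SparkER's API; the profile texts are invented):

```python
from itertools import combinations

# Four hypothetical profiles: 1-2 and 3-4 refer to the same entities.
profiles = {
    1: "google mountain view",
    2: "google inc",
    3: "facebook menlo park",
    4: "facebook corp",
}

# Token blocking: every token is a blocking key, and profiles sharing
# a token end up in the same block.
blocks = {}
for pid, text in profiles.items():
    for token in set(text.split()):
        blocks.setdefault(token, set()).add(pid)

# Comparisons are generated only inside blocks, so instead of all
# 6 possible pairs we compare just the 2 pairs that share a token.
candidates = {p for ids in blocks.values() for p in combinations(sorted(ids), 2)}
print(sorted(candidates))  # [(1, 2), (3, 4)]
```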

Unfortunately, in the Big Data context blocking techniques still produce too many comparisons to be processed in a reasonable time; to reduce the number of comparisons further, meta-blocking techniques were introduced [2]. The idea is to build a graph from the information carried by the blocks: the profiles in the blocks are the nodes of the graph, and the comparisons between them are the edges. It is then possible to compute metrics on the graph and use them to prune the least significant edges.
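A minimal sketch of this graph construction and pruning, assuming the simple CBS (Common Blocks Scheme) weight, i.e. the number of blocks two profiles share; the block contents are invented and this is not SparkER's implementation:

```python
from collections import Counter
from itertools import combinations

# Hypothetical blocks produced by a blocking step: key -> profile ids.
blocks = {
    "google": {1, 2},
    "smith": {1, 2, 3},
    "2020": {2, 3, 4},
}

# Nodes are profiles; every co-occurrence in a block adds 1 to the
# weight of the edge between the two profiles (CBS weighting).
edges = Counter()
for ids in blocks.values():
    for pair in combinations(sorted(ids), 2):
        edges[pair] += 1

# Prune the least significant edges: keep only those above the mean weight.
mean_weight = sum(edges.values()) / len(edges)
kept = {pair: w for pair, w in edges.items() if w > mean_weight}
print(kept)  # {(1, 2): 2, (2, 3): 2}
```

Real meta-blocking schemes use more refined weights (e.g. chi-square in BLAST [1]) and pruning strategies, but the graph-then-prune structure is the same.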

Meta-Blocking for Spark

SparkER implements for Spark the Meta-Blocking techniques described in Simonini et al. [1], Papadakis et al. [2], and Gagliardelli et al. [6].

(Figure: overview of the SparkER stages)

The process is composed of the following stages:

  1. Profile loading: loads the data (CSV, JSON, and serialized formats are supported) into entity profiles;
  2. Blocking: performs the blocking, using token blocking or Loose Schema Blocking [1];
  3. Block purging: removes the biggest blocks, which usually stem from stopwords or very common tokens that do not provide significant relations [4];
  4. Block filtering: for each entity profile, filters out the biggest blocks [3];
  5. Meta-blocking: performs the meta-blocking, producing as result the list of candidate pairs that could be matches.
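The stages above can be sketched end to end in plain Python (a toy illustration with invented data and thresholds, not SparkER's actual API):

```python
import math
from itertools import combinations

# Stage 1 - hypothetical toy profiles (id -> text). The real loader
# ingests CSV, JSON, and serialized data.
profiles = {
    1: "google inc",
    2: "google inc canada usa",
    3: "facebook inc canada",
    4: "facebook corp",
}

# Stage 2 - token blocking: one block per token (sorted for determinism).
blocks = {}
for pid, text in profiles.items():
    for tok in sorted(set(text.split())):
        blocks.setdefault(tok, set()).add(pid)

# Stage 3 - block purging: drop oversized blocks; here "inc" acts like
# a stopword, relating almost every profile to every other one.
MAX_BLOCK_SIZE = 2  # illustrative threshold
blocks = {k: v for k, v in blocks.items() if len(v) <= MAX_BLOCK_SIZE}

# Stage 4 - block filtering: each profile keeps only the smallest
# fraction r of its blocks (smaller blocks are more discriminative).
r = 0.6
per_profile = {}
for key in blocks:
    for pid in blocks[key]:
        per_profile.setdefault(pid, []).append(key)
kept = {}
for pid, keys in per_profile.items():
    keys.sort(key=lambda k: len(blocks[k]))
    for k in keys[: math.ceil(r * len(keys))]:
        kept.setdefault(k, set()).add(pid)

# Stage 5 - candidate pairs from the surviving blocks; filtering pruned
# the spurious "canada" comparison while keeping both true matches.
# (Full meta-blocking would additionally weight and prune these edges.)
candidates = {p for ids in kept.values() for p in combinations(sorted(ids), 2)}
print(sorted(candidates))  # [(1, 2), (3, 4)]
```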

Datasets

To test SparkER we provide a set of datasets that can be downloaded here. It is also possible to use the datasets proposed in [2].

Contacts

For any questions about SparkER, write to us at [email protected]

  • Luca Gagliardelli
  • Giovanni Simonini

References

[1] Simonini, G., Bergamaschi, S., & Jagadish, H. V. (2016). BLAST: a Loosely Schema-aware Meta-blocking Approach for Entity Resolution. PVLDB, 9(12), 1173–1184. link

[2] Papadakis, G., Koutrika, G., Palpanas, T., & Nejdl, W. (2014). Meta-blocking: Taking entity resolution to the next level. IEEE TKDE.

[3] Papadakis, G., Papastefanatos, G., Palpanas, T., & Koubarakis, M. (2016). Scaling Entity Resolution to Large, Heterogeneous Data with Enhanced Meta-blocking. EDBT 2016, 221–232.

[4] Papadakis, G., Ioannou, E., Niederée, C., & Fankhauser, P. (2011). Efficient entity resolution for large heterogeneous information spaces. Proceedings of the Fourth ACM International Conference on Web Search and Data Mining - WSDM ’11, 535.

[5] Gagliardelli, L., Zhu, S., Simonini, G., & Bergamaschi, S. (2018). Bigdedup: a Big Data integration toolkit for duplicate detection in industrial scenarios. In 25th International Conference on Transdisciplinary Engineering (TE2018) (Vol. 7, pp. 1015-1023). link

[6] Gagliardelli, L., Papadakis, G., Simonini, G., Bergamaschi, S., & Palpanas, T. (2022). Generalized Supervised Meta-Blocking. In PVLDB. link

sparker's People

Contributors

gaglia88, stravanni


sparker's Issues

Concern with weight calculation using BLAST and entropies

This library is pretty incredible; I just have a bit of a concern I wanted to report.

My use case is as follows:

Take two CSVs containing customer data that should have one or more matchable fields (an identifier, for example):

customers1.csv:

id name random_field_1 random_field_2 random_field_3 etc...
1 google 555-333-222 ... ... ...
2 facebook 222-555-111 ... ... ...
3 microsoft 333-111-888 ... ... ...

customers2.csv:

identifier customer_name random_field_1 random_field_2 random_field_3 etc...
5 google inc 555 ... ... ...
10 facebook corp 111 ... ... ...
300 microsoft industries 555 ... ... ...
  1. create profiles
  2. cluster_similar_attributes
[
    {'cluster_id': 1, 'keys': ['1_name', '2_customer_name'], 'entropy': 1.4},
    {'cluster_id': 2, 'keys': ['1_id', '2_id', '1_random_field_1', '2_random_field_1', '1_random_field_2', '2_random_field_2', (etc...)], 'entropy': 9.5},
]
  3. create_block_clusters
[
{'block_id': 0, 'profiles': [{0}, {0}], 'entropy': 1.4, 'cluster_id': -1, 'blocking_key': ''}
{'block_id': 1, 'profiles': [{1,2}, {1}], 'entropy': 9.5, 'cluster_id': -1, 'blocking_key': ''}
], 
  4. block purging
  5. block filtering
  6. WNP
    I would get a few mismatches because the weight of matches for cluster_id 2 would be greater than for cluster_id 1.
    Assuming there are 100 rows in each dataset, with 100 being the separator id (200 profiles total), the output edges would look something like:
[[0, 100, 10.5]
[1, 101, 10.5]
[2, 101, 20.8]]

You will notice that the higher weight goes to the match that has the higher entropy.
This doesn't seem correct to me, since lower entropy should give a higher weight.

Using the standard library code I was able to get around 80-90 perfect matches. Once I edited the calc_weights function in common_node_pruning.py from calc_chi_square(...) * entropies[neighbor_id] to calc_chi_square(...) / entropies[neighbor_id], I was able to get 100 perfect 1-to-1 matches.
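In isolation, the two weighting variants behave as follows (the weight helper and the numbers below are illustrative, not SparkER's actual code; the entropies are the ones from the clusters above):

```python
def weight(chi2, entropy, divide=False):
    # BLAST-style weight: chi-square scaled by the entropy of the
    # attribute cluster the comparison comes from.
    return chi2 / entropy if divide else chi2 * entropy

# Same chi-square value, two clusters with different entropies.
name_cluster = weight(4.0, 1.4)  # multiplication: ~5.6
junk_cluster = weight(4.0, 9.5)  # multiplication: 38.0

# With multiplication the noisy high-entropy cluster dominates the
# ranking; dividing instead flips it in favor of the low-entropy
# name cluster.
assert weight(4.0, 1.4, divide=True) > weight(4.0, 9.5, divide=True)
```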

Does using division instead of multiplication here make sense, and is my assumption that lower entropy should mean a stronger match correct?

Please let me know :)
