

License: Apache License 2.0


Mathematics Dataset

This dataset code generates mathematical question and answer pairs, from a range of question types at roughly school-level difficulty. This is designed to test the mathematical learning and algebraic reasoning skills of learning models.

Original paper: Analysing Mathematical Reasoning Abilities of Neural Models (Saxton, Grefenstette, Hill, Kohli).

Example questions

Question: Solve -42*r + 27*c = -1167 and 130*r + 4*c = 372 for r.
Answer: 4

Question: Calculate -841880142.544 + 411127.
Answer: -841469015.544

Question: Let x(g) = 9*g + 1. Let q(c) = 2*c + 1. Let f(i) = 3*i - 39. Let w(j) = q(x(j)). Calculate f(w(a)).
Answer: 54*a - 30

Question: Let e(l) = l - 6. Is 2 a factor of both e(9) and 2?
Answer: False

Question: Let u(n) = -n**3 - n**2. Let e(c) = -2*c**3 + c. Let l(j) = -118*e(j) + 54*u(j). What is the derivative of l(a)?
Answer: 546*a**2 - 108*a - 118

Question: Three letters picked without replacement from qqqkkklkqkkk. Give prob of sequence qql.
Answer: 1/110
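Answers like these can be checked mechanically. A small sketch (using sympy, which the dataset code itself depends on) that verifies the composition and differentiation examples above:

```python
import sympy

a = sympy.Symbol('a')

# Composition example: x(g) = 9*g + 1, q(c) = 2*c + 1,
# f(i) = 3*i - 39, w(j) = q(x(j)); compute f(w(a)).
x = lambda g: 9 * g + 1
q = lambda c: 2 * c + 1
f = lambda i: 3 * i - 39
w = lambda j: q(x(j))
print(sympy.expand(f(w(a))))  # 54*a - 30

# Derivative example: u(n) = -n**3 - n**2, e(c) = -2*c**3 + c,
# l(j) = -118*e(j) + 54*u(j); compute d/da l(a).
u = lambda n: -n**3 - n**2
e = lambda c: -2 * c**3 + c
l = lambda j: -118 * e(j) + 54 * u(j)
print(sympy.diff(sympy.expand(l(a)), a))  # 546*a**2 - 108*a - 118
```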

Pre-generated data

Pre-generated files

Version 1.0

This is the version released with the original paper. It contains 2 million (question, answer) pairs per module, with questions limited to 160 characters in length, and answers to 30 characters in length. Note the training data for each question type is split into "train-easy", "train-medium", and "train-hard". This allows training models via a curriculum. The data can also be mixed together uniformly from these training datasets to obtain the results reported in the paper. Categories:

  • algebra (linear equations, polynomial roots, sequences)
  • arithmetic (pairwise operations and mixed expressions, surds)
  • calculus (differentiation)
  • comparison (closest numbers, pairwise comparisons, sorting)
  • measurement (conversion, working with time)
  • numbers (base conversion, remainders, common divisors and multiples, primality, place value, rounding numbers)
  • polynomials (addition, simplification, composition, evaluating, expansion)
  • probability (sampling without replacement)
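A minimal sketch of loading one pre-generated file, assuming the released `.txt` layout of alternating lines (question, answer, question, answer, ...):

```python
def read_pairs(path):
    """Read a pre-generated file into a list of (question, answer) pairs.

    Assumes questions sit on even-numbered lines and their answers on the
    following odd-numbered lines.
    """
    with open(path) as f:
        lines = [line.rstrip('\n') for line in f]
    # Pair each even-indexed question with the odd-indexed answer after it.
    return list(zip(lines[0::2], lines[1::2]))
```

For example, `read_pairs('train-easy/algebra__linear_1d.txt')` (an illustrative path) would return two-million pairs for that module.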

Getting the source

PyPI

The easiest way to get the source is to use pip:

$ pip install mathematics_dataset

From GitHub

Alternatively, you can get the source by cloning the mathematics_dataset repository:

$ git clone https://github.com/deepmind/mathematics_dataset
$ pip install --upgrade mathematics_dataset/

Generating examples

Generated examples can be printed to stdout via the generate script. For example:

python -m mathematics_dataset.generate --filter=linear_1d

will generate example (question, answer) pairs for solving linear equations in one variable.

We've also included generate_to_file.py as an example of how to write the generated examples to text files. You can use this directly, or adapt it for your generation and training needs.
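A sketch of writing a full dataset to disk with it (the per-module flag names come from generate.py; the exact values shown are assumptions, chosen to match the 2 million pairs per module of the pre-generated release):

```shell
# Write generated examples to per-module text files under /tmp/mathematics_dataset.
# --per_train_module / --per_test_module control how many pairs each module gets.
python -m mathematics_dataset.generate_to_file \
  --output_dir=/tmp/mathematics_dataset \
  --per_train_module=2000000 \
  --per_test_module=10000
```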

Dataset Metadata

The following table is necessary for this dataset to be indexed by search engines such as Google Dataset Search.

property value
name Mathematics Dataset
url
sameAs https://github.com/deepmind/mathematics_dataset
description This dataset consists of mathematical question and answer pairs, from a range of question types at roughly school-level difficulty. This is designed to test the mathematical learning and algebraic reasoning skills of learning models.\n \n ## Example questions\n \n ```\n Question: Solve -42*r + 27*c = -1167 and 130*r + 4*c = 372 for r.\n Answer: 4\n \n Question: Calculate -841880142.544 + 411127.\n Answer: -841469015.544\n \n Question: Let x(g) = 9*g + 1. Let q(c) = 2*c + 1. Let f(i) = 3*i - 39. Let w(j) = q(x(j)). Calculate f(w(a)).\n Answer: 54*a - 30\n ```\n \n It contains 2 million (question, answer) pairs per module, with questions limited to 160 characters in length, and answers to 30 characters in length. Note the training data for each question type is split into "train-easy", "train-medium", and "train-hard". This allows training models via a curriculum. The data can also be mixed together uniformly from these training datasets to obtain the results reported in the paper. Categories:\n \n * **algebra** (linear equations, polynomial roots, sequences)\n * **arithmetic** (pairwise operations and mixed expressions, surds)\n * **calculus** (differentiation)\n * **comparison** (closest numbers, pairwise comparisons, sorting)\n * **measurement** (conversion, working with time)\n * **numbers** (base conversion, remainders, common divisors and multiples,\n primality, place value, rounding numbers)\n * **polynomials** (addition, simplification, composition, evaluating, expansion)\n * **probability** (sampling without replacement)
provider
property value
name DeepMind
sameAs https://en.wikipedia.org/wiki/DeepMind
citation https://identifiers.org/arxiv:1904.01557

mathematics_dataset's People

Contributors

akpandeya, chrisgorgo, davidsaxton, javierlorenzod, tirkarthi


mathematics_dataset's Issues

Errors in the training set

Hello

There is an error in the generated arithmetic_add_or_sub examples. Even the pre-generated dataset, in the file arithmetic_add_or_sub.txt, contains errors with this template.
Examples:

What is the difference between -0.026764 and 0.1?
0.126764
What is the difference between 0.4 and 2.815?
2.415

It misses the minus sign for negative answers; for positive answers it looks fine:

What is the difference between 307 and 0.11?
306.89

cannot import name 'base_solution_linear'

After cloning the repository I tried to run "python generate_to_file.py", but it raises an error:

Traceback (most recent call last):
  File "generate_to_file.py", line 39, in <module>
    from mathematics_dataset import generate
  File "/home/ucleraiserver/.conda/envs/torch/lib/python3.7/site-packages/mathematics_dataset/generate.py", line 29, in <module>
    from mathematics_dataset.modules import modules
  File "/home/ucleraiserver/.conda/envs/torch/lib/python3.7/site-packages/mathematics_dataset/modules/modules.py", line 21, in <module>
    from mathematics_dataset.modules import algebra
  File "/home/ucleraiserver/.conda/envs/torch/lib/python3.7/site-packages/mathematics_dataset/modules/algebra.py", line 25, in <module>
    from mathematics_dataset import example
  File "/home/ucleraiserver/.conda/envs/torch/lib/python3.7/site-packages/mathematics_dataset/example.py", line 23, in <module>
    from mathematics_dataset.util import composition
  File "/home/ucleraiserver/.conda/envs/torch/lib/python3.7/site-packages/mathematics_dataset/util/composition.py", line 28, in <module>
    from mathematics_dataset.sample import polynomials
  File "/home/ucleraiserver/.conda/envs/torch/lib/python3.7/site-packages/mathematics_dataset/sample/polynomials.py", line 33, in <module>
    from sympy.solvers.diophantine import base_solution_linear as diophantine_solve_linear_2d
ImportError: cannot import name 'base_solution_linear' from 'sympy.solvers.diophantine' (/home/ucleraiserver/.conda/envs/torch/lib/python3.7/site-packages/sympy/solvers/diophantine/__init__.py)

Do you know how to resolve it?
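One likely cause (an assumption, not verified against every release): SymPy 1.6 moved the Diophantine solvers into a sub-module, so the old import path no longer exists. Pinning an older sympy (e.g. `pip install 'sympy<1.6'`) should work; alternatively, a compatibility import along these lines:

```python
# Try the pre-1.6 location first, then fall back to the newer sub-module path.
try:
    from sympy.solvers.diophantine import base_solution_linear
except ImportError:
    from sympy.solvers.diophantine.diophantine import base_solution_linear

# base_solution_linear(c, a, b) returns a particular integer solution (x, y)
# of a*x + b*y = c.
x, y = base_solution_linear(5, 2, 3)
print(2 * x + 3 * y)  # 5
```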

How to download the whole dataset?

I think I should increase the per_train_module and per_test_module parameters in generate.py, but what are the proper values for generating the whole dataset, including the train and test sets?

Where is the rest of the pre-generated data?

Hi! I'm looking to reproduce the results in the paper, but when downloading from the link in the README, the .tar file only contains train-medium.

The GitHub README says the training data for each question type is split into "train-easy", "train-medium", and "train-hard".

Should the train-easy, train-hard, and test-easy/medium/hard datasets be included in the pre-generated files off of GCP, or do we need to generate it ourselves? If the latter, how can we ensure that the generated data/results are the exact same as the paper?

Questions about the training settings

Hi! I am really interested in this fascinating work. However, I have some questions about the training methods for the transformer model.

In the paper you mention that the transformer model is trained with a learning rate of 6e-4, but you do not say which learning-rate decay method you used, which I am curious about. I am also curious about the number of layers in the encoder and decoder.

Could you please describe the training settings more specifically? It would be more convenient for those of us who want to reproduce your results if you could publish your training source code.

Thank you very much!

Question about LSTM training setting

I'm trying to reproduce the baseline performance of Attentional LSTM in the paper.

Even though I use the same training hyper-parameters as the paper, I cannot reach performance similar to the baseline. I suspect the problem with my implementation is the ignored character. The paper says 96 characters are used, including one special token.

What I'm wondering is: if there's no token, should the predictions inferred by the model also be padded with ignored characters? In that case the loss function shouldn't ignore those tokens, and I'm worried that the ignored character has too much impact on training.

Also, if it's possible, can you share the training code for baseline results?
It would be greatly helpful to me for reproducing the results of the paper. :)
Thanks!

public dataset filename is ambiguous

Thanks for sharing this excellent work.

When downloading the generated dataset, the filename is v1.0.tar.gz.

Consider changing it to something like mathematics-dataset-v1.0.tar.gz so it isn't ambiguous in a downloads folder?

Question about baseline results

Did you use beam search for the accuracy values reported in the paper? If so, is the accuracy top-1 or top-k accuracy?

Generate mathematical statements rather than questions

Is there an easy way to modify the code in this repo so that you can generate statements rather than questions?

For example, instead of generating a sample with the question "What is 4 + 4?" and the answer "8", is it possible to generate the statement "4 + 4 = 8"?
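The repository has no built-in option for this (as far as this issue shows), but simple question templates can be rewritten in post-processing. A purely illustrative sketch (the function name and handled templates are made up, covering only "What is …?" / "Calculate …" questions):

```python
import re

def to_statement(question, answer):
    """Rewrite 'What is X?' or 'Calculate X.' into the statement 'X = answer'.

    Returns None for question templates this sketch does not handle.
    """
    match = re.fullmatch(r'(?:What is|Calculate) (.+?)[?.]', question)
    if match is None:
        return None
    return f'{match.group(1)} = {answer}'

print(to_statement('What is 4 + 4?', '8'))  # 4 + 4 = 8
```

Questions that embed their own definitions (e.g. the "Let x(g) = …" examples above) would need per-module handling rather than a single regex.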

Train and evaluate problem

Hi there! This is really nice work. I have a question about using the dataset: in the paper, do you train a single neural network model on all kinds of mathematics questions, or train separate models for the different modules?
