
kickscore's Introduction

kickscore


kickscore is the dynamic skill rating system powering Kickoff.ai.

In short, kickscore can be used to understand & visualize the skill of players (or teams) competing in pairwise matches, and to predict outcomes of future matches. It extends the Elo rating system and TrueSkill.

[figure: evolution of NBA teams' skill over history]

Getting started

To install the latest release directly from PyPI, simply type:

pip install kickscore

To get started, you might want to explore one of these notebooks:

References

kickscore's People

Contributors

lucasmaystre


kickscore's Issues

Delta updates

Is it possible to add observations to a model without refitting the entire model? I'd like to evaluate kickscore's predictions for each match in my dataset after it has observed all earlier matches, but this is not feasible if I have to rerun .fit from scratch for each of my 70,000 matches.

timestamp() error on Windows with dates before 1970

Hi there,

I'm working through your NBA example, and I'm noticing that anywhere we try to use .timestamp(), e.g.

  • t = datetime.strptime(row["date"], "%Y-%m-%d").timestamp()
  • or when setting timestamps=True in the plotting (ts = [datetime.fromtimestamp(t) for t in ts])

it throws an error when encountering dates before 1970, as below:
OSError: [Errno 22] Invalid argument

This doesn't happen in the online notebook, and from some googling it sounds like a Windows-specific issue. I was able to work around it in the main section by manually computing the timedelta from the epoch and converting it to seconds (t = (datetime.strptime(row["date"], "%Y-%m-%d") - datetime(1970, 1, 1)).total_seconds()), but that doesn't help me with the plotting, unfortunately.

Do you have any suggested workarounds or fixes? Let me know if any more info is needed.

Edit: To clarify, I'm running Python 3.8 in Spyder 4.1.1 on Windows
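For the plotting direction, the same epoch-offset trick can be applied in reverse. A sketch using only the standard library (note it treats dates as UTC, whereas .timestamp() on a naive datetime uses local time, so the two can differ by your UTC offset):

```python
from datetime import datetime, timedelta

EPOCH = datetime(1970, 1, 1)

def to_timestamp(date_str):
    # Windows-safe replacement for datetime.timestamp() on pre-1970 dates.
    return (datetime.strptime(date_str, "%Y-%m-%d") - EPOCH).total_seconds()

def from_timestamp(t):
    # Windows-safe replacement for datetime.fromtimestamp() on negative values.
    return EPOCH + timedelta(seconds=t)

t = to_timestamp("1950-11-01")   # negative: before the epoch
assert from_timestamp(t) == datetime(1950, 11, 1)
```

The from_timestamp helper can be used in place of datetime.fromtimestamp when building the ts list for plotting.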

How to use the countdiff model

I didn't find any information or examples about how to set up the countdiff model.
What about win probabilities? How should they be calculated?
And how should observations be set up? Similar to the count model, or as a difference?

parameter selection

hello,

first of all, congratulations on your work! I was wondering if there is a guide or code for parameter selection.
I have a dataset of online games with 2,000 players and 5,000 games. I fit the model as in the NBA example, but the results are much worse than Elo or TrueSkill.

greetings

Draw

Is there a way to handle drawn matches? Thank you

Algorithm consideration

Kickscore is a rating-based model.
Over time a team might lose rating, but in the short term a longer break than the other team's is considered an advantage.

How would you model this mathematically?

Model never converges when events with multiples winners are observed

I'm attempting to apply kickscore to a game where each team has 5 players, but I've found that when I try to add a single event with multiple winners and losers the model never converges. When I have a separate event for each winner winning against each loser the model does usually converge. Is there something I should be doing differently in this scenario?
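The workaround described above (a separate event per winner-loser pair) can be generated mechanically. A minimal sketch, assuming binary observations in the winners/losers format; the rosters are illustrative:

```python
from itertools import product

def pairwise_observations(winning_team, losing_team, t):
    # One binary observation per (winner, loser) pair, all at the same time t.
    return [{"winners": [w], "losers": [l], "t": t}
            for w, l in product(winning_team, losing_team)]

obs = pairwise_observations(["a1", "a2"], ["b1", "b2"], t=0.0)
# 2 x 2 = 4 observations.
```

Note that each 5v5 match then yields 25 binary observations, effectively weighting team matches more heavily than single observations; whether that biases the ratings is worth checking.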

How to save a trained model

Hello,

I've tried saving the trained model using pickle and sklearn's joblib. Both break down with a maximum-recursion-depth error. I've tried raising the default recursion limit, but then the pickle job runs for what seems like forever without terminating.

Do you have a preferred method for saving a trained model?

Thanks for an excellent library!

Produce ranking

Can a ranking be produced? The model.probabilities method accepts only two teams as input, but what if you want to produce a ranking of all items using the games played so far?
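One possible approach, not a built-in feature: rank items by the posterior mean of their score at a chosen time, e.g. the latest one. A self-contained sketch over hypothetical (ts, means, vars) trajectories shaped like an Item's scores attribute:

```python
def rank_items(scores):
    # scores: name -> (ts, means, vars), as exposed by each Item's
    # scores attribute after fitting. Rank by the most recent
    # posterior mean, highest first.
    return sorted(scores, key=lambda name: scores[name][1][-1], reverse=True)

scores = {  # hypothetical fitted trajectories
    "team_a": ([1.0, 2.0], [0.1, 0.4], [0.5, 0.5]),
    "team_b": ([1.0, 2.0], [0.3, -0.2], [0.5, 0.5]),
    "team_c": ([1.0, 2.0], [0.0, 0.9], [0.5, 0.5]),
}
ranking = rank_items(scores)  # ["team_c", "team_a", "team_b"]
```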

GaussianObservation.probability always returns zero

I'm trying to use the difference model, but all the returned probabilities are (0, 1). Looking at the code,

prob = GaussianObservation.probability(items, threshold, var, t=t)
return (prob, 1 - prob)

suggests that the GaussianObservation.probability is always returning zero. Any idea what's going on? I'm getting the same issue with the CountDiffModel as well.

My setup is below (pseudocode for the data-dependent parts):

model = ks.DifferenceModel()
kernel = (ks.kernel.Constant(var=0.03)
          + ks.kernel.Matern32(var=0.138, lscale=1.753 * seconds_in_year))

observations = []
for game in games:
    # determine winner, loser, point differential, and game timestamp t
    observations.append({"items1": [winner],
                         "items2": [loser],
                         "diff": point_differential,
                         "t": t})
for obs in observations:
    model.observe(**obs)

converged = model.fit()

preds = []
for game in future_games:
    preds.append(model.probabilities([home_team], [away_team], t=game_timestamp))

which gives all the preds as (0, 1).

plot_scores function returns 0 for all but first and last value

Hi!

I am running kickscore on some data, and when plotting the scores with the plot_scores function, the predict method returns mean 0 and variance 1 for all data points except the first and last timestamps. For the first and last timestamps the predictions are the same as the values in the Item's scores attribute. There are some other anomalies in the score data, but this seems to be by far the most common one.

[image: plotted score trajectories]

Here are 3 players plotted, and they all exhibit the same behaviour.

Also, why does the plot_scores function compute the ms vector with the predict method, rather than just taking the stored values from the scores attribute of an Item?

Another observation: I have players who have only won matches, but their score at the end is almost the same as at the beginning. Shouldn't their trajectory be monotonically increasing, even if the opponents are weak? Below is the scores attribute of an Item for a player who has won roughly 20 matches and lost 0.

[image: score trajectory of an undefeated player]

I am using the BinaryModel, with exponential kernel (var=1, lscale=1) and recursive fitter.

All help is much appreciated!
