
Comments (4)

lucasmaystre commented on June 17, 2024

Hi @markustoivonen

It's hard to say with certainty without looking at the data & code, but here are a few thoughts that might help you:

  1. In general, you might want to combine a dynamic kernel (e.g. Exponential) with a static one (Constant). The constant kernel will capture a player's baseline score, whereas the dynamic one will capture fluctuations around that baseline over time. If you don't, the score will revert to zero during long stretches without observations.
  2. Regarding your first plot: the timescale (x-axis) appears very large (1e9), but the lengthscale of the exponential kernel is 1 (lscale=1). This is why the score drops to zero (the prior mean) very quickly outside the precise moments when comparisons happened. lscale tells the model how far apart in time scores should remain correlated; here, given the timescale of your data, the model thinks the scores at any two time points should be essentially uncorrelated (hence zero most of the time).
  3. Regarding your 2nd plot: kickscore works differently from Elo-style systems, where players gain / lose rating points after every game. kickscore infers the score over time given the entire history of matches. This means that even if a player wins all the time, their score might fluctuate up & down, depending on 1) the strength of their opponents, 2) the interval between successive matches, and 3) prior beliefs about the score & its evolution over time (given by the kernel).
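To make point 2 concrete, here is a small NumPy sketch of the standard exponential covariance formula, k(t, t') = var * exp(-|t - t'| / lscale) (this is not a call into kickscore's API, just the formula), showing how lscale controls how quickly correlation decays:

```python
import numpy as np

def exponential_corr(dt, lscale):
    """Correlation between scores at two times separated by dt,
    under an exponential kernel k(t, t') = var * exp(-|t - t'| / lscale)."""
    return np.exp(-abs(dt) / lscale)

# Two games one day apart, with timestamps measured in seconds.
dt = 86_400.0

# With lscale=1 (i.e., one second), the two scores are essentially uncorrelated:
print(exponential_corr(dt, lscale=1.0))       # ~0.0

# With lscale equal to ~1 year in seconds, they stay strongly correlated:
one_year = 365.25 * 86_400.0
print(exponential_corr(dt, lscale=one_year))  # ~0.997
```

This is why, with timestamps on the order of 1e9 and lscale=1, the inferred score collapses to the prior mean between games.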

Bottom line:

  • try kernel = Exponential(...) + Constant(...); the model will then learn a baseline score for each player. Over long stretches without games, the score will revert to this baseline (instead of zero).
  • play with the lengthscale of the exponential kernel. In particular, try increasing it to the equivalent of ~1 year (the corresponding number depends on your unit of time).
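The effect of adding a Constant kernel can be illustrated with a tiny Gaussian-process regression in plain NumPy (a sketch under simplified Gaussian observations with made-up numbers, not kickscore's actual inference): far from any observation, the posterior mean under an Exponential kernel alone reverts to zero, while under Constant + Exponential it reverts to a learned baseline.

```python
import numpy as np

def k_exp(t1, t2, var=1.0, lscale=1.0):
    """Exponential covariance between all pairs of times in t1 and t2."""
    return var * np.exp(-np.abs(t1[:, None] - t2[None, :]) / lscale)

def k_const(t1, t2, var=1.0):
    """Constant covariance: all scores share a common baseline component."""
    return var * np.ones((len(t1), len(t2)))

def posterior_mean(kernel, ts, ys, t_star, noise=0.1):
    """GP posterior mean at times t_star given noisy scores ys at times ts."""
    K = kernel(ts, ts) + noise * np.eye(len(ts))
    k_star = kernel(np.asarray(t_star), ts)
    return k_star @ np.linalg.solve(K, ys)

ts = np.array([0.0, 1.0, 2.0])   # game times
ys = np.array([1.0, 1.2, 0.8])   # pseudo-observed scores
far = [50.0]                     # a long stretch without games

# Exponential alone: far from the data, the mean reverts to the prior mean, zero.
m_exp = posterior_mean(k_exp, ts, ys, far)

# Constant + Exponential: the mean reverts to a learned baseline instead.
m_combo = posterior_mean(lambda a, b: k_const(a, b) + k_exp(a, b), ts, ys, far)

print(m_exp, m_combo)
```

The combined kernel keeps a nonzero mean (roughly the average of the observed scores) arbitrarily far from the data, which is the behaviour you want for a player's baseline.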

Regarding your question

Also, why does the plot_scores function calculate the ms vector with the predict method, rather than just taking the stored values from the scores attribute of an Item?

The scores attribute contains the mean & variance of the score only at times when the player played a game (you can check this by inspecting ts). For plotting, it looks better to show the score time series at regularly spaced time intervals, which don't necessarily match the timestamps of the games a user played; hence the call to predict.

Hope this helps!


markustoivonen commented on June 17, 2024

Hi @lucasmaystre! Thank you for taking the time to give such a thorough answer, I truly appreciate it. :)

I was able to fit the predict curve to the data points, so now the plotted curves make more sense.

I have a few more follow-up questions; hopefully they are not too strenuous.

  1. Here is an image of a player's kickscore (variance omitted).

image

The player has only wins except for one loss around December. What kind of kernel/hyperparameters would, in your opinion, work best in a situation where the predicted kickscore does not decrease until after the last win before the loss? One can see that the score stagnates and decreases even before the loss happens. This sort of behaviour is unlikely in my context, and I would like to address it.

To put it simply, in my context I have two types of players: a) exercises and b) users.

Users complete exercises and either win or lose against them, but users and exercises never compete against their own type (no user vs. user or exercise vs. exercise). The kickscore thus represents a user's skill and an exercise's difficulty.

I am thinking that separate kernels for the two types of players are most likely advisable, considering that the exercises are static and don't change, whereas a user learns (and unlearns after a period of not completing exercises)? Also, the exposure differs between the two types: an exercise faces thousands of users a week, whereas a user completes an exercise roughly once a week. Do you have any tips/thoughts on how one would best tackle this problem?

  2. In Table 3 of your original paper you list the best combination you found for each dataset. Considering that the hyperparameters correspond to certain kernels, how should one randomize the kernel-combination selection? I.e., if Affine + Wiener is the best combo, do you go through all the different possible combinations of kernels (say, limited to kernel1 + kernel2) and sweep the hyperparameters for each combination?

Thank you very much!


lucasmaystre commented on June 17, 2024

Hi @markustoivonen, sorry for the delay.

What kind of kernel/hyperparameters would, in your opinion, work best in a situation where the predicted kickscore does not decrease until after the last win before the loss?

I don't think there is such a kernel, unfortunately. If you know where the score should change (a priori), you can try to use the PiecewiseConstant kernel (which allows for discontinuous jumps).
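For intuition, here is what a piecewise-constant covariance looks like in plain NumPy (the boundary at t=10 is a made-up example, and kickscore's actual PiecewiseConstant kernel may be parameterized differently):

```python
import numpy as np

def k_piecewise_constant(t1, t2, var=1.0, bounds=(10.0,)):
    """Piecewise-constant covariance: scores within the same segment are
    perfectly correlated; scores across a boundary are independent,
    which allows a discontinuous jump at each boundary."""
    seg1 = np.searchsorted(bounds, t1)
    seg2 = np.searchsorted(bounds, t2)
    return var * (seg1[:, None] == seg2[None, :]).astype(float)

ts = np.array([5.0, 9.0, 11.0])
K = k_piecewise_constant(ts, ts)
print(K)
# Scores at t=5 and t=9 are fully correlated; t=11 falls in a new segment.
```

Between boundaries the score is frozen, so it cannot drift downward before a loss; the trade-off is that you must specify the jump locations a priori.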

In Table 3 of your original paper you list the best combination you found for each dataset. Considering that the hyperparameters correspond to certain kernels, how should one randomize the kernel combination selection?

That's a great question, and there is no simple answer. In practice, it's as much an art as a science: you try combinations that intuitively make sense. Here's a paper that attempts to make this process more rigorous: https://arxiv.org/pdf/1302.4922.pdf, but it's still mostly a heuristic search.

In practice, on the datasets I've played with, Constant + Exponential (or Wiener) gives you 95-99% of the performance you get with more "fancy" combinations.
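One way to make the search slightly more systematic is to score each candidate kernel combination by its Gaussian-process log marginal likelihood on the data. This is a generic NumPy sketch of that idea (not kickscore's API; the toy data and hyperparameter grids are arbitrary):

```python
import numpy as np

def k_const(d, var):
    """Constant covariance for a matrix d of pairwise time distances."""
    return var * np.ones_like(d)

def k_exp(d, var, lscale):
    """Exponential covariance for a matrix d of pairwise time distances."""
    return var * np.exp(-d / lscale)

def log_marginal_likelihood(K, y, noise=0.1):
    """GP log evidence: -1/2 y^T Kn^-1 y - 1/2 log|Kn| - n/2 log(2 pi)."""
    Kn = K + noise * np.eye(len(y))
    L = np.linalg.cholesky(Kn)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (-0.5 * y @ alpha
            - np.log(np.diag(L)).sum()
            - 0.5 * len(y) * np.log(2 * np.pi))

ts = np.linspace(0, 5, 20)
rng = np.random.default_rng(0)
y = 0.5 + np.sin(ts) + 0.1 * rng.standard_normal(20)  # toy score series

d = np.abs(ts[:, None] - ts[None, :])
candidates = {
    "exponential": [k_exp(d, v, l) for v in (0.5, 1.0) for l in (0.5, 2.0)],
    "const+exponential": [k_const(d, c) + k_exp(d, v, l)
                          for c in (0.5, 1.0)
                          for v in (0.5, 1.0) for l in (0.5, 2.0)],
}
# Best evidence achieved by each kernel family over its hyperparameter grid.
best = {name: max(log_marginal_likelihood(K, y) for K in Ks)
        for name, Ks in candidates.items()}
print(best)
```

In practice you would replace the Gaussian likelihood with the model's actual predictive performance (e.g. held-out log loss), but the loop structure is the same.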

I am thinking that separate kernels for the two types of players are most likely advisable, considering that the exercises are static and don't change, whereas a user learns (and unlearns after a period of not completing exercises)? Also, the exposure differs between the two types: an exercise faces thousands of users a week, whereas a user completes an exercise roughly once a week. Do you have any tips/thoughts on how one would best tackle this problem?

Yes, agreed: exercises don't change, so a static kernel makes sense. For players, I could imagine it would make sense to assume that skill is monotonic (i.e., it can only increase over time). Unfortunately, that's not implemented in kickscore at the moment (but http://proceedings.mlr.press/v9/riihimaki10a/riihimaki10a.pdf could provide a blueprint).

Overall, I think a simple Wiener kernel (which has a constant offset built-in through var_t0) would be a good starting point for the players.
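For reference, a Wiener (Brownian-motion) covariance with an initial variance offset can be sketched in NumPy as follows (var_t0 is the parameter mentioned above; kickscore's exact signature may differ):

```python
import numpy as np

def k_wiener(t1, t2, var=1.0, var_t0=1.0, t0=0.0):
    """Wiener (Brownian-motion) covariance with an initial offset:
    k(t, t') = var_t0 + var * min(t - t0, t' - t0)."""
    return var_t0 + var * np.minimum(t1[:, None] - t0, t2[None, :] - t0)

ts = np.array([0.0, 1.0, 4.0])
K = k_wiener(ts, ts)
print(np.diag(K))  # prior variance grows linearly over time: [1. 2. 5.]
```

The var_t0 term plays the role of the constant offset, while the growing variance lets a player's skill drift arbitrarily far from where it started.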


amirbachar commented on June 17, 2024

The model assumes an unbiased random drift of abilities, so your assumption about how ability evolves does not fall into that category.
A simple model that would fit your criteria is actually plain Elo, or, if you want a more sophisticated one, Glicko-2 or TrueSkill are good candidates. The graph would not be smooth (you only get point estimates at the times the exercises are completed), so keep that in mind.
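For completeness, the basic Elo update mentioned above looks like this (the K-factor of 32 is a conventional default, purely illustrative):

```python
def elo_update(r_winner, r_loser, k=32.0):
    """One Elo update: the winner gains exactly what the loser drops.
    expected_win is the winner's pre-game win probability."""
    expected_win = 1.0 / (1.0 + 10.0 ** ((r_loser - r_winner) / 400.0))
    delta = k * (1.0 - expected_win)
    return r_winner + delta, r_loser - delta

# A user rated 1200 completes an exercise rated 1400:
user, exercise = elo_update(1200.0, 1400.0)
print(user, exercise)  # the upset win transfers ~24 points
```

Note that the update only touches the two ratings involved, which is why (unlike kickscore) past estimates are never revised in light of later games.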

