
Comments (4)

Ledenel commented on June 17, 2024

A naive implementation:
count all games that pass the above filter as filtered_game,
then count the games in this situation where the player actually got rank 1, 2, 3, or 4, as filtered_rank_1, ...

Then output filtered_rank_1 / filtered_game as the top rate; the remaining ranks work the same way.
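A minimal sketch of that counting scheme (the record layout and names such as `final_rank` are assumptions for illustration, not the actual auto-white-reimu code):

```python
from collections import Counter

def rank_frequencies(games, condition):
    """Estimate P(final rank = k) as filtered_rank_k / filtered_game.

    `games` is a list of dicts with at least a 'final_rank' key (1-4);
    `condition` is a predicate over a game record.
    """
    filtered = [g for g in games if condition(g)]
    filtered_game = len(filtered)
    counts = Counter(g["final_rank"] for g in filtered)
    # filtered_rank_k / filtered_game for k = 1..4
    return filtered_game, {k: counts.get(k, 0) / filtered_game for k in (1, 2, 3, 4)}

# toy records: the filter keeps only the dealer's games
games = [
    {"is_oya": True, "final_rank": 1},
    {"is_oya": True, "final_rank": 2},
    {"is_oya": True, "final_rank": 1},
    {"is_oya": False, "final_rank": 4},
]
n, freqs = rank_frequencies(games, lambda g: g["is_oya"])
# n == 3 filtered games, freqs[1] == 2/3 is the top rate
```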

from auto-white-reimu.

Ledenel commented on June 17, 2024

To measure how confident the filtered_rank_k / filtered_game frequency estimate is, the total count filtered_game also needs to be shown.

When filtered_game is too small, the estimate is untrustworthy, so we should loosen the constraints to take more games into account, which means a fallback.

Here are some possible solutions:

  • try deleting filters one at a time (seat order, then is_oya), in that order.
  • add binning for the current score, e.g. Bayesian binning, or binning by possible score changes.
  • use Decision Trees or Random Forests to find proper score/game-round cut points (given these situations, predicting player rank 1, 2, 3, or 4 is a 4-class classification problem).
  • more machine learning algorithms aimed at multi-class classification problems

A data structure supporting fast fallback is also needed.
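One simple shape for the fallback: an ordered list of predicates from strictest to loosest, tried in turn until enough games match (all names and the threshold here are illustrative, not from the actual code):

```python
def estimate_with_fallback(games, filters, min_games=100):
    """Try predicates from strictest to loosest; fall back when the
    filtered sample is too small to trust the frequency estimate."""
    for condition in filters:
        matched = [g for g in games if condition(g)]
        if len(matched) >= min_games:
            return condition, matched
    # last resort: the loosest filter, even if still under the threshold
    return filters[-1], [g for g in games if filters[-1](g)]

# toy example: the strict filter matches nothing, so we fall back
games = list(range(1, 6))
filters = [lambda g: g > 10, lambda g: g > 0]
cond, matched = estimate_with_fallback(games, filters, min_games=3)
```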


canuse commented on June 17, 2024

Using SVM as a baseline classifier.

As is well known, SVM (SVC) is a widely used classifier that is robust and interpretable, so it could serve as a baseline for this problem.

I've trained 8 models with different train_set_size values (50000, 100000), different multi-class strategies (OneVsOne and OneVsRest), and different kernels (rbf, linear), and tested them on five test sets of 300000 records each. All sets are sampled randomly and without overlap from the tenhou records, which contain 57755880 items. 19 features are used: ['player_pos', 'game_round', 'game_sub_round', 'is_oya', 'self_score', 'score0', 'score1', 'score2', 'score3', 'your_rate', 'rate1', 'rate2', 'rate3', 'rate4', 'your_dan', 'dan1', 'dan2', 'dan3', 'dan4'], and the target is the final position (1, 2, 3, or 4).

Results are listed below: accurate means predict == truth, delta means abs(predict - truth) == 1, and wrong covers the remaining cases. The RBF kernel is both faster and more precise than the linear kernel, with a higher accurate_rate and a lower wrong_rate, though the accuracy of every configuration is below 45%. A larger train set also slightly improves performance. Considering time consumption and performance, the RBF kernel with OvO and a train_set_size of 100000 could be a reasonable baseline.

| train_set_size | kernel | multi_classifier | calculate_time | accurate_rate (avg, %) | delta_rate (avg, %) | wrong_rate (avg, %) |
| --- | --- | --- | --- | --- | --- | --- |
| 50000 | linear | OneVsOne | ~1h | 43.702 | 38.152 | 18.2 |
| 50000 | linear | OneVsRest | ~1h | 43.674 | 37.768 | 18.556 |
| 50000 | rbf | OneVsOne | ~30min | 43.532 | 41.654 | 14.812 |
| 50000 | rbf | OneVsRest | ~20min | 43.532 | 41.654 | 14.812 |
| 100000 | linear | OneVsOne | ~2.5h | 44.024 | 37.82 | 18.156 |
| 100000 | linear | OneVsRest | ~2.5h | 44.024 | 37.82 | 18.156 |
| 100000 | rbf | OneVsOne | ~1h | 44.028 | 41.086 | 14.886 |
| 100000 | rbf | OneVsRest | ~1h | 44.028 | 41.086 | 14.886 |
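The training setup above can be sketched with scikit-learn. Since the comment doesn't show the data-loading code, the feature and label arrays here are random stand-ins for the 19-feature tenhou records:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier

# Random stand-in for the real records: 19 features, target rank 1-4.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 19))
y = rng.integers(1, 5, size=400)

for name, clf in [
    ("OneVsOne/rbf", OneVsOneClassifier(SVC(kernel="rbf"))),
    ("OneVsRest/linear", OneVsRestClassifier(SVC(kernel="linear"))),
]:
    clf.fit(X, y)
    pred = clf.predict(X)
    # same metrics as the table: exact hit, off-by-one rank, and the rest
    accurate = np.mean(pred == y)
    delta = np.mean(np.abs(pred - y) == 1)
    wrong = 1.0 - accurate - delta
    print(f"{name}: accurate={accurate:.3f} delta={delta:.3f} wrong={wrong:.3f}")
```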

And the raw data:

features used:
'player_pos', 'game_round', 'game_sub_round', 'is_oya','self_score', 'score0', 'score1', 'score2', 'score3', 'your_rate','rate1', 'rate2', 'rate3', 'rate4', 'your_dan', 'dan1', 'dan2', 'dan3','dan4'

ovr,linear,train_set_size=50000,test_set_size=30000
Train set: accurate 43.36%, delta=1 37.63%, wrong 19.01%
Test set1: accurate 43.64%, delta=1 37.80%, wrong 18.56%
Test set2: accurate 43.81%, delta=1 37.72%, wrong 18.47%
Test set3: accurate 43.66%, delta=1 37.84%, wrong 18.49%
Test set4: accurate 43.70%, delta=1 37.75%, wrong 18.55%
Test set5: accurate 43.56%, delta=1 37.73%, wrong 18.71%

ovr,rbf,train_set_size=50000,test_set_size=30000
Train set: accurate 45.22%, delta=1 40.85%, wrong 13.93%
Test set1: accurate 43.56%, delta=1 41.66%, wrong 14.78%
Test set2: accurate 43.57%, delta=1 41.66%, wrong 14.77%
Test set3: accurate 43.61%, delta=1 41.61%, wrong 14.78%
Test set4: accurate 43.50%, delta=1 41.64%, wrong 14.85%
Test set5: accurate 43.42%, delta=1 41.70%, wrong 14.88%

ovo,linear,train_set_size=50000,test_set_size=30000
Train set: accurate 45.36%, delta=1 37.45%, wrong 17.2%
Test set1: accurate 43.65%, delta=1 38.14%, wrong 18.21%
Test set2: accurate 43.76%, delta=1 38.09%, wrong 18.15%
Test set3: accurate 43.66%, delta=1 38.14%, wrong 18.21%
Test set4: accurate 43.78%, delta=1 38.04%, wrong 18.18%
Test set5: accurate 43.66%, delta=1 38.09%, wrong 18.25%

ovo,rbf,train_set_size=50000,test_set_size=30000
Train set: accurate 45.23%, delta=1 40.83%, wrong 13.94%
Test set1: accurate 43.56%, delta=1 41.66%, wrong 14.78%
Test set2: accurate 43.57%, delta=1 41.66%, wrong 14.77%
Test set3: accurate 43.60%, delta=1 41.61%, wrong 14.78%
Test set4: accurate 43.50%, delta=1 41.65%, wrong 14.85%
Test set5: accurate 43.43%, delta=1 41.69%, wrong 14.88%


100000_ovo_svm_linear
Test set1: accurate 43.61%, delta=1 38.02%, wrong 18.37%
Test set2: accurate 43.79%, delta=1 37.90%, wrong 18.31%
Test set3: accurate 43.64%, delta=1 38.01%, wrong 18.35%
Test set4: accurate 43.71%, delta=1 37.91%, wrong 18.38%
Test set5: accurate 43.66%, delta=1 37.87%, wrong 18.47%
Train set: accurate 45.32%, delta=1 37.41%, wrong 17.27%

100000_ovr_svm_linear
Test set1: accurate 43.61%, delta=1 38.02%, wrong 18.37%
Test set2: accurate 43.79%, delta=1 37.9%, wrong 18.31%
Test set3: accurate 43.64%, delta=1 38.01%, wrong 18.35%
Test set4: accurate 43.71%, delta=1 37.91%, wrong 18.38%
Test set5: accurate 43.66%, delta=1 37.87%, wrong 18.47%
Train set: accurate 45.32%, delta=1 37.41%, wrong 17.27%

100000_ovo_svm_rbf
Test set1: accurate 43.76%, delta=1 41.25%, wrong 14.99%
Test set2: accurate 43.7%, delta=1 41.33%, wrong 14.97%
Test set3: accurate 43.78%, delta=1 41.21%, wrong 15.01%
Test set4: accurate 43.67%, delta=1 41.25%, wrong 15.08%
Test set5: accurate 43.61%, delta=1 41.27%, wrong 15.12%
Train set: accurate 45.38%, delta=1 40.37%, wrong 14.25%

100000_ovr_svm_rbf
Test set1: accurate 43.76%, delta=1 41.25%, wrong 14.99%
Test set2: accurate 43.7%, delta=1 41.33%, wrong 14.97%
Test set3: accurate 43.78%, delta=1 41.21%, wrong 15.01%
Test set4: accurate 43.67%, delta=1 41.25%, wrong 15.08%
Test set5: accurate 43.61%, delta=1 41.27%, wrong 15.12%
Train set: accurate 45.38%, delta=1 40.37%, wrong 14.25%

Since SVM is not particularly suited to multi-class classification and cannot give probabilities directly, I then tried RVM, which uses Bayesian inference and can address both problems. However, RVM's training time and resource requirements are much larger, so I was only able to train one model with a train_set_size of 5000, and its accuracy is only 25%.


Ledenel commented on June 17, 2024

In my opinion, decision tree-based algorithms may be the most basic reliable solution in this situation. They give precise frequencies and confidence from counts on the leaves, and support multi-class classification naturally.
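That leaf-count behavior is directly visible in scikit-learn: `predict_proba` of a `DecisionTreeClassifier` is exactly the class frequency at the reached leaf, and the leaf's sample count shows how much data backs the estimate (sketch on random stand-in data):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Random stand-in data; the real features would be the tenhou record fields.
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 5))
y = rng.integers(1, 5, size=300)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

leaf = tree.apply(X[:1])[0]                 # leaf index reached by one sample
support = tree.tree_.n_node_samples[leaf]   # games behind the estimate
proba = tree.predict_proba(X[:1])[0]        # frequencies for ranks 1..4
```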

Considering only the wrong rate, SVM-based algorithms are also reasonable. For probability inference from a trained model, we could use a direct workaround: take the raw output of the decision function (rather than the signed output), assume the outputs form a Gaussian distribution (using the sample variance S as an approximation), normalize to the standard Gaussian distribution, and then use the cumulative distribution function to obtain a probability of being the negative class.
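A sketch of that workaround in pure Python (whether the Gaussian assumption actually holds for real decision scores is untested here, so this is uncalibrated):

```python
import math

def decision_to_probability(scores):
    """Standardize raw decision-function outputs with the sample mean and
    variance S^2, then read the normal CDF as P(negative class)."""
    n = len(scores)
    mean = sum(scores) / n
    var = sum((s - mean) ** 2 for s in scores) / (n - 1)  # sample variance S^2
    std = math.sqrt(var)

    def std_normal_cdf(z):
        return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

    # a high positive score means a confident positive class,
    # so P(negative) is the upper-tail probability of the standardized score
    return [1.0 - std_normal_cdf((s - mean) / std) for s in scores]
```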

By the way, since player rank (1, 2, 3, 4) has a strong ordinal meaning, it's also possible to use regression methods, which may be more directly interpretable.
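A regression sketch of that idea: fit the ordinal rank directly and round the prediction back onto 1-4 (plain linear regression on random stand-in data; a proper ordinal regression model would be a refinement):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Random stand-in data; the real features would be the 19 listed above.
rng = np.random.default_rng(2)
X = rng.normal(size=(200, 5))
y = rng.integers(1, 5, size=200).astype(float)  # ordinal final rank 1-4

reg = LinearRegression().fit(X, y)
# Round the continuous prediction back onto the rank scale.
pred = np.clip(np.rint(reg.predict(X)), 1, 4)
```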

