icepool's Introduction

Icepool

Python dice probability package.

GitHub repository.

PyPi page.

License: MIT.

Try coding in your browser using Icecup, a simple frontend for scripting and graphing similar to AnyDice, SnakeEyes, and Troll. You can find a series of tutorials here.

Features

  • Pure Python implementation using only the Standard Library. Run it almost anywhere Python runs: program locally, share Jupyter notebooks, or build your own client-side web apps using Pyodide.
  • Dice support all standard operators (+, -, <, >, etc.) as well as an extensive library of functions (rerolling, exploding, etc.).
  • Efficient dice pool algorithm can solve keep-highest, finding sets and/or straights, RISK-like mechanics, and more in milliseconds, even for large pools.
  • Exact fractional probabilities using Python ints.
  • Some support for decks (aka sampling without replacement).
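As a small taste, a minimal sketch using only constructs that appear elsewhere in this README:

import icepool

# Standard operators compose dice; @ rolls the right side and sums.
two_d6 = icepool.d6 + icepool.d6
three_d6 = 3 @ icepool.d6

# Comparisons produce True/False dice with exact fractional probabilities.
print(icepool.d20 >= 11)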

Installing

pip install icepool

The source is pure Python, so including a direct copy in your project can work as well.

Contact

Feel free to open a discussion or issue on GitHub. You can also find me on Reddit or Twitter.

API documentation

pdoc on GitHub.

JupyterLite notebooks

See this JupyterLite distribution for a collection of interactive, editable examples. These include mechanics from published games, StackExchange, Reddit, and academic papers.

JupyterLite REPL.

Tutorial notebooks

In particular, here is a series of tutorial notebooks.

Web applications

These are all client-side, powered by Pyodide. Perhaps you will find inspiration for your own application.

Paper on algorithm

Presented at Artificial Intelligence and Interactive Digital Entertainment (AIIDE) 2022.

In the official proceedings.

Preprint in this repository.

BibTeX:

@inproceedings{liu2022icepool,
    title={Icepool: Efficient Computation of Dice Pool Probabilities},
    author={Albert Julius Liu},
    booktitle={Eighteenth AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment},
    volume={18},
    number={1},
    pages={258-265},
    year={2022},
    month={Oct.},
    eventdate={2022-10-24/2022-10-28},
    venue={Pomona, California},
    url={https://ojs.aaai.org/index.php/AIIDE/article/view/21971},
    doi={10.1609/aiide.v18i1.21971}
}

Versioning

Frankly, backward compatibility is not a high priority. If you need specific behavior, I recommend version pinning. I believe the Die interface to be reasonably stable, but there's a good chance that Multiset* will see more changes in the future. Typing is especially unstable.
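For example, pinning to a specific release (the version number is purely illustrative; 0.13.0 is one mentioned in the issues below):

pip install icepool==0.13.0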

Inline examples

Summing ability scores

What's the chance that the sum of player A's six ability scores is greater than or equal to the sum of player B's six ability scores? (Using 4d6 keep highest 3 for each ability score.)

import icepool

single_ability = icepool.d6.highest(4, 3)

# The @ operator means: compute the left side, and then roll the right side that many times and sum.
print(6 @ single_ability >= 6 @ single_ability)

Die with denominator 22452257707354557240087211123792674816

Outcome Probability
False 47.984490%
True 52.015510%

All matching sets

Blog post.

Question on Reddit.

Another question on Reddit.

Question on StackExchange.

Roll a bunch of dice, and find all matching sets (pairs, triples, etc.).

We could manually enumerate every case as per the blog post. However, this is prone to error. Fortunately, Icepool can do this simply and reasonably efficiently with no explicit combinatorics on the user's part.

import icepool

class AllMatchingSets(icepool.MultisetEvaluator):
    def next_state(self, state, outcome, count):
        """next_state computes a "running total"
        given one outcome at a time and how many dice rolled that outcome.
        """
        if state is None:
            state = ()
        # If at least a pair, append the size of the matching set.
        if count >= 2:
            state += (count,)
        # Prioritize larger sets.
        return tuple(sorted(state, reverse=True))

all_matching_sets = AllMatchingSets()

# Evaluate on 10d10.
print(all_matching_sets(icepool.d10.pool(10)))

Die with denominator 10000000000

Outcome Quantity Probability
() 3628800 0.036288%
(2,) 163296000 1.632960%
(2, 2) 1143072000 11.430720%
(2, 2, 2) 1905120000 19.051200%
(2, 2, 2, 2) 714420000 7.144200%
(2, 2, 2, 2, 2) 28576800 0.285768%
(3,) 217728000 2.177280%
(3, 2) 1524096000 15.240960%
(3, 2, 2) 1905120000 19.051200%
(3, 2, 2, 2) 381024000 3.810240%
(3, 3) 317520000 3.175200%
(3, 3, 2) 381024000 3.810240%
(3, 3, 2, 2) 31752000 0.317520%
(3, 3, 3) 14112000 0.141120%
(4,) 127008000 1.270080%
(4, 2) 476280000 4.762800%
(4, 2, 2) 285768000 2.857680%
(4, 2, 2, 2) 15876000 0.158760%
(4, 3) 127008000 1.270080%
(4, 3, 2) 63504000 0.635040%
(4, 3, 3) 1512000 0.015120%
(4, 4) 7938000 0.079380%
(4, 4, 2) 1134000 0.011340%
(5,) 38102400 0.381024%
(5, 2) 76204800 0.762048%
(5, 2, 2) 19051200 0.190512%
(5, 3) 12700800 0.127008%
(5, 3, 2) 1814400 0.018144%
(5, 4) 907200 0.009072%
(5, 5) 11340 0.000113%
(6,) 6350400 0.063504%
(6, 2) 6350400 0.063504%
(6, 2, 2) 453600 0.004536%
(6, 3) 604800 0.006048%
(6, 4) 18900 0.000189%
(7,) 604800 0.006048%
(7, 2) 259200 0.002592%
(7, 3) 10800 0.000108%
(8,) 32400 0.000324%
(8, 2) 4050 0.000041%
(9,) 900 0.000009%
(10,) 10 0.000000%

Similar projects

In roughly chronological order:

Troll by Torben Ægidius Mogensen

http://hjemmesider.diku.dk/~torbenm/Troll/

The oldest general-purpose dice probability calculator I know of. It has an accompanying peer-reviewed paper.

AnyDice by Jasper Flick

https://anydice.com/

Probably the most popular dice probability calculator in existence, and with good reason---its accessibility and shareability remain unparalleled. I still use it often for prototyping and as a second opinion.

SnakeEyes by Noé Falzon

https://snake-eyes.io/

SnakeEyes demonstrated the viability of browser-based, client-side dice calculation, and introduced me to Chart.js.

dice_roll.py by Ilmari Karonen

https://gist.github.com/vyznev/8f5e62c91ce4d8ca7841974c87271e2f

This demonstrated the trick of iterating "vertically" over the outcomes of dice in a dice pool, rather than "horizontally" through the dice---one of the key insights behind a much faster dice pool algorithm.

dyce by Matt Bogosian

https://github.com/posita/dyce

Another Python dice probability package. I've benefited greatly from exchanging our experiences.


icepool's Issues

Think more about die expansion when constructing dice

We usually want dice to be expanded in the end, since this is as explicit as possible. However, there are potential arguments for not always expanding dice:

  • A user might really want to treat the die itself as a potential outcome.
  • Delaying die expansion could be more efficient in some cases?

Counterarguments:

  • This would require making dice sortable, which requires the rest of the comparators to produce a truth value, which is a potential source of confusion I'd like to keep to a minimum. Or we could drop the requirement that outcomes be sortable, but I think this is too valuable a convention to give up.
  • Expanded vs. non-expanded is one more thing for the user to keep track of and specify.
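A hypothetical sketch of the distinction (the constructor behavior here is illustrative, not a committed API):

import icepool

# Expanded (the usual choice): the d4 flattens into its outcomes,
# giving a mixture over 1-4 and 10.
mixed = icepool.Die([icepool.d4, 10])

# Not expanded: the d4 object itself would be an outcome, which would
# require dice to be sortable against outcomes like 10 -- and thus require
# the comparators to produce plain truth values.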

Consider bundling `icepool` with the JupyterLite distribution

This would save the annoying

import piplite
await piplite.install("icepool")

at the beginning of each notebook.

Potential issues:

  • According to the example JupyterLite config, a static version is fetched, which would require manual updating.
    • Unless the URL was constructed dynamically?
    • Or perhaps we could include a static build?
  • The documentation makes it sound like this wouldn't actually obviate the lines above? Also, the example notebooks still use piplite to import modules.

So it looks like the main reason to do this is if we want the distribution to use icepool built from HEAD rather than the latest PyPi version.

`EvalPool` outcome alignment

In 0.13.0, I dropped the guarantee that next_state sees outcomes in consistent and consecutive order. After trying out 0.13.0, I've found that losing the consecutiveness guarantee is actually pretty annoying in some cases. We want to allow single outcomes, but this currently doesn't allow for consecutiveness; e.g. with outcomes [1, 6], evaluation would skip straight from 1 to 6.

Option: attach consecutiveness to pools

There are several problems with this:

  • Increasing the size of the pool definition is likely to be costly.
  • Fundamentally it's the evaluation that cares about consecutiveness or not.

Option: Consecutiveness by EvalPool editing pools

This would have to entail using align. However, the problem is that we then get a bunch of extra 0-weight paths.

Option: Dice binding

We had this at one point.

Pros:

  • Potentially simpler syntax at point-of-use.

Cons:

  • Introduces another call interface with a different legal set of arguments. This is especially unwelcome now that PoolRoll has been superseded by just making a set of one-outcome dice.
  • Internally it would probably end up being a form of alignment anyways.

Option: Extra alignment method for EvalPool

Seems the best option so far.

Pros:

  • EvalPool seems to be the right owner, and method override is an established pattern.
  • Flexible.

Cons:

  • Another abstract method for the API.
  • The alignment would probably have to be added to the cache key.

Questions:

  • What to name this? alignment? outcome_alignment?

Let's just go with alignment.

  • Should we implement a default alignment?

There seems to be an unavoidable tradeoff here.

  • If we make the default consecutive for ints, there may be a performance surprise if someone decides to insert an outcome of 1 million. Also, I think checking ints and computing a range is kind of complicated for a default behavior.
  • On the other hand, not having a consecutive default may be surprising when some outcomes are skipped.
    • If we go with this, we would provide consecutive alignment as a built-in option, probably something like alignment = EvalPool.range (sketched below).
    • I went with this one.
  • I thought about putting a threshold for the default, but this seems even worse for the principle of least surprise.
  • What happens if pools have outcomes not in alignment? Silently take the union? Warning? Error?

This shouldn't be a problem in the consecutive int case. I'm inclined to leave it up to the subclass implementer.
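A sketch of the chosen shape (the spelling follows this issue's proposal and is not a stable API):

import icepool

class ConsecutiveEval(icepool.EvalPool):
    # Opt in to seeing consecutive int outcomes, per this issue's proposal.
    alignment = icepool.EvalPool.range

    def next_state(self, state, outcome, count):
        ...  # now sees 2, 3, 4, 5 even if only 1 and 6 appear in the pools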

  • Merge direction and alignment?

This would entail iterating through the supplied alignment in the supplied order. Unless we decide to allow non-monotonic ordering, probably not. I think this is unlikely to happen, since it's quite a lot more complexity for not much benefit, so I'm going to say no.

  • Remove *pools argument from direction?

This would probably allow dropping direction from the cache key: if a direction is given, it's always the same; if no direction is given, it shouldn't matter.

Though I don't expect the performance difference to be more than marginal.

Decision: I'm going to keep it since final_outcome and alignment are also supplied with *pools.

Consider dual-purpose equality

We could attach an explicit truth value to dice and have == and != set the truth value of the result based on whether the dice are identical. This would allow dice to be hashed.
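A sketch of the proposed behavior:

import icepool

result = icepool.d6 == icepool.d6
# As a truth value: truthy, because both sides are the identical die.
# As a die: the distribution of two independent d6 rolling equal,
# i.e. True 6/36 of the time (see the comparator discussion below).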

Try to reclaim performance

While 0.13.0 has a simpler interface than pre-0.13 versions, the performance is much worse in some cases. I need to figure out what's going on and try to reclaim some of that performance.

  • Caching?
  • More expensive object startup?
  • Core algorithm inefficiencies?

Constructing VectorDie with die args

Problem

Currently e.g.

icepool.Die((icepool.d6, False))

will fail, because Icepool attempts to create a single outcome, which fails because the die is not hashable.

Options

Force the user to create a die

Something like

icepool.Die(cartesian_product(icepool.d6, False))

However, this is a bit verbose and hard to remember.

Automatically expand dice in tuples

In this option,

icepool.Die((icepool.d6, False))

would do the same as above.

Potential shortcomings: It doesn't seem feasible to do this for all possible user-supplied containers. We could try only doing this for tuple; while it does make a special case, we already treat tuples specially in that they become VectorDie by default.

Comparator handling

The story

The original problem:

  • == is needed to store dice in hashed containers.
  • However, we also want the == syntax for describing the chance that two dice roll equal to each other.

At first I made an extra equals() method to do the latter. However, the pull of the simple == was strong, so I decided to make the == (and !=) operators dual-purpose: the result would act as both a die, and as a truth value. This seemed to work OK (apart from a little footgun potential).

However, since I implemented == using Die.binary_op, this is an $n^2$ algorithm, which could be pretty expensive when making lots of lookups into a hashed container. My first solution to this was to lazily evaluate the die part of the result in this case; if the caller is only interested in the truth value, then the die part never has to be evaluated.

There are still some decisions to make:

Decisions

  • Take advantage of total ordering?

Currently outcomes are expected to be totally ordered, and the outcomes of dice are stored in sorted order. We could take advantage of this to perform comparators in a linear number of operations.

  • What to do with tuples?

This may get tricky with tuples. Currently the comparator is done element-wise. We could fall back to the $n^2$ algorithm in this case; I don't know that I would want to spend much time optimizing it, since it is pretty niche.

Decision: Keep as-is.

  • Do we want to implement truth values for <, <=, >=, >?

Alternatively, we could declare that comparators act on tuples as a whole rather than elementwise, though this would be an unfortunate inconsistency with the rest of the operators and probably even less useful overall. It would also allow dice to be sorted with each other, though I'm not sure how valuable that is either.

Decision: I don't like having them operate differently from other operators.

  • Do we keep the lazy evaluation?

Decision: Yes, due to the possibility of an expensive evaluation.

  • If we keep lazy evaluation, should we enforce that a particular instance is only ever used as a truth value NAND as a die? As an error or a warning?

On one hand, someone using both on the same die object is probably making a mistake, and this allows for absolutely disgusting constructs like

d6 == d8 or d6 == d6  # a truthy 6 / 36 coin

On the other hand, in the case where the user actually does want to reuse both, there's currently no way for the user to make a known fresh copy, which may be a hard ask anyway. This "Schrodinger's type" also strikes me as less intuitive than even the dual typing.

Decision: Allow both. I decided against warning, since both may be used in the case of storing a die in a hashed and/or sorted container. Actually, the container makes a separate comparison, so warning seems fine.

Revisit the vector issue

Option: use a vector outcome class

Potential issues

  • Comparators would either have to use non-vectorized tuple comparisons, or store a separate truth value similar to dice.
  • Binary operator implicit conversion: should we try to match types, or make the user be explicit?
  • We would need some way to handle die expansion in constructors. Use something like an expand_die_args method?

Whether and how to default tuples to vector behavior

Arguments for:

  • Vector behavior is more useful.
  • In fact, tuple behavior is often actively bad. For example, 20 @ Die((0,), (1,)) produces the 2^20 binary strings of length 20.

Arguments against:

  • Follows Python less closely.
  • What happens when we start nesting?

Option: treat (most) operators on tuples as vector

This works similarly to how the old system handled automatic ndim, though the old system only treated the top level as vector.

The actual tuple operators would be shadowed, but could still be used via sub.

[] would be excluded. @ has special meaning.
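A hypothetical sketch of this option (the constructor spelling is illustrative):

import icepool

a = icepool.Die((1, 2))   # a die whose single outcome is the tuple (1, 2)
b = icepool.Die((10, 20))
# Vectorized +: applied elementwise, giving the outcome (11, 22) rather
# than the shadowed tuple concatenation (1, 2, 10, 20).
print(a + b)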

TODO

  • Add concatenation method?

Rework pool construction

Arguments would be a sequence of dice, or things convertible to dice. If truncation is available, we would produce the efficient class as the current Pool; otherwise, we would produce a less-efficient class that treats each die separately. The current PoolRoll would just be a collection of single-outcome dice.

  • Debug descending direction.
  • Re-implement direction selection.
  • Re-optimize popping.
  • Finalize a class name.
  • Try to merge the two directions of code.

I considered making pool objects hidden, but this doesn't play well with count_dice.

`d` handling of right-side argument

Currently we use the same interpretation as AnyDice, where 1d(1d10) is the same as 1d10. However, other interpretations of the d operator exist:

  • Foundry VTT interprets this as rolling a die that has between 1 and 10 faces, i.e. 1 is the most likely result and 10 is the least likely.
  • Roll20 parentheses don't seem to interact with their d operator. However, using the "inlining" double brackets has the same effect as Foundry VTT, e.g. [[ 1d[[1d10]] ]]
  • AFAICT Fantasy Grounds parentheses don't work with their d operator. Extensions can do anything but I didn't see any consensus in a brief look.

I decided to:

  • Make the free function d() an alias for standard().
  • Remove the die method version of d().
  • Allow @ to cast the right side to a Die.

This seems more consistent in the Python context, even if it doesn't match AnyDice.
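A sketch of the decided spelling (using the alias described above):

import icepool

icepool.d(10)      # the free function: a standard d10, alias for standard(10)
3 @ icepool.d(10)  # @ rolls the right side three times and sums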

TODO:

  • Allow a random number of faces in standard()? Decided against: too niche compared to the potential for masking bugs.

Consider dumping VectorDie

This whole ndim business has led to many thorny issues (#16, #17, #18) and complicates the API in general. Any desired functionality could be provided using a custom outcome class.

0.10 bugs from notebooks

Needs verification:

  • all_matching_sets: Needs verification.
  • risk_attrition: Needs verification.
  • odds_of_rolling_all_the_time: Needs vector ops or major rewrite.
  • isaksen2016: Would be better with vector ops.

PoolRoll skipping ints is annoying

Problem

There are some competing concerns when designing the signature of PoolRoll. If a straight-finding EvalPool relies on consecutive outcome order, then we'll get false positives. For example,

PoolRoll(1, 2, 4, 5)

will appear to be a straight of length 4 because only outcomes actually appearing in the roll are evaluated, i.e. the evaluation skips from 2 to 4.
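For concreteness, a sketch of a straight-finder that would be fooled, written in the style of the AllMatchingSets example above (using the newer MultisetEvaluator spelling; the state here is an illustrative (best, current_run) pair):

import icepool

class NaiveLargestStraight(icepool.MultisetEvaluator):
    def next_state(self, state, outcome, count):
        best, run = state or (0, 0)
        # The bug under discussion: this assumes each outcome seen is exactly
        # one more than the previous one, so a skipped outcome (like 3 above)
        # never resets the run.
        run = run + 1 if count > 0 else 0
        return max(best, run), run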

Options

First attempt: die parameter

My first attempt was to add a die parameter to specify the outcome set. For example,

PoolRoll(1, 2, 4, 5, die=icepool.d6)

would fill in the missing outcomes of 3 and 6 with zero counts. Unfortunately, this complicates the syntax for the common case, and if the user forgets to do this, they may experience an unpleasant surprise.

Treat ints specially

We could special-case ints and iterate them one by one. However, this has a few problems: having a special case is ugly, and if e.g. the two outcomes are 1 and a million, this will result in a large amount of extra work. So this seems to be a no-go.

Rely on EvalPool.bind_dice()

This would remove the requirement to specify die at every call. However, this is still an extra step, and doesn't work well with right-truncation.

Drop the guarantee that outcomes will be seen in consecutive order

This would e.g. force straight-finding to explicitly store the last seen positive-count outcome. There are some losses here:

  • next_state may be more complicated.
  • The state space may increase somewhat if not careful.

It's a bit of a loss, but perhaps not a big one:

  • Multiple pools already broke the guarantee from the perspective of any single pool.
  • sub() (e.g. replacing 1 with 0 as in Cortex Prime) also results in non-consecutive outcomes.
  • "Consecutive" has less meaning for non-integer outcomes.
  • Removing this guarantee may encourage clearer code.

Weaker guarantees that could be kept

  • Outcomes will be seen in monotonic order. Probably keep this one.
  • The same sequence of outcomes will be seen by all paths. Dropping this could improve performance in some cases, but I anticipate such gains to be inconsequential.

Try making `Counts` conform to standard container interfaces

Annoyingly for this case, ItemsView and KeysView are Sets, with set semantics for comparators, which means they are only partially ordered. Dice and pools are not totally ordered, but pool wants ordering for purposes of cache keys, unless we store the cache key itself as a hashable set rather than a sorted tuple.
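For reference, the set semantics in question, in plain Python:

a = {1: 0, 2: 0}.keys()
b = {2: 0, 1: 0}.keys()
print(a == b)  # True: views compare as sets, ignoring order
print(a < b)   # False: < is a proper-subset test, so views are only partially ordered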

Update notebooks for 0.13

  • NCO
  • L5R widget performance: somehow making 100x more calls than running locally???
  • Isaksen2016 performance: reasonable performance after favoring cached direction more. #46
  • Redo Cortex Prime
  • Debug ContainsSubset
  • Fix triangleish. Removed for now pending resolution to #12.

Consider renaming max_outcomes (and min_outcomes)

outcomes vs. truncate

In favor of outcomes:

  • Intuitive in that one could imagine that the dice in the pool each have a max_outcome.
  • Easy to remember.

In favor of truncate:

  • More specific.
  • Less likely to be confused with the singular max_outcome.

The other word

  • max: The original.
  • top: Short, but I'd rather not introduce another synonym for "max".
  • right: I don't like this as another synonym for max, and it's already used for median and ppf with a different meaning. The one upside is that it seems to be the most common word used with truncation.
  • hi: "High" is already used to refer to dice rather than outcomes.
  • upper:
  • above:

Additional functools/itertools-like functionality

  • reduce
  • accumulate
  • Option to unpack tuples. Call it star or unpack? Candidates: reroll, reroll_until, sub, explode.
    • Decided to go with star, since there is a minor potential ambiguity between * and **, though it seems unlikely we'll use the latter. (See the sketch after this list.)
  • Option to apply functions elementwise.
    • Decided not to do this for now; operators seem to cover foreseeable use cases.
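A hypothetical sketch of the star option (the argument spelling is illustrative):

import icepool

# Unpack each tuple outcome into separate arguments of the callback.
die = icepool.Die([(1, 2), (3, 4)])
print(die.sub(lambda a, b: a + b, star=True))  # distribution over 3 and 7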

More automatic algorithm selection for highest/lowest

Algorithm selection

  • If keep all -> sum
    • Not great for full roll enumeration (as opposed to sum) though; maybe fall back to pool algorithm in this case
    • Separate cases for identical or not
  • Elif keep one -> CDF/SF product (see the sketch after this list)
  • Elif dice are truncations of each other -> pool algorithm
  • Else -> reduce()-type algorithm
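The keep-one case relies on the fact that the highest of independent dice is at most x exactly when every die is, so the CDFs multiply (symmetrically, SFs multiply for the lowest). A minimal self-contained sketch, not Icepool API:

from fractions import Fraction

def highest_cdf(cdfs, x):
    """P(max of independent dice <= x) is the product of the individual CDFs."""
    result = Fraction(1)
    for cdf in cdfs:
        result *= cdf(x)
    return result

# Example: highest of two d6. P(max <= 3) = (3/6) * (3/6) = 1/4.
d6_cdf = lambda x: Fraction(min(max(x, 0), 6), 6)
print(highest_cdf([d6_cdf, d6_cdf], 3))  # 1/4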

Method naming

In this general area, we currently have as methods of Die:

  • lowest and highest
  • keep_lowest and keep_highest
  • keep
  • repeat_and_sum

Thoughts:

  • Make lowest and highest free functions only. E.g. d20.highest(10) is poor syntax compared with highest(d20, 10).
  • repeat_and_keep? repeat_and_keep_lowest? Too long?
    • Went with the below option.
  • In the other direction, reduce repeat_and_sum to just sum()? Would this be too confusing with pool.sum() and the built-in sum()? Force users to use @ instead?
    • Decided to go with this; the mandatory arg helps prevent any errors.
    • Or maybe sum() on die returning itself is not too bad?
  • Also die.keep_highest -> just die.highest?
    • Reconsidered this, e.g. Roll20 uses keep_highest and there's an arg asymmetry with the free function version.
  • Drop keep entirely and have users do die.pool(args).sum() instead?
    • I decided to go with this after all; the keep syntax and use cases don't seem to be a worthwhile middle ground.
  • Do we need both sum and keep?
    • No more public die.sum(); this is covered by @ and die.keep().
  • Work in the [] syntax somehow? Too unusual.

Other

  • Common algorithm for both lowest and highest.
  • "Parallelogram" reduce()-like algorithm by summing near end to reduce states.

varargs forwarding for `sub` etc.

Added for sub(). Haven't decided on reroll(), explode() etc.

Another question is whether to add kwargs. The potential issue is that future additions to the keyword argument set could consume previously forwarded argument names. We could take kwargs as a dict directly, though the syntax is less convenient.

Default pool size 1? Allow a die to act as a single-die pool?

Options:

No default num_dice

Error at pool creation time

This would mean that die.pool()[stuff] isn't allowed.

Error at pool use time

Late failure is not my favorite, but maybe this isn't too bad.

Default num_dice to 0

The current state.

Default num_dice to 1

Possibly more intuitive? May be more of a footgun?

Auto-convert dice to 1-die pools?

`namedtuple` support

Alternatively, a custom NamedVector class. We don't have the same motivation to special-case things here as we did for tuples, since there is no shorthand syntax for namedtuple.

  • Basic support
  • Marginals
  • Table headers
  • Allow creating "one-hot" representations from string outcomes
  • Binary operations

Named + unnamed could keep the names.

Different field orderings are counted as different?

  • star=2 to unpack by keyword. Restrictions on legal variable characters make this less appealing.

Decision: There doesn't seem to be a point in doing this; just using . is easier.

  • one_hot conversions.

  • What to do in map? Extra argument to request preservation of type?

The tricky thing here is that the named tuple type might be buried inside one_hot etc.

  • Extra argument to constructors? Or method to add field names after the fact?

  • Add tests
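A hypothetical sketch combining a few of these checklist items (the named marginal access is illustrative):

from collections import namedtuple
import icepool

Attack = namedtuple('Attack', ['hit', 'damage'])

# A die over namedtuple outcomes.
die = icepool.Die([Attack(hit=0, damage=0), Attack(hit=1, damage=3)])

# Hypothetical named marginal, as opposed to positional die.marginals[0]:
# die.marginals.hit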

Type hinting

Notes:

  • Short union syntax (X | Y) introduced in Python 3.10.
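For reference, the two spellings (function names illustrative):

from typing import Union

def f_old(x: Union[int, str]) -> None: ...  # works on older Python versions

# Requires Python 3.10+ at runtime, unless postponed annotation evaluation
# (from __future__ import annotations) is enabled:
# def f_new(x: int | str) -> None: ...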

Decide what to do with nested tuple outcomes

Problem

Currently, ndim only refers to the top level. If a dimension is extracted from a VectorDie then it is ambiguous what the new ndim should be.

Options

Nested tuples are an error

Too harsh.

Store full ndim hierarchy

May be overly complex for the benefit.

Default to ndim=None

Default to ndim=icepool.Scalar

The state at the time of writing.
