
pykov's Introduction

Welcome to Pykov docs

Pykov is a tiny Python module for finite regular Markov chains.

You can define a Markov chain from scratch or read it from a text file in a specific format. Pykov is versatile: it can manipulate the chain, inserting and removing nodes, and compute various kinds of quantities, such as the steady-state distribution, mean first passage times, random walks, absorbing times, and so on.

Pykov is licensed under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.


Installation

Pykov can be installed or upgraded via pip:

$ pip install git+git://github.com/riccardoscalco/Pykov@master #both Python2 and Python3
$ pip install --upgrade git+git://github.com/riccardoscalco/Pykov@master

Note that Pykov depends on numpy and scipy.


Getting started

Open your favourite Python shell and import pykov:

>>> import pykov
>>> pykov.__version__

Vector class

The Vector class inherits from Python's collections.OrderedDict, which means it has the same behaviour and features as Python ordered dictionaries, with a few exceptions. The states and the corresponding probabilities are the keys and the values of the dictionary, respectively.

collections.OrderedDict is used instead of the default dict because, from Python 3.3 onwards, dict iteration order is non-deterministic for security reasons (hash randomization; see this Stack Overflow question). Programs that need determinism (such as simulations) should pass an OrderedDict; where determinism is not an issue, a plain dict can be used.

Definition of a pykov.Vector():

>>> p = pykov.Vector()

You can get and set states in many ways:

>>> p['A'] = .2
>>> p
{'A': 0.2}

>>> # Non-deterministic example
>>> p = pykov.Vector({'A':.3, 'B':.7})
>>> p
{'A':0.3, 'B':0.7}

>>> # Deterministic example
>>> import collections
>>> data = collections.OrderedDict((('A', .3), ('B', .7)))
>>> p = pykov.Vector(data)
>>> p
{'A':0.3, 'B':0.7}

>>> pykov.Vector(A=.3, B=.6, C=.1)
{'A':0.3, 'B':0.6, 'C':0.1}

States not belonging to the vector have zero probability; moreover, states with zero probability are not shown:

>>> q = pykov.Vector(C=.4, B=.6)
>>> q['C']
0.4
>>> q['Z']
0.0
>>> 'Z' in q
False

>>> q['Z']=.2
>>> q
{'C': 0.4, 'B': 0.6, 'Z': 0.2}
>>> 'Z' in q
True
>>> q['Z']=0
>>> q
{'C': 0.4, 'B': 0.6}
>>> 'Z' in q
False

Vector operations

Sum

A pykov.Vector() instance can be added to or subtracted from another pykov.Vector() instance:

>>> p = pykov.Vector(A=.3, B=.7)
>>> q = pykov.Vector(C=.5, B=.5)
>>> p + q
{'A': 0.3, 'C': 0.5, 'B': 1.2}
>>> p - q
{'A': 0.3, 'C': -0.5, 'B': 0.2}
>>> q - p
{'A': -0.3, 'C': 0.5, 'B': -0.2}
Product

A pykov.Vector() instance can be multiplied by a scalar. The dot product with another pykov.Vector() or pykov.Matrix() instance is also supported.

>>> p = pykov.Vector(A=.3, B=.7)
>>> p * 3
{'A': 0.9, 'B': 2.1}
>>> 3 * p
{'A': 0.9, 'B': 2.1}
>>> q = pykov.Vector(C=.5, B=.5)
>>> p * q
0.35
>>> T = pykov.Matrix({('A','B'): .3, ('A','A'): .7, ('B','A'): 1.})
>>> p * T
{'A': 0.91, 'B': 0.09}
>>> T * p
{'A': 0.42, 'B': 0.3}

Vector methods

sum()

Sum the probability values.

>>> p = pykov.Vector(A=.3, B=.7)
>>> p.sum()
1.0
sort(reverse=False)

Return a list of (state, probability) tuples sorted according to the probability values.

>>> p = pykov.Vector({'A':.3, 'B':.1, 'C':.6})
>>> p.sort()
[('B', 0.1), ('A', 0.3), ('C', 0.6)]
>>> p.sort(reverse=True)
[('C', 0.6), ('A', 0.3), ('B', 0.1)]
choose(random_func=None)

Choose a state at random, according to its probability.

>>> p = pykov.Vector(A=.3, B=.7)
>>> p.choose()
'B'
>>> p.choose()
'B'
>>> p.choose()
'A'

Optionally, if you need to supply your own random number generator, you can pass a function that takes two inputs (the min and max) and Pykov will use that function.

>>> def FakeRandom(min, max): return 0.01
>>> p = pykov.Vector(A=.05, B=.4, C=.4, D=.15)
>>> p.choose(FakeRandom)
'A'        
normalize()

Normalize the pykov.Vector() so that, after normalization, the probabilities sum to 1.

>>> p = pykov.Vector({'A':3, 'B':1, 'C':6})
>>> p.sum()
10.0
>>> p.normalize()
>>> p
{'A': 0.3, 'C': 0.6, 'B': 0.1}
>>> p.sum()
1.0
copy()

Return a shallow copy.

>>> p = pykov.Vector(A=.3, B=.7)
>>> q = p.copy()
>>> p['C'] = 1.
>>> q
{'A': 0.3, 'B': 0.7}
entropy()

Return the Shannon entropy, defined as $H(p) = -\sum_i p_i \ln p_i$.

>>> p = pykov.Vector(A=.3, B=.7)
>>> p.entropy()
0.6108643020548935
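
As a quick check, $-(0.3 \ln 0.3 + 0.7 \ln 0.7) \approx 0.611$, which matches the value above.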

For further details, have a look at Khinchin A. I., Mathematical Foundations of Information Theory Dover, 1957.

dist(q)

Return the distance to another pykov.Vector, defined as $d(p,q) = \sum_i | p_i - q_i |$.

>>> p = pykov.Vector(A=.3, B=.7)
>>> q = pykov.Vector(C=.5, B=.5)
>>> p.dist(q)
1.0
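
Here $|0.3 - 0| + |0.7 - 0.5| + |0 - 0.5| = 0.3 + 0.2 + 0.5 = 1.0$.
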
relative_entropy(q)

Return the Kullback-Leibler distance, defined as $d(p,q) = \sum_i p_i \ln (p_i/q_i)$.

>>> p = pykov.Vector(A=.3, B=.7)
>>> q = pykov.Vector(A=.4, B=.6)
>>> p.relative_entropy(q) #d(p,q)
0.02160085414354654
>>> q.relative_entropy(p) #d(q,p)
0.022582421084357485

Note that the Kullback-Leibler distance is not symmetric.


Matrix class

The pykov.Matrix() class inherits from Python's collections.OrderedDict. As with a plain dict, the keys are tuples of states and the values are the matrix entries. Indices need not be integers; they can be strings, just like the states of a pykov.Vector().

Definition of pykov.Matrix():

>>> T = pykov.Matrix()

You can get and set items in many ways:

>>> T = pykov.Matrix()
>>> T[('A','B')] = .3
>>> T
{('A', 'B'): 0.3}
>>> T['A','A'] = .7
>>> T
{('A', 'B'): 0.3, ('A', 'A'): 0.7}

>>> # Non-deterministic example
>>> T = pykov.Matrix({('A','B'): .3, ('A','A'): .7, ('B','A'): 1.})
>>> T[('A','B')]
0.3
>>> T['A','B']
0.3

>>> # Deterministic example
>>> data = collections.OrderedDict((
                (('A','B'), .3), 
                (('A','A'), .7), 
                (('B','A'), 1.)))
>>> T = pykov.Matrix(data)
>>> T[('A','B')]
0.3
>>> T['A','B']
0.3

Items not belonging to the matrix have a value equal to zero; moreover, items with value zero are not shown:

>>> T = pykov.Matrix({('A','B'): .3, ('A','A'): .7, ('B','A'): 1.})
>>> T['B','B']
0.0

>>> T = pykov.Matrix({('A','B'): .3, ('A','A'): .7, ('B','A'): 1.})
>>> T
{('A', 'B'): 0.3, ('A', 'A'): 0.7, ('B', 'A'): 1.0}
>>> T['A','A'] = 0
>>> T
{('A', 'B'): 0.3, ('B', 'A'): 1.0}

Matrix operations

Sum

A pykov.Matrix() instance can be added to or subtracted from another pykov.Matrix() instance.

>>> T = pykov.Matrix({('A','B'): .3, ('A','A'): .7, ('B','A'): 1.})
>>> I = pykov.Matrix({('A','A'):1, ('B','B'):1})
>>> T + I
{('B', 'A'): 1.0, ('A', 'B'): 0.3, ('A', 'A'): 1.7, ('B', 'B'): 1.0}
>>> T - I
{('B', 'A'): 1.0, ('A', 'B'): 0.3, ('A', 'A'): -0.3, ('B', 'B'): -1}
Product

A pykov.Matrix() instance can be multiplied by a scalar. The dot product with a pykov.Vector() or another pykov.Matrix() instance is also supported.

>>> T = pykov.Matrix({('A','B'): .3, ('A','A'): .7, ('B','A'): 1.})
>>> T * 3
{('B', 'A'): 3.0, ('A', 'B'): 0.9, ('A', 'A'): 2.1}

>>> p = pykov.Vector(A=.3, B=.7)
>>> T * p
{'A': 0.42, 'B': 0.3}

>>> W = pykov.Matrix({('N', 'M'): 0.5, ('M', 'N'): 0.7,
                      ('M', 'M'): 0.3, ('O', 'N'): 0.5,
                      ('O', 'O'): 0.5, ('N', 'O'): 0.5})
>>> W * W
{('N', 'M'): 0.15, ('M', 'N'): 0.21, ('M', 'O'): 0.35,
 ('M', 'M'): 0.44, ('O', 'M'): 0.25, ('O', 'N'): 0.25,
 ('O', 'O'): 0.5, ('N', 'O'): 0.25, ('N', 'N'): 0.6}

Matrix methods

states()

Return the set of states.

>>> T = pykov.Matrix({('A','B'): .3, ('A','A'): .7, ('B','A'): 1.})
>>> T.states()
{'B', 'A'}

States without ingoing or outgoing transitions are removed from the set of states.

>>> T['A','C']=1
>>> T.states()
{'A', 'C', 'B'}
>>> T['A','C']=0
>>> T.states()
{'A', 'B'}
pred(key=None)

Return the predecessors of a state (if not indicated, of all states). In matrix notation, return the column of the indicated state.

>>> T = pykov.Matrix({('A','B'): .3, ('A','A'): .7, ('B','A'): 1.})
>>> T.pred()
{'A': {'A': 0.7, 'B': 1.0}, 'B': {'A': 0.3}}
>>> T.pred('A')
{'A': 0.7, 'B': 1.0}
succ(key=None)

Return the successors of a state (if not indicated, of all states). In matrix notation, return the row of the indicated state.

>>> T = pykov.Matrix({('A','B'): .3, ('A','A'): .7, ('B','A'): 1.})
>>> T.succ()
{'A': {'A': 0.7, 'B': 0.3}, 'B': {'A': 1.0}}
>>> T.succ('A')
{'A': 0.7, 'B': 0.3}
copy()

Return a shallow copy.

>>> T = pykov.Matrix({('A','B'): .3, ('A','A'): .7, ('B','A'): 1.})
>>> W = T.copy()
>>> T[('B','B')] = 1.
>>> W
{('B', 'A'): 1.0, ('A', 'B'): 0.3, ('A', 'A'): 0.7}
remove(states)

Return a shallow copy of the matrix without the indicated states. All the links in which those states appear are deleted, so in general the result will not be a stochastic matrix.

>>> T = pykov.Matrix({('A','B'): .3, ('A','A'): .7, ('B','A'): 1.})
>>> T.remove(['B'])
{('A', 'A'): 0.7}
>>> T = pykov.Matrix({('A','B'): .3, ('A','A'): .7, ('B','A'): 1.,
                     ('C','D'): .5, ('D','C'): 1., ('C','B'): .5})
>>> T.remove(['A','B'])
{('C', 'D'): 0.5, ('D', 'C'): 1.0}
stochastic()

Transform the pykov.Matrix() instance into a right stochastic matrix: the sum of every row is set equal to one; a PykovError is raised if this is not possible.

>>> T = pykov.Matrix({('A','B'): 3, ('A','A'): 7, ('B','A'): .2})
>>> T.stochastic()
>>> T
{('B', 'A'): 1.0, ('A', 'B'): 0.3, ('A', 'A'): 0.7}

>>> T[('A','C')]=1
>>> T.stochastic()
pykov.PykovError: 'Zero links from node C'
transpose()

Return the transpose of the pykov.Matrix() instance.

>>> T = pykov.Matrix({('A','B'): .3, ('A','A'): .7, ('B','A'): 1.})
>>> T.transpose()
{('B', 'A'): 0.3, ('A', 'B'): 1.0, ('A', 'A'): 0.7}
>>> T
{('A', 'B'): 0.3, ('A', 'A'): 0.7, ('B', 'A'): 1.0}
eye()

Return the Identity matrix.

>>> T = pykov.Matrix({('A','B'): .3, ('A','A'): .7, ('B','A'): 1.})
>>> T.eye()
{('A', 'A'): 1.0, ('B', 'B'): 1.0}
>>> type(T.eye())
<class 'pykov.Matrix'>
ones()

Return a pykov.Vector() instance with entries equal to 1.

>>> T = pykov.Matrix({('A','B'): .3, ('A','A'): .7, ('B','A'): 1.})
>>> T.ones()
{'A': 1.0, 'B': 1.0}
>>> type(T.ones())
<class 'pykov.Vector'>
trace()

Return the matrix trace.

>>> T = pykov.Matrix({('A','B'): .3, ('A','A'): .7, ('B','A'): 1.})
>>> T.trace()
0.7

Chain class

The pykov.Chain class inherits from the pykov.Matrix class. Each OrderedDict key is a tuple of states and each value is the transition probability of going from the first state to the second; in other words, Pykov describes the transitions of a Markov chain with a right stochastic matrix.
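
A quick sanity check, using the Matrix method succ() and the Vector method sum() documented above: the outgoing probabilities of every state (i.e. every row of the matrix) sum to one.

>>> T = pykov.Chain({('A','B'): .3, ('A','A'): .7, ('B','A'): 1.})
>>> T.succ('A').sum()
1.0
>>> T.succ('B').sum()
1.0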

Chain methods

adjacency()

Return the adjacency matrix.

>>> T = pykov.Chain({('A','B'): .3, ('A','A'): .7, ('B','A'): 1.})
>>> T.adjacency()
{('B', 'A'): 1, ('A', 'B'): 1, ('A', 'A'): 1}
>>> type(T.adjacency())
<class 'pykov.Matrix'>
pow(p, n)

Find the probability distribution after n steps, starting from an initial pykov.Vector() p.

>>> T = pykov.Chain({('A','B'): .3, ('A','A'): .7, ('B','A'): 1.})
>>> p = pykov.Vector(A=1)
>>> T.pow(p,3)
{'A': 0.7629999999999999, 'B': 0.23699999999999996}
>>> p * T * T * T #not efficient
{'A': 0.7629999999999999, 'B': 0.23699999999999996}
move(state)

Do one step from the indicated state to one of its successors, chosen at random according to the transition probability. Return the new state.

>>> T = pykov.Chain({('A','B'): .3, ('A','A'): .7, ('B','A'): 1.})
>>> T.move('A')
'B'

Optionally, if you need to supply your own random number generator, you can pass a function that takes two inputs (the min and max) and Pykov will use that function.

>>> def FakeRandom(min, max): return 0.01
>>> T.move('A', FakeRandom)
'B'        
walk(steps, start=None, stop=None)

Return a random walk of the given number of steps, starting and stopping at the indicated states. If not indicated, the starting state is chosen according to the steady-state probability. If the stopping state is reached before all steps are taken, the walk stops there.

>>> T = pykov.Chain({('A','B'): .3, ('A','A'): .7, ('B','A'): 1.})
>>> T.walk(10)
['B', 'A', 'B', 'A', 'A', 'B', 'A', 'A', 'A', 'B', 'A']
>>> T.walk(10,'B','B')
['B', 'A', 'A', 'A', 'A', 'A', 'B']
walk_probability(walk)

Return the logarithm of the walk probability (see walk() method). Impossible walks have probability equal to zero.

>>> T = pykov.Chain({('A','B'): .3, ('A','A'): .7, ('B','A'): 1.})
>>> T.walk_probability(['A','A','B','A','A'])
-1.917322692203401
>>> import math
>>> probability = math.exp(-1.917322692203401)
>>> probability
0.147
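
Indeed, the walk ['A','A','B','A','A'] has probability $0.7 \times 0.3 \times 1.0 \times 0.7 = 0.147$.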

>>> p = T.walk_probability(['A','B','B','B','A'])
>>> math.exp(p)
0.0
steady()

Return the steady state, i.e. the equilibrium distribution of the chain.

>>> T = pykov.Chain({('A','B'): .3, ('A','A'): .7, ('B','A'): 1.})
>>> T.steady()
{'A': 0.7692307692307676, 'B': 0.23076923076923028}
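
As a hand check for this chain: $\pi = \pi T$ gives $\pi_B = 0.3\,\pi_A$, and together with $\pi_A + \pi_B = 1$ this yields $\pi_A = 10/13 \approx 0.769$ and $\pi_B = 3/13 \approx 0.231$, in agreement with the output above.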

Since Pykov describes the chain with a right stochastic matrix, the steady state $\pi$ satisfies the condition $\pi = \pi T$ and is calculated with the inverse iteration method $Q^t x = e$, where $Q = I - T$ and $e = (0,0,\dots,1)$. Moreover, the Markov chain is assumed to be ergodic, i.e. the transition matrix must be irreducible and aperiodic. You can easily test these properties by means of NetworkX; let's see how:

>>> import networkx as nx

>>> T = pykov.Chain({('A','B'): .3, ('A','A'): .7, ('B','A'): 1.})
>>> G = nx.DiGraph(list(T.keys()))
>>> nx.is_strongly_connected(G) # is irreducible
True
>>> nx.is_aperiodic(G)
True

>>> T = pykov.Chain({('A','B'): .3, ('A','A'): .7, ('B','B'): 1.})
>>> G = nx.DiGraph(list(T.keys()))
>>> nx.is_strongly_connected(G) # is irreducible
False
>>> nx.is_aperiodic(G)
True

>>> T = pykov.Chain({('A','B'): 1, ('B','C'): 1., ('C','A'): 1.})
>>> G = nx.DiGraph(list(T.keys()))
>>> nx.is_strongly_connected(G)
True
>>> nx.is_aperiodic(G)
False

Often, Markov chains created from raw data are not irreducible. In such cases, the Markov chain may be defined by means of the largest strongly connected component of the associated graph. Strongly connected components can be found with NetworkX:

>>> nx.strongly_connected_components(G)
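
For example, one possible way to restrict a chain to its largest strongly connected component is sketched below. The calls to remove() and stochastic() follow the Matrix methods documented above; rebuilding a Chain from the reduced matrix with pykov.Chain(W) is an assumption of this sketch, not something shown elsewhere in these docs.

>>> T = pykov.Chain({('A','B'): .3, ('A','A'): .7, ('B','B'): 1.})
>>> G = nx.DiGraph(list(T.keys()))
>>> largest = max(nx.strongly_connected_components(G), key=len)
>>> W = T.remove(T.states() - largest)  # drop the states outside the component
>>> W.stochastic()                      # renormalize the remaining rows
>>> P = pykov.Chain(W)                  # assumed: rebuild a Chain from the reduced Matrix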

For further details on the inverse iteration method, have a look at W. Stewart, Introduction to the Numerical Solution of Markov Chains, Princeton University Press, Chichester, West Sussex, 1994.

mixing_time(cutoff=0.25, jump=1, p=None)

Return the mixing time, defined as the smallest number of steps $n$ such that $|pT^n - \pi| < \text{cutoff}$ (0.25 by default), where $\pi$ is the steady state, i.e. $\pi = \pi T$.

If the initial distribution p is not indicated, the iteration starts from the least probable state of the steady distribution. The parameter jump controls the iteration step; for example, with jump=2, n takes the values 2, 4, 6, 8, ...

>>> d = {('R','R'):1./2, ('R','N'):1./4, ('R','S'):1./4,
         ('N','R'):1./2, ('N','N'):0., ('N','S'):1./2,
         ('S','R'):1./4, ('S','N'):1./4, ('S','S'):1./2}
>>> T = pykov.Chain(d)
>>> T.mixing_time()
2
entropy(p=None, norm=False)

Return the Chain entropy, defined as $H = \sum_i \pi_i H_i$, where $H_i = -\sum_j T_{ij} \ln T_{ij}$. If p is not None, the entropy is calculated with the indicated probability pykov.Vector().

>>> T = pykov.Chain({('A','B'): .3, ('A','A'): .7, ('B','A'): 1.})
>>> T.entropy()
0.46989561696530169
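
As a hand check: $H_A = -(0.7 \ln 0.7 + 0.3 \ln 0.3) \approx 0.611$, $H_B = 0$, and with $\pi_A = 10/13$ one gets $H \approx 0.769 \times 0.611 \approx 0.470$, matching the value above.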

With norm=True, the entropy is normalized to lie in [0,1].

>>> T.entropy(norm=True)
0.33895603665233132

For further details, have a look at Khinchin A. I., Mathematical Foundations of Information Theory, Dover, 1957.

mfpt_to(state)

Return the Mean First Passage Times of every state to the indicated state.

>>> d = {('R', 'N'): 0.25, ('R', 'S'): 0.25, ('S', 'R'): 0.25,
         ('R', 'R'): 0.5, ('N', 'S'): 0.5, ('S', 'S'): 0.5,
         ('S', 'N'): 0.25, ('N', 'R'): 0.5, ('N', 'N'): 0.0}
>>> T = pykov.Chain(d)
>>> T.mfpt_to('R')
{'S': 3.333333333333333, 'N': 2.666666666666667} #mfpt from 'S' to 'R' is 3.33

See also Kemeny J. G. and Snell, J. L., Finite Markov Chains. Springer-Verlag: New York, 1976.

absorbing_time(transient_set)

Return the pykov.Vector() tau, where tau[i] is the mean number of steps needed to leave the transient set starting from state i. The parameter transient_set is a subset of the states (any iterable).

>>> d = {('R','R'):1./2, ('R','N'):1./4, ('R','S'):1./4,
         ('N','R'):1./2, ('N','N'):0., ('N','S'):1./2,
         ('S','R'):1./4, ('S','N'):1./4, ('S','S'):1./2}
>>> T = pykov.Chain(d)
>>> p = pykov.Vector({'N':.3, 'S':.7})
>>> tau = T.absorbing_time(p.keys())
>>> tau
{'S': 3.333333333333333, 'N': 2.6666666666666665}
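
These values solve $\tau_N = 1 + \tfrac{1}{2}\tau_S$ and $\tau_S = 1 + \tfrac{1}{4}\tau_N + \tfrac{1}{2}\tau_S$, i.e. $\tau_N = 8/3$ and $\tau_S = 10/3$.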

In other words, the mean number of steps needed to leave states 'S' and 'N' starting from 'S' is 3.33. To weight the mean times according to an initial distribution p, it is sufficient to calculate p * tau.

>>> p * tau
3.1333333333333329

In order to better understand the meaning of the method, the calculation of the previous example can be approximated by means of many random walkers:

>>> import numpy
>>> numpy.mean([len(T.walk(10000000,"S","R"))-1 for i in range(1000000)])
3.3326020000000001
>>> numpy.mean([len(T.walk(10000000,"N","R"))-1 for i in range(1000000)])
2.6665549999999998
absorbing_tour(p, transient_set=None)

Return a pykov.Vector() v, where v[i] is the mean time the process is in the transient state i before leaving the transient set.

Note that v.sum() is equal to p * tau (see absorbing_time() method). If not specified, the transient set is defined as the set of states in vector p.

>>> d = {('R','R'):1./2, ('R','N'):1./4, ('R','S'):1./4,
         ('N','R'):1./2, ('N','N'):0., ('N','S'):1./2,
         ('S','R'):1./4, ('S','N'):1./4, ('S','S'):1./2}
>>> T = pykov.Chain(d)
>>> p = pykov.Vector({'N':.3, 'S':.7})
>>> T.absorbing_tour(p)
{'S': 2.2666666666666666, 'N': 0.8666666666666669}
fundamental_matrix()

Return the fundamental matrix; for further details, have a look at Kemeny J. G. and Snell J. L., Finite Markov Chains, Springer-Verlag: New York, 1976.

>>> T = pykov.Chain({('A','B'): .3, ('A','A'): .7, ('B','A'): 1.})
>>> T.fundamental_matrix()
{('B', 'A'): 0.17751479289940991, ('A', 'B'): 0.053254437869822958,
('A', 'A'): 0.94674556213017902, ('B', 'B'): 0.82248520710059214}

Note that the fundamental matrix is not sparse.

kemeny_constant()

Return the Kemeny constant of the transition matrix.

>>> T = pykov.Chain({('A','B'): .3, ('A','A'): .7, ('B','A'): 1.})
>>> T.kemeny_constant()
1.7692307692307712

Utilities

Pykov comes with a utility for creating a pykov.Chain() from a text file, say /mypath/mat, containing the transition matrix in the following format:

A A .7
A B .3
B A 1

The pykov.Chain() instance is created with the command:

>>> P = pykov.readmat('/mypath/mat')
>>> P
{('B', 'A'): 1.0, ('A', 'B'): 0.3, ('A', 'A'): 0.7}

Docs written with StackEdit.

pykov's People

Contributors

adamilardi, adelq, awilsoncs, crazypython, ivuk, mrvaldez, naereen, rdbliss, riccardoscalco, snahor, spookylukey


pykov's Issues

Make a documentation on ReadTheDoc

Hi,

First, thanks and 👍 for this small and cool project.
It's neat and well written, I love it!

Second, may I suggest that you write a Sphinx documentation and host it on ReadTheDocs please?
Based on the Markdown documentation that is already available in the README.md file, it could be very easy.

GitHub can be connected to ReadTheDocs to build the site automatically, for instance http://pykov.readthedocs.org/

Thanks in advance; I can't do it for you, but if you need any help, I can assist (I did a few docs on ReadTheDocs already, see this example or this one).

Couldn't import pykov

I installed "pykov" using pip without error.
Below is the outcome when I tried to import "pykov" in interactive Python:
Any idea? (thanks)


Python 2.7.3 (default, Feb 27 2014, 20:00:17)
[GCC 4.6.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.

import pykov
Traceback (most recent call last):
File "", line 1, in
File "/usr/local/lib/python2.7/dist-packages/pykov.py", line 34, in
import six
ImportError: No module named six

TypeError in Chain.communication_classes()

When running Chain.communication_classes(), a TypeError is thrown: TypeError: unhashable type: 'set'.

The issue seems to be having nested sets, since the interior sets are mutable. One fix would be to use frozenset() for the nested sets. Alternatively, the return could be changed to a list of sets, nested lists, or something similar.

Exact error:

In [1]: run pykov.py

In [2]: Chain({(0, 1): 1, (1, 0): 1}).communication_classes()
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-2-cc5c171ee32c> in <module>()
----> 1 Chain({(0, 1): 1, (1, 0): 1}).communication_classes()

/home/rwb/python/Pykov/pykov.py in communication_classes(self)
   1347                 if row[0, el] == 1:
   1348                     comm_class.add(pos2el[el])
-> 1349             res.add(comm_class)
   1350         return res
   1351 

TypeError: unhashable type: 'set'

Steady function

According to the following matrix, implemented in a file fn and loaded using the readmat function, the steady function randomly returns a list of zeros. I use Python 3.7.

R R .5
R N .25
R S .25
N R .5
N N 0
N S .5
S R .25
S N .25
S S .5

Markov matrix

P = pykov.readmat(fn)
Pi = P.steady()
assert(Pi.sum()!=0)

Thanks in advance.

Error when an item in pykov.Chain is None

Hey,
First of all, this is my first time using GitHub and I'm also very new to Python, so be gentle with me..
I have a pykov.Chain where one of the items is None, and when I try to use the function chain.move(None) I get the following error:
AttributeError: 'OrderedDict' object has no attribute 'choose'
To fix it, I made the following changes in the succ function (I only use positive numbers in my project):
from:

def succ(self, key=None):
        try:
            if key is not None:
                return self._succ[key]
            else:
                return self._succ
        except AttributeError:
            self._succ = OrderedDict([(state, Vector()) for state in self.states()])
            for link, probability in six.iteritems(self):
                self._succ[link[0]][link[1]] = probability
            if key is not None:
                return self._succ[key]
            else:
                return self._succ

to:

def succ(self, key=-1): 
        try:
            if key is not -1:
                return self._succ[key]
            else:
                return self._succ
        except AttributeError:
            self._succ = OrderedDict([(state, Vector()) for state in self.states()])
            for link, probability in six.iteritems(self):
                self._succ[link[0]][link[1]] = probability
            if key is not -1:
                return self._succ[key]
            else:
                return self._succ

My solution works for me, but is there a better way?

Last letter in the chain gets a transition probability with itself of 1

import pykov
import math
init_vector, matrix = pykov.maximum_likelihood_probabilities('99a')
math.exp(matrix.walk_probability('99'))
Out[26]: 0.5
math.exp(matrix.walk_probability('ee'))
Out[27]: 0.0
math.exp(matrix.walk_probability('aa'))
Out[28]: 1.0
matrix
Out[30]: {('9', '9'): 0.5, ('9', 'a'): 0.5, ('a', 'a'): 1.0}

Do you know why you get this extra transition a->a=1 ?

Git commit:
9dc9181

Choose function throws error

File "/Library/Python/2.7/site-packages/pykov.py", line 1087, in walk
result.append(self.move(result[-1]))
File "/Library/Python/2.7/site-packages/pykov.py", line 925, in move
return self.succ(state).choose(random_func)
File "/Library/Python/2.7/site-packages/pykov.py", line 314, in choose
return state
UnboundLocalError: local variable 'state' referenced before assignment

steady method of pykov.Chain doesn't work

I tried the example of the tutorial:

T = pykov.Chain({('A','B'): .3, ('A','A'): .7, ('B','A'): 1.})
T.steady()
here below the error:
In [9]: T.steady()


TypeError Traceback (most recent call last)
/home/hakim/myexperiments/tastp/ in ()
----> 1 T.steady()

/usr/local/lib/python2.7/dist-packages/pykov.pyc in steady(self)
938 m = len(e2p)
939 P = self.dok(e2p).tocsr()
--> 940 Q = ss.eye(m, format='csr') - P
941 e = numpy.zeros(m)
942 e[-1] = 1.

TypeError: eye() takes at least 2 arguments (2 given)

TypeError: eye() takes at least 2 arguments (2 given)

Traceback (most recent call last):
File "organelle.py", line 1325, in
composer()
File "organelle.py", line 1044, in composer
phrase_lengths.append(P.kemeny_constant())
File "/usr/local/lib/python2.7/dist-packages/pykov.py", line 1270, in kemeny_constant
Z = self.fundamental_matrix()
File "/usr/local/lib/python2.7/dist-packages/pykov.py", line 1246, in fundamental_matrix
p = self.steady()._toarray(el2pos)
File "/usr/local/lib/python2.7/dist-packages/pykov.py", line 971, in steady
Q = ss.eye(m, format='csr') - P
TypeError: eye() takes at least 2 arguments (2 given)

What should I check first?

Steady() gives incorrect results

Hi there. I'm pretty impressed by this library and tried to integrate it to my research code. However I got different results from what I already had and checked the same chains in mathematica to compare results. Results from mathematica agree with mine and disagree with pykov's.

I've seen the code for 'steady' and it didn't look clear at all to me and I don't have access to the books you mentioned.

This difference in results is probably a bug in the library implementation. It would be great if a test suite were added to the library, with typical and edge cases for different operations on Markov chains.

Dictionary changes size

I've seen this here and there in my Organ Donor code.

2_track_contents.txt will become a markov object and a new music object
The Kemeny constant of the track contents transition table is 36.8458389735
A phrase of kemeny length from this transition table is: ['rest', '76', 'rest', '86', 'rest', '52', '76', 'rest', '86', 'rest', '88', 'rest', '52', 'rest', '59', '52', 'rest', '79', 'rest', '81', 'rest', '79', 'rest', '92', 'rest', '59', '61', 'rest', '76', 'rest', '67', 'rest', '57', 'rest', '81', 'rest', '52']
2_track_notes.txt will become a markov object and a new music object
2_rest_durations.txt will become a markov object and a new music object
if there aren't any rests to give a transition table, don't try to write one.
2_note_durations.txt will become a markov object and a new music object
/Library/Python/2.7/site-packages/scipy-0.14.0.dev_7cefb25-py2.7-macosx-10.9-intel.egg/scipy/sparse/compressed.py:686: SparseEfficiencyWarning: Changing the sparsity structure of a csc_matrix is expensive. lil_matrix is more efficient.
SparseEfficiencyWarning)
The Kemeny constant of the note durations transition table is
Traceback (most recent call last):
File "organelle.py", line 1445, in
composer()
File "organelle.py", line 1126, in composer
print "The Kemeny constant of the note durations transition table is", R.kemeny_constant()
File "/Library/Python/2.7/site-packages/pykov.py", line 1402, in kemeny_constant
Z = self.fundamental_matrix()
File "/Library/Python/2.7/site-packages/pykov.py", line 1379, in fundamental_matrix
p = self.steady()._toarray(el2pos)
File "/Library/Python/2.7/site-packages/pykov.py", line 1093, in steady
res.normalize()
File "/Library/Python/2.7/site-packages/pykov.py", line 269, in normalize
for k in self.iterkeys():
RuntimeError: dictionary changed size during iteration

What should I check first?

Pypi

Just wondering, why not upload to pypi? It's really simple and makes installing as a dependency slightly easier.

Empty strings as keys causing exceptions

When you add an empty string as a key to the chain, e.g.:
P[('','Bb')]=.3
and then call
P.move('')

pykov will throw an exception:

Traceback (most recent call last):
  File "pykoy_tests.py", line 102, in <module>
    print P.move('')
  File "pykov.py", line 907, in move
    return self.succ(state).choose() 
AttributeError: 'dict' object has no attribute 'choose'

This is caused by the if statements in def succ(), such as: https://github.com/riccardoscalco/Pykov/blob/master/pykov.py#L643

where it should be comparing against None.

The use case I found this in is generating chains from English sentences, where an empty string can indicate the beginning or end of a chain or 'sentence'. Now I could be wrong in my usage, but I thought I would bring it to your attention.

Thanks

Proposing a PR to fix a few small typos

Issue Type

[x] Bug (Typo)

Steps to Replicate and Expected Behaviour

  • Examine README.md and observe substracted, however expect to see subtracted.
  • Examine README.md, pykov.py and observe precedessors, however expect to see predecessors.
  • Examine pykov.py and observe coloum, however expect to see column.

Notes

Semi-automated issue generated by
https://github.com/timgates42/meticulous/blob/master/docs/NOTE.md

To avoid wasting CI processing resources a branch with the fix has been
prepared but a pull request has not yet been created. A pull request fixing
the issue can be prepared from the link below, feel free to create it or
request @timgates42 create the PR. Alternatively if the fix is undesired please
close the issue with a small comment about the reasoning.

https://github.com/timgates42/Pykov/pull/new/bugfix_typos

Thanks.

Chain.walk(start=0) chooses a random start state

Calling Chain.walk() with the parameter start=0 chooses a random start state from the chain.
MWE:

chain = pykov.Chain({(0, 1): 1, (1, 0): 1})
chain.walk(0, 0) # Repeat a few times to get a [1].

The relevant line is here:

if not start:
    start = self.steady().choose()

This could be fixed by checking for start == None and stop == None, I think. If 0 is excluded by design, can a note be added to the documentation?
