jahmm's Issues

product of probabilities as opposed to sum of logs

Hi, 
First, I have to say that this package is really great and very well
written. However, I have one main issue with it that made me stop using it:

The algorithms use raw probabilities [0-1] as opposed to log probabilities,
so a "product" is used instead of a "sum of logs". When I use this package with
a lot of features and a lot of hidden states, the algorithms crash: I get
NaN values because extremely small numbers are divided by extremely small
numbers, and sometimes 0 is divided by 0.

The package works well if you have a small data set with few states.

It would be very much appreciated and very helpful if this package were
converted to log probabilities everywhere (A matrix, pdf estimation,
decoding algorithms, etc.).

One more thing: will this package support multivariate Gaussian mixtures?

Thanks so much!
F.


Original issue reported on code.google.com by [email protected] on 19 Mar 2009 at 9:41
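As a purely illustrative note on the request above: the usual remedy is to run the recurrences in log space with a log-sum-exp reduction. The sketch below is independent of jahmm's API; logPi, logA, logB and logSumExp are made-up names for this example, not library symbols.

    // Minimal log-space forward recurrence, assuming logPi[i] = log pi_i,
    // logA[i][j] = log a_ij and logB[i][t] = log b_i(o_t). Illustration only;
    // these arrays and names are not part of jahmm.
    static double logSumExp(double[] xs) {
        double max = Double.NEGATIVE_INFINITY;
        for (double x : xs) max = Math.max(max, x);
        if (max == Double.NEGATIVE_INFINITY) return max;   // all terms are log(0)
        double sum = 0.0;
        for (double x : xs) sum += Math.exp(x - max);
        return max + Math.log(sum);
    }

    static double logForward(double[] logPi, double[][] logA, double[][] logB, int T) {
        int n = logPi.length;
        double[] logAlpha = new double[n];
        for (int i = 0; i < n; i++) logAlpha[i] = logPi[i] + logB[i][0];
        for (int t = 1; t < T; t++) {
            double[] next = new double[n];
            for (int j = 0; j < n; j++) {
                double[] terms = new double[n];
                for (int i = 0; i < n; i++) terms[i] = logAlpha[i] + logA[i][j];
                next[j] = logSumExp(terms) + logB[j][t];
            }
            logAlpha = next;
        }
        return logSumExp(logAlpha);   // log P(O | model); no underflow for long sequences
    }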

Feature Request: HMM Inference

What steps will reproduce the problem?
1.
2.
3.

What is the expected output? What do you see instead?
I would like to see the forward and backward algorithms exposed in the
CLI for HMM inference, including determining the forward, backward and posterior
probabilities of states given a sequence of symbols, like the
backward/forward/posterior functions in R's HMM package (see link below).

What version of the product are you using? On what operating system?
0.6.1; Windows 7


Please provide any additional information below.
http://cran.r-project.org/web/packages/HMM/HMM.pdf

Original issue reported on code.google.com by [email protected] on 18 Jul 2011 at 6:06
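For readers landing here before such a CLI action exists, the quantities the reporter names follow directly from the standard recurrences. The sketch below uses plain arrays rather than jahmm's calculator classes; pi, a and b are assumed initial, transition and emission probabilities supplied by the caller.

    // gamma[t][i] = P(state i at time t | observations), computed from the
    // forward (alpha) and backward (beta) passes. Illustration only.
    static double[][] posteriors(double[] pi, double[][] a, double[][] b, int T) {
        int n = pi.length;
        double[][] alpha = new double[T][n];
        double[][] beta = new double[T][n];
        for (int i = 0; i < n; i++) alpha[0][i] = pi[i] * b[i][0];
        for (int t = 1; t < T; t++)
            for (int j = 0; j < n; j++)
                for (int i = 0; i < n; i++)
                    alpha[t][j] += alpha[t - 1][i] * a[i][j] * b[j][t];
        for (int i = 0; i < n; i++) beta[T - 1][i] = 1.0;
        for (int t = T - 2; t >= 0; t--)
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++)
                    beta[t][i] += a[i][j] * b[j][t + 1] * beta[t + 1][j];
        double pO = 0.0;                                   // P(O | model)
        for (int i = 0; i < n; i++) pO += alpha[T - 1][i];
        double[][] gamma = new double[T][n];
        for (int t = 0; t < T; t++)
            for (int i = 0; i < n; i++)
                gamma[t][i] = alpha[t][i] * beta[t][i] / pO;
        return gamma;
    }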

How to convert type ObservationDiscrete<enum> to ObservationInteger

I have no idea how to convert the ObservationDiscrete<enum> type to the
ObservationInteger type.

I want to use the package like this:
1. build an HMM
2. generate 10000 random observation sequences
3. get an initial HMM using the K-Means algorithm
4. run the Baum-Welch algorithm ...

KMeansLearner imports ObservationInteger but not ObservationDiscrete<enum>. If
I want to use KMeansLearner, I have to convert between these two types.

Thanks for your help!

smallycat

Original issue reported on code.google.com by [email protected] on 15 Apr 2009 at 12:55
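A minimal conversion sketch, under the assumption that ObservationDiscrete exposes its enum value through a public value field (adjust if the accessor differs in your version); Enum.ordinal() then serves as the integer symbol.

    import java.util.ArrayList;
    import java.util.List;
    import be.ac.ulg.montefiore.run.jahmm.ObservationDiscrete;
    import be.ac.ulg.montefiore.run.jahmm.ObservationInteger;

    // Sketch only: map a sequence of enum-based observations to ObservationInteger,
    // using each enum constant's ordinal as the symbol index.
    static <E extends Enum<E>> List<ObservationInteger>
            toIntegers(List<ObservationDiscrete<E>> seq) {
        List<ObservationInteger> out = new ArrayList<ObservationInteger>();
        for (ObservationDiscrete<E> o : seq)
            out.add(new ObservationInteger(o.value.ordinal()));   // assumes a public 'value' field
        return out;
    }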

Cli.java won't compile

What steps will reproduce the problem?
1. go into cli directory
2. compile Cli.java with javac Cli.java

What is the expected output? What do you see instead?

A Cli.class file is expected, but I get the following:

Cli.java:27: package be.ac.ulg.montefiore.run.jahmm.io does not exist
import be.ac.ulg.montefiore.run.jahmm.io.FileFormatException;
                                        ^
Cli.java:54: cannot find symbol
symbol  : class AbnormalTerminationException

+ 9 more errors


What version of the product are you using? On what operating system?

I am using jahmm-0.6.1 on Ubuntu 10.04

Please help with this!


Original issue reported on code.google.com by [email protected] on 29 Mar 2012 at 2:08

error reading values like "5.07856133975082E-4" from files

What steps will reproduce the problem?
1. generate (vector) sequences
2. write sequences to file
3. read sequences from file

What is the expected output? What do you see instead?
Reading a vector sequence works fine in general, but when there is a
vector like "[ 5.07856133975082E-4 2.221522656000841 ];" to read, the
system tells you "Line 49: Number or ']' expected", since the parsing
function only checks for plain numbers ("E" and "-" are not digits :) ).

What version of the product are you using? On what operating system?
ObservationVectorReader.java - v.0.6.1

Please provide any additional information below.
"new MarkovGenerator<ObservationVector>(hmm).observationSequence(length)"
sometimes returns a value like "5.07856133975082E-4" which can't be read
from a file later, so the MarkovGenerator should limit the length of the
numbers or (better): reading numbers in "E-x" notation should be fixed...


kind regards,
Ben

Original issue reported on code.google.com by [email protected] on 15 May 2010 at 6:46

Problem with counters on estimateGamma

What steps will reproduce the problem?
1. gamma[xi.length-1][j] and gamma[xi.length][j] are updated with the same
values
2. Check file BaumWelchLearner.java
3.

What is the expected output? What do you see instead?

I expected something like

for (int t = 0; t < xi.length; t++)
    for (int i = 0; i < xi[0].length; i++)
        for (int j = 0; j < xi[0].length; j++)
            gamma[t+1][i] += xi[t][i][j];

and the next loop omitted

What version of the product are you using? On what operating system?

0.6.1 on java 1.6.b16

Please provide any additional information below.

Please consult Rabiner's book

Original issue reported on code.google.com by [email protected] on 15 Sep 2009 at 10:42

Problem with the iteration function in the BaumWelch class - please help, I am behind schedule

What steps will reproduce the problem?
1. When I give the function observations, if the current state produces
probability zero for one observation, it produces this value for all
observation probabilities of that state.

What is the expected output? What do you see instead?
It should give the same observation probabilities for the current state as
they were before.

What version of the product are you using? On what operating system?
The version is 0.6.1; operating system Fedora 15.

Please provide any additional information below.
I attached the txt file of the character model and the output after learning.
I am working on an Arabic handwriting project; please help me as fast as you
can, because it is my graduation project and I am behind schedule.

Original issue reported on code.google.com by [email protected] on 31 May 2011 at 10:01

Attachments:

HmmReader class cannot read values in exponential form

What steps will reproduce the problem?
The HmmReader class cannot read observation symbol probabilities formatted in
exponential form from the HMM file.

Here is the HMM file:
--------------------------------------------
Hmm v1.0

NbStates 4

State
Pi 0.25
A 0.25 0.25 0.25 0.25 
IntegerOPDF [0.5775083500334324 0.42158984635940894 9.018036072144319E-4 0.0 ]


State
Pi 0.25
A 0.25 0.25 0.25 0.25 
IntegerOPDF [0.5775083500334324 0.42158984635940894 9.018036072144319E-4 0.0 ]


State
Pi 0.25
A 0.25 0.25 0.25 0.25 
IntegerOPDF [0.5775083500334324 0.42158984635940894 9.018036072144319E-4 0.0 ]


State
Pi 0.25
A 0.25 0.25 0.25 0.25 
IntegerOPDF [0.5775083500334324 0.42158984635940894 9.018036072144319E-4 0.0 ]
--------------------------------------------

What is the expected output? What do you see instead?
I would like HmmReader to support reading values formatted in exponential
form.

What version of the product are you using? On what operating system?
jahmm-0.6.2.jar MacOS X 10.5.8

Please provide any additional information below.
I implemented a function for reading values formatted in exponential form, but
I think there are some bugs in my implementation. I fixed the
read(StreamTokenizer st, int length) method in the OpdfReader class. I'm
looking forward to support for the exponential form in the official
implementation.

    /**
     * Reads a sequence of numbers.  The sequence is between brackets
     * and numbers are separated by spaces.  Empty arrays are not allowed.
     *
     * @param st The tokenizer to read the sequence from.
     * @param length The expected length of the sequence or a strictly negative
     *        number if it must not be checked.
     * @return The array read.
     */
    static protected double[] read(StreamTokenizer st, int length)
    throws IOException, FileFormatException
    {
        LinkedList<String> l = new LinkedList<String>();

        HmmReader.readWords(st, "[");

        // Collect tokens until something other than a number or a word appears.
        // A word token such as "E-4" is appended to the previously read number,
        // so that "9.018036072144319" + "E-4" is parsed as a single value below.
        while ((st.nextToken() == StreamTokenizer.TT_NUMBER)
                || (st.ttype == StreamTokenizer.TT_WORD)) {
            String token = null;

            if (st.ttype == StreamTokenizer.TT_NUMBER)
                token = Double.toString(st.nval);

            if (st.ttype == StreamTokenizer.TT_WORD) {
                String word = st.sval;
                String nval = l.removeLast();

                token = nval + word;
            }

            l.addLast(token);
        }

        st.pushBack();
        HmmReader.readWords(st, "]");

        if (length >= 0 && l.size() != length)
            throw new FileFormatException(st.lineno(),
                    "Wrong length of number sequence");

        if (l.size() == 0)
            throw new FileFormatException(st.lineno(),
                    "Invalid empty sequence");

        double[] a = new double[l.size()];
        for (int i = 0; i < a.length; i++) {
            try {
                a[i] = Double.parseDouble(l.get(i));
            } catch (NumberFormatException e) {
                e.printStackTrace();
            }
        }

        return a;
    }

Original issue reported on code.google.com by [email protected] on 17 Aug 2009 at 2:04

Attachments:

License clarification

Dear author,

We used your code to implement TAXOMO, a taxonomy-driven modeler for
sequence mining. I would like to make the code publicly available.  

I contacted you some time ago for a clarification about the type of license
JAHMM is distributed under. On the download page I found that the code is
licensed under a BSD license, but the license file in the source says it is
licensed under the GPL (I looked in the v0.6.1 src zip file).

I cannot license my code because of this problem. Could you please confirm
that all the code is licensed under the BSD license?

Thanks a lot, 
Debora Donato

Original issue reported on code.google.com by [email protected] on 8 Apr 2010 at 7:02

setPi error - Syntax error on tokens.

What steps will reproduce the problem?
1. I imported the Hmm class.
2. I created an Hmm object (without any errors).
3. When using hmm.setPi(0, 0.95); the error appeared.

What is the expected output? What do you see instead?
No info about that; I am trying to use it for the first time.


What version of the product are you using? On what operating system?
I am using the 0.6.2 version of the jar file. I also tried it with 0.6.0. I use 
it in Eclipse 3.6.2 on Windows 7. 

Please provide any additional information below.


Original issue reported on code.google.com by [email protected] on 3 Feb 2012 at 1:48

Design error on ForwardBackwardScaledCalculator -- field not accessible

What steps will reproduce the problem?
1.
2.
3.

What is the expected output? What do you see instead?
This is a design error for the class ForwardBackwardScaledCalculator.

The variable ctFactors is declared private and does not have a getter.
The problem is that the ctFactors variable is required to calculate the
posterior probability of a hidden state given the observations.


What version of the product are you using? On what operating system?


Please provide any additional information below.

See equation 11 in:
http://xenia.media.mit.edu/~rahimi/rabiner/rabiner-errata/rabiner-errata.html
to see how the variable is in fact needed.

Original issue reported on code.google.com by [email protected] on 15 Aug 2012 at 9:18
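Pending an upstream fix, one workaround is to patch the class locally with a read-only accessor. The sketch below assumes the field is declared as double[] ctFactors; the actual type in the source may differ, so adapt accordingly.

    // Hypothetical accessor added to ForwardBackwardScaledCalculator (local patch only).
    // Assumes the private field is 'double[] ctFactors'; adapt to the real declaration.
    public double[] ctFactors() {
        return ctFactors.clone();   // defensive copy of the scaling factors
    }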

Reading HMM error "Line 6: Syntax error: unexpected token ',' (A' expected)"

What steps will reproduce the problem?
1. write hmm to textual file
2. read hmm from textual file

What is the expected output? What do you see instead?
It is expected that reading the hmm from a file works, but it doesn't,
because of the error "Line 6: Syntax error: unexpected token ',' (A' expected)".
"," is used as the decimal separator when exporting an hmm to a file, but when
reading these exports back in, jahmm doesn't understand "," as a decimal
separator and throws an error.

Please provide any additional information below.
You can fix the reading function OR the writing function to avoid this problem


I changed the writing functions this way:

Add this code:
        DecimalFormatSymbols dfs = formatter.getDecimalFormatSymbols();
        dfs.setDecimalSeparator('.');
        formatter.setDecimalFormatSymbols(dfs);

in OpdfWriter.java:60 AND HmmWriter:63

Now it uses "." instead of "," as the decimal separator when writing an export
to a file.

(I only tested it with my application, using ObservationVector and
MultiGaussianDistribution...)

kind regards,
Ben

Original issue reported on code.google.com by [email protected] on 3 Jun 2010 at 4:25

Add Viterbi decoding to CLI

This is a feature request. It would be really great to add Viterbi decoding to
the command-line interface! (Forward/backward scores for a given sequence would
also be great.)

Great job on the package, this is outstanding!

Original issue reported on code.google.com by [email protected] on 17 Jan 2011 at 1:42
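Until such a CLI action exists, Viterbi decoding is reachable through the library itself. A minimal sketch, assuming the standard jahmm imports and that Hmm exposes mostLikelyStateSequence(...) in the 0.6.x API (verify the exact name and signature against the Javadoc):

    // Decode a short integer sequence with a 2-state model (toy example).
    Hmm<ObservationInteger> hmm =
            new Hmm<ObservationInteger>(2, new OpdfIntegerFactory(2));
    List<ObservationInteger> oseq = Arrays.asList(
            new ObservationInteger(0), new ObservationInteger(1),
            new ObservationInteger(1), new ObservationInteger(0));
    int[] path = hmm.mostLikelyStateSequence(oseq);   // most probable hidden state path
    System.out.println(Arrays.toString(path));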

Compute likelihood during iteration / allow convergence to an absolute or relative tolerance.

What steps will reproduce the problem?

It seems there is no way to control the iteration of the Baum-Welch learner to
compute the likelihood of the current parameters as the algorithm iterates.
This is aggravated by the fact that there is no straightforward way to
write the likelihood function in a subclass, because the relevant parameters are
buried deep inside the iterate function.

What is the expected output? What do you see instead?

Most learning algorithms allow the likelihood to be computed during each
iteration. This allows local convergence to be checked exactly during EM, as
well as verifying correctness, because the likelihood should increase
monotonically. The KL distance measurement is only a crude approximation to this.

What version of the product are you using? On what operating system?

jahmm 0.6.2 on Ubuntu 12.04

Please provide any additional information below.

I wish you guys would host this project on GitHub instead of Google code. Then 
it would be easier for people like me to fork the project, implement things 
like this, and then send a pull request with updates.

Original issue reported on code.google.com by [email protected] on 16 Sep 2013 at 2:16
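In the meantime, convergence can be monitored by hand by driving the learner one iteration at a time. A minimal sketch, assuming initialHmm and sequences are already built by the caller and that Hmm provides lnProbability(...) in this version (otherwise Math.log(hmm.probability(seq)) can substitute for short sequences):

    BaumWelchLearner bwl = new BaumWelchLearner();
    Hmm<ObservationInteger> hmm = initialHmm;          // hypothetical starting model
    double previous = Double.NEGATIVE_INFINITY;
    for (int it = 0; it < 100; it++) {
        hmm = bwl.iterate(hmm, sequences);             // one EM step
        double logLik = 0.0;
        for (List<ObservationInteger> seq : sequences)
            logLik += hmm.lnProbability(seq);          // should increase monotonically
        if (logLik - previous < 1e-6)                  // absolute tolerance check
            break;
        previous = logLik;
    }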

NaN values after Baum-Welch learning

Hello all,

My observation sequences can be read from a file; I have 501 observations, and
the length of the longest sequence is 9.

My initial HMM is as follows:

static Hmm<ObservationInteger > buildInitHmm()
    {   

        Hmm <ObservationInteger > hmm = new Hmm <ObservationInteger >(9,new OpdfIntegerFactory (9));


        hmm.setPi (0, 1);
        hmm.setPi (1, 0);
        hmm.setPi (2, 0);
        hmm.setPi (3, 0);
        hmm.setPi (4, 0);
        hmm.setPi (5, 0);
        hmm.setPi (6, 0);
        hmm.setPi (7, 0);
        hmm.setPi (8, 0);

        hmm.setOpdf (0, new OpdfInteger (new double [] {17.68,57.26,0.0,10.74,0.0,13.52,0.0,0.0,0.20,0.0,0.40,0.0,0.20,0.0,0.0}));
        hmm.setOpdf (1, new OpdfInteger (new double [] {24,27.98,0.20,6.35,0.0,8.14,0.0,25.80,5.15,0.0,0.40,0.20,1.78,0.0,0.0}));
        hmm.setOpdf (2, new OpdfInteger (new double [] {21.76,26.99,0.0,10.25,1.26,11.93,0.84,12.97,10.65,0.0,0.84,0.21,2.30,0.0,0.0}));
        hmm.setOpdf (3, new OpdfInteger (new double [] {28.57,21.82,2.08,19.48,3.64,9.87,1.30,9.35,2.59,0.0,0.78,0.0,0.52,0.0,0.0}));
        hmm.setOpdf (4, new OpdfInteger (new double [] {38.37,15.87,2.58,14.76,2.21,12.92,1.11,10.70,0.37,0.0,0.0,0.37,0.74,0.0,0.0}));
        hmm.setOpdf (5, new OpdfInteger (new double [] {22.87,25.71,1.14,12,0.57,16.57,0.57,20,0.57,0.0,0.0,0.0,0.0,0.0,0.0}));
        hmm.setOpdf (6, new OpdfInteger (new double [] {33.79,25.35,1.41,8.45,0.0,15.49,0.0,15.49,0.0,0.0,0.0,0.0,0.0,0.0,0.0}));
        hmm.setOpdf (7, new OpdfInteger (new double [] {10.71,50,0.0,3.57,0.0,17.86,0.0,17.86,0.0,0.0,0.0,0.0,0.0,0.0,0.0}));
        hmm.setOpdf (8, new OpdfInteger (new double [] {0,50,0.0,0.0,0.0,0.0,0.0,50,0.0,0.0,0.0,0.0,0.0,0.0,0.0}));



        hmm.setAij (0, 1, 0.5);
        hmm.setAij (0, 0, 0.5);
        hmm.setAij (1, 2, 0.5);
        hmm.setAij (1, 1, 0.5);
        hmm.setAij (2, 3, 0.5);
        hmm.setAij (2, 2, 0.5);
        hmm.setAij (3, 4, 0.5);
        hmm.setAij (3, 3, 0.5);
        hmm.setAij (4, 5, 0.5);
        hmm.setAij (4, 4, 0.5);
        hmm.setAij (5, 6, 0.5);
        hmm.setAij (5, 5, 0.5);
        hmm.setAij (6, 7, 0.5);
        hmm.setAij (6, 6, 0.5);
        hmm.setAij (7, 8, 0.5);
        hmm.setAij (7, 7, 0.5);
        hmm.setAij (8, 8, 1);


        return hmm;
    }


After feeding these parameters to BaumWelchLearner, the system generates a
new HMM where all states have the following values:

State 0
   Pi: NaN
   Aij:? ? ? ? ? ? ? ? ?
   Opdf: Integer distribution --- 0? ? ? ? ? ? ? ? ? 0 0 0 0 0


Please tell me the cause of this problem and help me to correct it.

Original issue reported on code.google.com by [email protected] on 20 Nov 2012 at 1:37

Decoding results in sequence of 0s

What steps will reproduce the problem?
1. Train on a large-vocabulary corpus. In my case the vocabulary was 15000 after
replacing words with frequency < 2 by OOV (with 80 states)
2. Decode on the same corpus.
3. Some of the sentences will have a long sequence of 0s (usually towards the
end)

What is the expected output? What do you see instead?
Expected to see different states for different words. Instead, a long sequence
of state 0's is seen.

What version of the product are you using? On what operating system?
version 0.6.1 on Ubuntu 12.04

Please provide any additional information below.
You can see the states decoded for the whole corpus below (search for a 
sequence of 0's).

Original issue reported on code.google.com by [email protected] on 30 Jul 2012 at 4:31

Attachments:

Funny characters in mean values

Sometimes I get funny characters in the model:

digraph G {
    0 -> 0 [label=0.5];
    0 -> 6 [label=0.5];
    1 -> 6 [label=1];
    2 -> 1 [label=0.1];
    2 -> 2 [label=0.3];
    2 -> 4 [label=0.1];
    2 -> 6 [label=0.3];
    2 -> 8 [label=0.1];
    2 -> 9 [label=0.1];
    3 -> 2 [label=1];
    4 -> 2 [label=0.09];
    4 -> 4 [label=0.27];
    4 -> 6 [label=0.09];
    4 -> 7 [label=0.45];
    4 -> 12 [label=0.09];
    5 -> 5 [label=0.43];
    5 -> 6 [label=0.57];
    6 -> 0 [label=0.11];
    6 -> 2 [label=0.11];
    6 -> 3 [label=0.06];
    6 -> 4 [label=0.11];
    6 -> 5 [label=0.11];
    6 -> 6 [label=0.39];
    6 -> 7 [label=0.06];
    6 -> 11 [label=0.06];
    7 -> 4 [label=0.12];
    7 -> 6 [label=0.06];
    7 -> 7 [label=0.41];
    7 -> 8 [label=0.18];
    7 -> 9 [label=0.06];
    7 -> 12 [label=0.18];
    8 -> 2 [label=0.12];
    8 -> 5 [label=0.25];
    8 -> 7 [label=0.12];
    8 -> 8 [label=0.12];
    8 -> 9 [label=0.12];
    8 -> 12 [label=0.25];
    9 -> 7 [label=0.25];
    9 -> 9 [label=0.25];
    9 -> 10 [label=0.5];
    10 -> 10 [label=1];
    11 -> 1 [label=1];
    12 -> 4 [label=0.15];
    12 -> 7 [label=0.08];
    12 -> 8 [label=0.23];
    12 -> 10 [label=0.08];
    12 -> 12 [label=0.46];
    0 [shape=circle, label="0 - [ Multi-variate Gaussian distribution --- Mean: [ � � � � � � � � � ] ]"];
    1 [shape=circle, label="1 - [ Multi-variate Gaussian distribution --- Mean: [ � � � � � � � � � ] ]"];
    2 [shape=circle, label="2 - [ Multi-variate Gaussian distribution --- Mean: [ � � � � � � � � � ] ]"];
    3 [shape=circle, label="3 - [ Multi-variate Gaussian distribution --- Mean: [ � � � � � � � � � ] ]"];
    4 [shape=circle, label="4 - [ Multi-variate Gaussian distribution --- Mean: [ � � � � � � � � � ] ]"];
    5 [shape=circle, label="5 - [ Multi-variate Gaussian distribution --- Mean: [ � � � � � � � � � ] ]"];
    6 [shape=circle, label="6 - [ Multi-variate Gaussian distribution --- Mean: [ � � � � � � � � � ] ]"];
    7 [shape=circle, label="7 - [ Multi-variate Gaussian distribution --- Mean: [ � � � � � � � � � ] ]"];
    8 [shape=circle, label="8 - [ Multi-variate Gaussian distribution --- Mean: [ � � � � � � � � � ] ]"];
    9 [shape=circle, label="9 - [ Multi-variate Gaussian distribution --- Mean: [ � � � � � � � � � ] ]"];
    10 [shape=circle, label="10 - [ Multi-variate Gaussian distribution --- Mean: [ � � � � � � � � � ] ]"];
    11 [shape=circle, label="11 - [ Multi-variate Gaussian distribution --- Mean: [ � � � � � � � � � ] ]"];
    12 [shape=circle, label="12 - [ Multi-variate Gaussian distribution --- Mean: [ � � � � � � � � � ] ]"];
}

What could be the reason? Please help.


thanks
asutosh


Original issue reported on code.google.com by [email protected] on 5 Oct 2010 at 5:45

Reading HMM (with OpdfMultiGaussian) error "Line 8: Syntax error: unexpected token ']' (State' expected)"

What steps will reproduce the problem?
1. do textual export of your hmm (using OpdfMultiGaussian)
2. try to read your textual export again (with OpdfMultiGaussian)


What is the expected output? What do you see instead?
the same hmm that was exportet... but there is an error while reading in
the OpdfMultiGaussian. Jahmm needs to read one more "]"

Please provide any additional information below.
error output was: "Line 8: Syntax error: unexpected token ']' (State'
expected)"

To fix this problem, add the following line in OpdfMultiGaussian.java:73
HmmReader.readWords(st, "]");

kind regards,
Ben

Original issue reported on code.google.com by [email protected] on 3 Jun 2010 at 3:49

KMeansLearner taking inordinately long

Hello,

I want to train an HMM with ObservationVectors of size 9, a sequence size of 5,
500 sequences, and number of states = 10.

example ObservationVector :
{1.377,10.0,0.0,301.5,1214.5685016666664,315203.6001666666,3.5076600116666667,20.395,415.0},
{1.158,10.0,0.0,381.9,1214.5180483333336,345942.0825000001,3.950539710000001,20.7745,415.0},
{0.972,10.0,0.0,321.6,1207.6763050000002,348111.70533333323,5.423664218333333,21.9235,415.0},
{1.223,10.0,0.0,321.6,1211.0414300000002,348027.776,5.535122003333333,22.4065,415.0},
{1.062,10.0,0.0,381.9,1214.4290457627121,347750.5875,5.732363891666665,23.017,415.0},
{0.764,10.0,0.0,405.0,1212.8318266666668,355970.02233333327,5.565356888333333,22.0345,415.0},
{0.643,10.0,0.0,425.25,1212.933148333333,356178.29949999996,5.563183511666667,22.0185,415.0},
{0.726,10.0,0.0,344.76,1214.2682116666667,355337.1081666666,5.6176908999999995,21.884,415.0},
{0.875,10.0,0.0,405.6,1213.3473783333336,352905.9781666667,5.613016170000001,22.504,415.0},
{0.806,10.0,0.0,365.04,1213.3571433333332,348112.71266666666,5.706252395000002,22.6875,415.0},
{0.75,10.0,0.0,385.32,1214.16201,355867.2406666667,5.5668656883333325,22.8065,415.0},
{0.617,10.0,0.0,384.18,1213.9854100000007,356745.9403333332,5.529165541666666,22.4415,415.0},
{0.595,10.0,0.0,408.0,1213.674531666667,356314.3568333333,5.576403996666668,22.2145,415.0},

The program virtually hangs after printing numberOfHiddenStates: 10.

please help.
====
public Hmm<ObservationVector> learnHMM(List<List<ObservationVector>> sequences)
{
    int numberOfHiddenStates = 10;
    Hmm<ObservationVector> trainedHmm = null;
    do {
        System.out.println("numberOfHiddenStates: " + numberOfHiddenStates);
        KMeansLearner<ObservationVector> kml = new KMeansLearner<ObservationVector>(
                numberOfHiddenStates, new OpdfMultiGaussianFactory(obsVectorSize), sequences);
        trainedHmm = kml.learn();
        BaumWelchLearner bwl = new BaumWelchLearner();
        bwl.setNbIterations(20);
        trainedHmm = bwl.learn(trainedHmm, sequences);
        numberOfHiddenStates++;
    } while (Double.isNaN(trainedHmm.getPi(0)) && numberOfHiddenStates < 20);

    return trainedHmm;
}

Original issue reported on code.google.com by [email protected] on 1 Oct 2010 at 4:46

I can't get the KullbackLeiblerDistance

Hey guys, 

I'm trying to implement a simple program with Jahmm. I create two 
Hmm<ObservationInteger> variables, I introduce their data in order to model the 
HMM and I try to see their difference with these lines:

KullbackLeiblerDistanceCalculator cal = new KullbackLeiblerDistanceCalculator();
double dist = cal.distance(learntHmm, originalHmm);

But I can't compile it due to this error:

method distance in class 
be.ac.ulg.montefiore.run.jahmm.toolbox.KullbackLeiblerDistanceCalculator cannot 
be applied to given types;
  required: be.ac.ulg.montefiore.run.jahmm.Hmm<O>,be.ac.ulg.montefiore.run.jahmm.Hmm<? super O>
  found: be.ac.ulg.montefiore.run.jahmm.Hmm<capture#11 of ?>,be.ac.ulg.montefiore.run.jahmm.Hmm<be.ac.ulg.montefiore.run.jahmm.ObservationInteger>
  reason: no instance(s) of type variable(s) O exist so that argument type be.ac.ulg.montefiore.run.jahmm.Hmm<be.ac.ulg.montefiore.run.jahmm.ObservationInteger> conforms to formal parameter type be.ac.ulg.montefiore.run.jahmm.Hmm<? super O>

Any suggestions?


Thanks in advance,

Original issue reported on code.google.com by [email protected] on 5 Nov 2011 at 1:46
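The compiler message suggests one of the two models is typed with a wildcard (Hmm<?>), so no observation type O can be inferred. A minimal sketch of the fix, where buildOriginalHmm and buildLearntHmm stand in for whatever code produces the two models (hypothetical helpers, not library calls):

    // Declare both models with the concrete observation type so that O is inferable.
    Hmm<ObservationInteger> originalHmm = buildOriginalHmm();   // hypothetical helper
    Hmm<ObservationInteger> learntHmm = buildLearntHmm();       // hypothetical helper
    KullbackLeiblerDistanceCalculator cal = new KullbackLeiblerDistanceCalculator();
    double dist = cal.distance(learntHmm, originalHmm);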

Matrix is not full rank - error when there is zero variance in any state

What steps will reproduce the problem?
1. use kmeans algorithm to learn initial hmm with OpdfMultiGaussianDistribution 
and two dimensional vectors
2. try to learn sequences where it somehow comes to states without variance in 
the kmeans algorithm. For example when you try to kmeans-learn an hmm with two 
states using one sequence with two observations ([ 0,136 0 ]; [ 9,595 0,5 ]; ).
Now the first observation will represent state 1 and the second observation
will represent state 2. Both states have no variance in this case.
States without variance could also happen with more observations, by chance,
and it seems this cannot easily be predicted.

What is the expected output? What do you see instead?
It should also learn observations that end up in states without variance.
States without variance produce a "Matrix is not full rank" error.

Please provide any additional information below.

Jahmm's algorithms run into a problem when no variance exists somewhere for an
OpdfMultiGaussian function (a mathematical problem). Zero variance is not
illegal by itself (it can legally occur in some cases; actually it should only
be zero in the limit), but the algorithms used cannot calculate with 0 as a
variance value. Zero variance in one dimension would mean that this dimension
effectively doesn't exist; the distance (= probability) to the mean of the
Gaussian curve cannot be calculated in this dimension. The variance would not
be zero (but probably as close to zero as possible) as soon as a little
variance exists (-> ">0")... Therefore we could set "d" (in
SimpleMatrix.java:260) to "0.00000000000001", for example, if it is equal to 0
at this position. Of course this means that we manipulate the Gaussian
distribution, but since this is about statistics and probability the
manipulation is VERY small. Depending on your system and your needs this could
be a good and simple workaround for the "Matrix is not full rank" error.

Go to SimpleMatrix.java:260 and add:
        if(d==0){
            d = 0.0000000001;
        }

kind regards,
Ben

Original issue reported on code.google.com by [email protected] on 9 Jun 2010 at 1:13

jahmmViz won't accept any sequence

What steps will reproduce the problem?
1. download/use jahmmviz here 
http://www.run.montefiore.ulg.ac.be/~francois/software/jahmm/jahmmViz/
2. use the test sequences given in the same page
3.

What is the expected output? What do you see instead?
it is supposed to show or display the sequences and can be used for the other 
functions of the jahmmViz

What version of the product are you using? On what operating system?
I'm using jahmmviz 0.2.5 on windows 8

Please provide any additional information below.


Original issue reported on code.google.com by [email protected] on 2 Mar 2014 at 1:06

Mixture of Multivariate Gaussians

I want to use the class HMM to model left-right HMMs in a speech
recognition system. 
The problem is: the state pdf used in this case is a mixture of
multivariate Gaussians and, as I read in the JavaDocs, the Opdf interface is
not implemented this way by any of the classes described.
Has someone worked on it?

Original issue reported on code.google.com by denisealves88 on 25 May 2010 at 7:42

When observations are too many.

This may be a kind of stupid question:

There are many observations from each latent variable. In this case, we can
integrate all the observations (each of dimension k) into one observation
variable with k^n values (n observations or sensors).

When k or n grows large enough, b_i(o) becomes small, which means alpha and
beta shrink exponentially to infinitesimal values along the timeline, which
makes the Baum-Welch algorithm fail to learn the parameters.

Here is my question: do you know how to learn the parameters when the
observation probability is low, or when there are too many possible
observation outputs?



Original issue reported on code.google.com by [email protected] on 11 Jul 2011 at 12:15

Variance must be positive problem

Hi! First of all, I want to thank the authors for making such a good and 
simple implementation of Hmm in Java. 

However, I'm encountering a problem. I want to make a voice recognition 
software and use HMM to classify the data.

An example of the feature vector(training data) I used is like this:

739.095423 -68.9217791 -41.94023848 190.343314 70.14624796 19.4320868 101.1893881 60.00366733 1.083977616 31.19825832 44.07300049 19.78601438

To learn the HMMs, my code is:

public Hmm<ObservationReal> learnBleh(List<List<ObservationReal>> sequences) {
    int numberOfHiddenStates = 12;
    do {
        KMeansLearner<ObservationReal> kml = new KMeansLearner<ObservationReal>(
                numberOfHiddenStates, new OpdfGaussianFactory(), sequences);

        trainedHmm = kml.learn();
        BaumWelchLearner bwl = new BaumWelchLearner();
        bwl.setNbIterations(20);
        trainedHmm = bwl.learn(trainedHmm, sequences);
        numberOfHiddenStates++;
    } while (Double.isNaN(trainedHmm.getPi(0)) && numberOfHiddenStates < 50);

    return trainedHmm;
}

How come I get an error:
Exception in thread "main" java.lang.IllegalArgumentException: Variance must be positive
        at be.ac.ulg.montefiore.run.distributions.GaussianDistribution.<init>(GaussianDistribution.java:43)
        at be.ac.ulg.montefiore.run.jahmm.OpdfGaussian.fit(OpdfGaussian.java:124)
        at be.ac.ulg.montefiore.run.jahmm.OpdfGaussian.fit(OpdfGaussian.java:93)
        at be.ac.ulg.montefiore.run.jahmm.learn.KMeansLearner.learnOpdf(KMeansLearner.java:165)
        at be.ac.ulg.montefiore.run.jahmm.learn.KMeansLearner.iterate(KMeansLearner.java:67)
        at be.ac.ulg.montefiore.run.jahmm.learn.KMeansLearner.learn(KMeansLearner.java:96)

I don't know what the error means. Can somebody help me? Thanks.
All help is greatly appreciated!



Original issue reported on code.google.com by [email protected] on 13 Jan 2010 at 5:52

Hmm<ObservationVector> Implementation

Hi,
  I need an example implementation of an Hmm<ObservationVector>, especially the usage of setOpdf() of Hmm<ObservationVector> with specified vector inputs.

Best regards,
Arthur

Original issue reported on code.google.com by [email protected] on 17 Jul 2012 at 10:45
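A minimal sketch of what such a model can look like, assuming the OpdfMultiGaussian(mean, covariance) constructor of the 0.6.x API (check the Javadoc for the exact signature):

    // Two-state Hmm over 2-dimensional observation vectors.
    Hmm<ObservationVector> hmm =
            new Hmm<ObservationVector>(2, new OpdfMultiGaussianFactory(2));

    hmm.setPi(0, 0.6);
    hmm.setPi(1, 0.4);

    hmm.setOpdf(0, new OpdfMultiGaussian(
            new double[] { 0.0, 0.0 },                          // mean vector
            new double[][] { { 1.0, 0.0 }, { 0.0, 1.0 } }));    // covariance matrix
    hmm.setOpdf(1, new OpdfMultiGaussian(
            new double[] { 3.0, 3.0 },
            new double[][] { { 2.0, 0.5 }, { 0.5, 2.0 } }));

    hmm.setAij(0, 0, 0.7);  hmm.setAij(0, 1, 0.3);
    hmm.setAij(1, 0, 0.2);  hmm.setAij(1, 1, 0.8);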

Training about HMM with more than one observations

In wiki, it was wrote:
    * Hidden Markov Model learning algorithms
    * The learning is based on a set of observation sequences (not only one).
        # K-Means learning;
        # Baum-Welch learning with scaling (meaning that a learning based on long observations sequences don't generate underflows). 

Can you give some tips on how to use those two methods (K-Means learning and
Baum-Welch learning with scaling) to learn the parameters of an HMM, or point
me to some materials about it?

Thanks.

Original issue reported on code.google.com by [email protected] on 6 Sep 2011 at 7:15
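A minimal sketch of the two-stage learning described in the wiki, run on a set of sequences. It assumes BaumWelchScaledLearner is the scaled variant in this version, that loadSequences() is a placeholder for your own data loading, and that the observations are integers over 8 symbols:

    List<List<ObservationInteger>> sequences = loadSequences();    // hypothetical loader
    KMeansLearner<ObservationInteger> kml = new KMeansLearner<ObservationInteger>(
            4, new OpdfIntegerFactory(8), sequences);               // 4 states, 8 symbols
    Hmm<ObservationInteger> initial = kml.learn();                  // K-Means initialisation
    BaumWelchScaledLearner bwl = new BaumWelchScaledLearner();      // scaling avoids underflow
    bwl.setNbIterations(20);
    Hmm<ObservationInteger> trained = bwl.learn(initial, sequences);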

Upload to Maven Central / Jcenter

Hello,

Can you please upload this library to Maven Central / Jcenter ? 

I already put it myself in JCenter but you should be the owner of that 
repository on JCenter, not someone external to your project.

Here's my uploaded version: 
https://bintray.com/galex/maven/be.ac.ulg.montefiore.run.jahmm/view

Regards,
Alex

Original issue reported on code.google.com by [email protected] on 27 Jan 2015 at 12:33

Reader and Writer for ObservationDiscrete and OpdfDiscrete

Problem:

There are no Readers and Writers for ObservationDiscrete and OpdfDiscrete. Also 
the CLI application does not support these types.


Solution:

I've implemented the missing classes and modified the CLI application. A patch
is included; here is an overview of the changes in this patch:

src/main/java/be/ac/ulg/montefiore/run/jahmm/io/ObservationDiscreteReader.java
ObservationReader for ObservationDiscrete<E>

src/main/java/be/ac/ulg/montefiore/run/jahmm/io/ObservationDiscreteWriter.java
ObservationWriter for ObservationDiscrete<E>

src/main/java/be/ac/ulg/montefiore/run/jahmm/io/ObservationSequencesReader.java
Added wordChars to the SyntaxTable for the recognition of java identifiers

src/main/java/be/ac/ulg/montefiore/run/jahmm/io/OpdfDiscreteReader.java
OpdfReader for OpdfDiscrete<E>

src/main/java/be/ac/ulg/montefiore/run/jahmm/io/OpdfDiscreteWriter.java
OpdfWriter for OpdfDiscrete<E>

src/main/java/be/ac/ulg/montefiore/run/jahmm/io/OpdfGenericReader.java
Parameterization for OpdfReader r

src/main/java/be/ac/ulg/montefiore/run/jahmm/io/OpdfIntegerWriter.java
Use of OpdfWriter.write(Writer, double[]) in OpdfIntegerWriter.write(Writer,
OpdfInteger) to get formatted probabilities

src/main/java/be/ac/ulg/montefiore/run/jahmm/io/OpdfWriter.java
Extra decimal in the formatter to get the same number of decimals as the 
formatter in the HmmWriter

src/main/java/be/ac/ulg/montefiore/run/jahmm/OpdfDiscrete.java
Parameterization for ObservationDiscrete

src/main/java/be/ac/ulg/montefiore/run/jahmm/apps/cli/CommandLineArguments.java
Option 'discrete' added to -opdf and argument VALUES_CLASS (-c) added.

src/main/java/be/ac/ulg/montefiore/run/jahmm/apps/cli/RelatedObjs.java
Parameterization CentroidFactory removed from RelatedObjs and moved to 
subinterface RelatedCentroidObjs

src/main/java/be/ac/ulg/montefiore/run/jahmm/apps/cli/RelatedCentroidObjs.java
New subinterface of RelatedObjs for Observation types that implement 
CentroidFactory

src/main/java/be/ac/ulg/montefiore/run/jahmm/apps/cli/Types.java
New function relatedCentroidObjs in Types class
New class DiscreteRelatedObjects
Implemented interface changed to RelatedCentroidObjs for existing classes


src/main/java/be/ac/ulg/montefiore/run/jahmm/apps/cli/BWActionHandler.java
src/main/java/be/ac/ulg/montefiore/run/jahmm/apps/cli/CreateActionHandler.java
src/main/java/be/ac/ulg/montefiore/run/jahmm/apps/cli/GenerateActionHandler.java
src/main/java/be/ac/ulg/montefiore/run/jahmm/apps/cli/KLActionHandler.java
Parameterization CentroidFactory removed

src/main/java/be/ac/ulg/montefiore/run/jahmm/apps/cli/KMeansActionHandler.java
Changed to relatedCentroidObjs()

src/main/java/be/ac/ulg/montefiore/run/jahmm/apps/cli/HelpActionHandler.java
Help for new -opdf option and -c argument

src/main/java/be/ac/ulg/montefiore/run/jahmm/apps/cli/PrintActionHandler.java
Modified opdfReader code, require -opdf argument for DiscreteODF input files


Original issue reported on code.google.com by [email protected] on 27 Oct 2011 at 11:30

Attachments:

Using jahmm for classification

Hi all,
I'm using jahmm 6.1 in NetBeans IDE 6.9.1 for classification. Since I'm new in
this area I need some hints for using this useful package. I have a dataset,
an Excel file of 1000 rows where each row has 15 columns (float values), and I
should classify these data into two classes.
I wrote code to read the data from the Excel file and then tried to train with
KMeansLearner, but I don't know how I should determine the class of each data
point! Although I know whether each row belongs to class 1 or 2, I don't know
how I should learn an HMM based on that!
Please let me know your hints; it's urgent for me.
This is part of the code that I wrote.

Vector sequences = new Vector();
for (int i = 0; i < 20; i++) {
    for (int j = 0; j < 15; j++)
        vec[j] = myArray[i][j];
    ObservationVector obs = new ObservationVector(vec);
    sequences.add(obs);
}
KMeansLearner<ObservationVector> kml = new KMeansLearner<ObservationVector>(2,
        new OpdfMultiGaussianFactory(1000), sequences);
hmm = kml.learn();
BaumWelchLearner bwl = new BaumWelchLearner();
hmm = bwl.learn(hmm, sequences);
for (int i = 0; i < 20; i++) {
    hmm = bwl.iterate(hmm, sequences);
}

It gave me an error like this:
Exception in thread "main" java.lang.ClassCastException: 
be.ac.ulg.montefiore.run.jahmm.ObservationVector cannot be cast to 
java.util.List
        at be.ac.ulg.montefiore.run.jahmm.learn.KMeansLearner.flat(KMeansLearner.java:198)
        at be.ac.ulg.montefiore.run.jahmm.learn.KMeansLearner.<init>(KMeansLearner.java:48)


thanks a lot. 

Original issue reported on code.google.com by [email protected] on 20 Dec 2010 at 4:22
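The ClassCastException above comes from handing KMeansLearner a flat list of observations; it expects a list of sequences, each of which is itself a list of observations. A minimal sketch of that structure, reusing the reporter's myArray and assuming each spreadsheet row becomes its own one-observation sequence (how rows should really be grouped into sequences depends on the data); note also that the factory argument is the vector dimension (15 here), not the row count:

    List<List<ObservationVector>> sequences = new ArrayList<List<ObservationVector>>();
    for (int i = 0; i < 20; i++) {
        double[] vec = new double[15];
        for (int j = 0; j < 15; j++)
            vec[j] = myArray[i][j];
        List<ObservationVector> sequence = new ArrayList<ObservationVector>();
        sequence.add(new ObservationVector(vec));          // one-observation sequence
        sequences.add(sequence);
    }
    KMeansLearner<ObservationVector> kml = new KMeansLearner<ObservationVector>(
            2, new OpdfMultiGaussianFactory(15), sequences);
    Hmm<ObservationVector> hmm = kml.learn();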

ArrayIndexOutOfBoundsException in KMeansLearner.learn method

I'm a newbie with Jahmm and I've tried to implement an HMM that receives a list
of sequences corresponding to a trajectory. Each value is composed of 4
variables: x, y, and dx, dy (velocity vector).

The output states are 2: Right and Left.

I have an ArrayIndexOutOfBoundsException exception in the learn method of 
KMeansLearner:


Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: 330
    at be.ac.ulg.montefiore.run.jahmm.OpdfInteger.fit(OpdfInteger.java:116)
    at be.ac.ulg.montefiore.run.jahmm.learn.KMeansLearner.learnOpdf(KMeansLearner.java:165)
    at be.ac.ulg.montefiore.run.jahmm.learn.KMeansLearner.iterate(KMeansLearner.java:67)
    at be.ac.ulg.montefiore.run.jahmm.learn.KMeansLearner.learn(KMeansLearner.java:96)
    at Detector.main(Detector.java:97)

This is the code:

int inputStates = 4;
KMeansLearner kmeansRight = new KMeansLearner(inputStates, new OpdfIntegerFactory(inputStates), obsSequenceRight);
Hmm derHMM = kmeansRight.learn();

KMeansLearner kmeansLeft = new KMeansLearner(inputStates, new OpdfIntegerFactory(inputStates), obsSequenceLeft);
Hmm izqHMM = kmeansLeft.learn();

List<List<Integer>> lsTest = detector.leerDirectorio(filename);
for (int i = 0; i < lsTest.size(); i++) {
    double derProb = derHMM.probability(lsTest);
    double izqProb = izqHMM.probability(lsTest);
}

Thanks for the help.

Original issue reported on code.google.com by [email protected] on 11 Jan 2015 at 4:34
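The index 330 in the trace points at an observation value larger than the number of symbols the OpdfIntegerFactory was created with: OpdfIntegerFactory(n) covers the symbols 0..n-1, so n must be at least the largest observed value plus one, independently of the number of states. A minimal sketch, assuming obsSequenceRight holds List<List<ObservationInteger>> and that nbSymbols is computed from your data:

    int nbStates = 4;
    int nbSymbols = 331;    // must cover every observed value (largest value + 1)
    KMeansLearner<ObservationInteger> kmeansRight = new KMeansLearner<ObservationInteger>(
            nbStates, new OpdfIntegerFactory(nbSymbols), obsSequenceRight);
    Hmm<ObservationInteger> derHMM = kmeansRight.learn();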

Exception "Matrix not positive defined" when calculating probability of a sequence of observations

Hi! First I want to thank the authors for their great job.

Now my problem. I'm using an Hmm<ObservationVector> object and therefore
every state has an OpdfMultiGaussian. When I try to calculate the
probability of a sequence of observations with this hmm (the probability()
function) I get the following exception:

Exception in thread "main" java.lang.IllegalArgumentException: Matrix is not positive defined

    at be.ac.ulg.montefiore.run.distributions.SimpleMatrix.decomposeCholesky(SimpleMatrix.java:241)
    at be.ac.ulg.montefiore.run.distributions.MultiGaussianDistribution.covarianceL(MultiGaussianDistribution.java:101)
    at be.ac.ulg.montefiore.run.distributions.MultiGaussianDistribution.covarianceInv(MultiGaussianDistribution.java:112)
    at be.ac.ulg.montefiore.run.distributions.MultiGaussianDistribution.probability(MultiGaussianDistribution.java:159)
    at be.ac.ulg.montefiore.run.jahmm.OpdfMultiGaussian.probability(OpdfMultiGaussian.java:94)
    at be.ac.ulg.montefiore.run.jahmm.OpdfMultiGaussian.probability(OpdfMultiGaussian.java:1)
    at be.ac.ulg.montefiore.run.jahmm.ForwardBackwardCalculator.computeAlphaInit(ForwardBackwardCalculator.java:111)
    at be.ac.ulg.montefiore.run.jahmm.ForwardBackwardCalculator.computeAlpha(ForwardBackwardCalculator.java:92)
    at be.ac.ulg.montefiore.run.jahmm.ForwardBackwardCalculator.<init>(ForwardBackwardCalculator.java:63)
    at be.ac.ulg.montefiore.run.jahmm.ForwardBackwardCalculator.<init>(ForwardBackwardCalculator.java:81)
    at be.ac.ulg.montefiore.run.jahmm.Hmm.probability(Hmm.java:248)
    at hmmcrossings.testMultiGaussianHmm.main(testMultiGaussianHmm.java:75)

Following I copy the hmm:

Hmm v1.0

NbStates 2

State
Pi 1
A 0.5 0.5 
MultiGaussianOPDF [ [ 4,972 0 ] [ [ 36.636 -0 ] [ -0 0 ] ] ]

State
Pi 0
A 0 0 
MultiGaussianOPDF [ [ 4,987 147 ] [ [ 6.423 93.219 ] [ 93.219 56.051 ] ] ]

I've tried sequences of observations with just one element and with hundreds
of them. I guess the problem is in the covariance matrix, but I'm not sure.

I'm working on Windows.

What am I doing wrong? Any solution or advice?

Thank you for your help!


Original issue reported on code.google.com by [email protected] on 12 Aug 2009 at 9:45
