kalmannet_tsp's People

Contributors

kalmannet, xiaoyongni


kalmannet_tsp's Issues

A stupid question

Hi, I am trying to understand how to design/determine the Q_gen matrix in the constant-acceleration example (main_linear_CA), but sorry, I cannot work it out.

Q_gen = q2 * torch.tensor([[1/20*delta_t_gen**5, 1/8*delta_t_gen**4, 1/6*delta_t_gen**3],
                           [1/8*delta_t_gen**4, 1/3*delta_t_gen**3, 1/2*delta_t_gen**2],
                           [1/6*delta_t_gen**3, 1/2*delta_t_gen**2, delta_t_gen]]).float()
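
For context, a sketch (my own, not from the repository) of where these entries come from: for a constant-acceleration model with state [position, velocity, acceleration] driven by white noise on the jerk, Q is the integral of e^{Aτ} G q² Gᵀ e^{Aᵀτ} over one sampling interval, which a quadrature check reproduces.

import torch

# Numerically verify Q = ∫_0^Δt Φ(τ) G q² Gᵀ Φ(τ)ᵀ dτ with Φ(τ) = e^{Aτ};
# dt and q2 here are hypothetical stand-ins for delta_t_gen and q2 above.
dt, q2, steps = 0.1, 1.0, 1000
A = torch.tensor([[0., 1., 0.], [0., 0., 1.], [0., 0., 0.]])
G = torch.tensor([[0.], [0.], [1.]])

Q = torch.zeros(3, 3)
h = dt / steps
for k in range(steps):
    tau = (k + 0.5) * h
    Phi = torch.matrix_exp(A * tau)        # transition over an interval of length tau
    Q += (Phi @ G) @ (Phi @ G).T * q2 * h  # midpoint-rule accumulation of the integral

# Q now matches q2 * [[dt^5/20, dt^4/8, dt^3/6], [dt^4/8, dt^3/3, dt^2/2],
# [dt^3/6, dt^2/2, dt]] up to quadrature error.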

Each iteration does not update the information of the previous moment

for j in range(0, self.N_B):
    n_e = random.randint(0, self.N_E - 1)
    y_training = train_input[n_e, :, :]
    self.model.InitSequence(self.ssModel.m1x_0)

As shown above, each iteration calls InitSequence, so is the information from the previous moment retained?
The paper shows that at the i-th iteration, x_t is fed into the network, but following the code above, the initial unit vector would be fed into the network instead. Could you please answer this question? It has bothered me for a long time.
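
A toy sketch of the two readings being contrasted (entirely hypothetical, just to make the question concrete): resetting once per trajectory still lets the recursion carry x_{t|t} forward between time steps within that trajectory.

class ToyFilter:
    def init_sequence(self, m1x_0):
        self.x = m1x_0                    # prior for a fresh trajectory
    def step(self, y):
        self.x = 0.9 * self.x + 0.1 * y   # stand-in for the KalmanNet update
        return self.x

f = ToyFilter()
for trajectory in ([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]):
    f.init_sequence(0.0)                  # reset happens once per trajectory
    for y in trajectory:                  # x carries over between time steps
        x_posterior = f.step(y)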

A stupid question about CV_test

Apologies for asking: CV_train uses H with dim(6,4) (torch.Size([1200, 6, 4])), but the test uses H with dim(1,2) (torch.Size([600, 1, 2])). The settings are modified from the linear_CA code, but I cannot find any H settings. Where can I find the H setting used for the test?

The difference between KF (model fit) and KF (model mismatch)

Dear author, hello. I have two questions that I would like to ask you. First: when I run the main_linear_canonical.py file in the main code package, the difference between KF (model fit) and KF (model mismatch) is a little more than 3 [dB], whereas it should be greater than 20 [dB]. But when I set F to the identity matrix, I get the same 3 [dB] as in the paper. Why is this? Second: when I run the main_linear.py file in the architecture #1 code package, the difference between KF (model fit) and KF (model mismatch) is again a little more than 3 [dB], whereas it should be around 24 [dB]. Why is this?

[screenshot of the results]

Train network with different values for m and n

With a Kalman filter (from filterpy, for example) it's possible to have different dimensions for the state and the observation, so you can have a problem where your state has dimension 4 but your observation has dimension 7. Is it possible to do the same with KalmanNet?
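
For reference, a sketch with filterpy (the library the question mentions), assuming a 4-dimensional state observed through a 7-dimensional measurement:

import numpy as np
from filterpy.kalman import KalmanFilter

kf = KalmanFilter(dim_x=4, dim_z=7)
kf.F = np.eye(4)                 # state-transition matrix (4x4)
kf.H = np.random.randn(7, 4)     # observation matrix maps R^4 -> R^7
kf.predict()
kf.update(np.zeros(7))           # a 7-dimensional measurement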

Help needed: strange results with Google Colab

Hi!
In our research project we started to work with the KalmanNet. As a first step, we only tried the simple linear case using the 2x2 linear system (2x2_020_T100.pt). For this, we imported the following files to Google Colab and ran a learning and testing sequence:

  • KalmanNet_nn.py

  • Linear_KF.py

  • Linear_sysmdl.py

  • Extended_data.py

  • Pipeline_KF.py

  • KalmanFilter_test.py

  • main_linear.py

  • Plot.py

We found the following behaviour: using the same code as in the GitHub repository and the same files, in some cases the results look rather promising:

[two plots of promising results]

But in other cases – without any modifications applied – the results look like this:

[two plots of diverging results]

The strange thing is that the system, the data, the hyperparameters, and the length of the learning sequence are the same, yet the results seem to converge to different values: sometimes to -7 dB/-8 dB and sometimes to 0 dB.
Unfortunately we haven't come up with an explanation so far, so we would be much obliged if you could give us a helping hand with what might have gone wrong on our side.
Thanks in advance!
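
One check worth stating explicitly (our assumption, not a confirmed cause): the data generation and weight initialization are stochastic, so unseeded runs can land in different optima. A sketch of pinning every seed before a run:

import random
import numpy as np
import torch

seed = 0
random.seed(seed)
np.random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)   # no-op on CPU-only machines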

Possible connection to recent Neural Kalman Filtering paper

Following the comment in the arXiv paper that "KG can be related to the covariance of the estimate," it might be possible to connect this to the recent Neural Kalman Filtering work by @BerenMillidge (paper, dissertation). Given the paper's comment about its use for deep symbol detection, this connection to Active Inference/variational free energy might also support linkages to other recent work from Cohen et al., and ultimately a divergence-based uncertainty metric. Nice work!

The processing of the NCLT dataset is important.

Dear Authors:
The NCLT dataset used in KalmanNet is a processed one. The odometry data in the official NCLT release contains no velocity, while the data used here does. So how the velocity values were generated is a critical question for the experiment. Can you provide more information about the processing? It would be nice to see the code.

Sincerely,
Taylor Zhou
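
For what it's worth, a hypothetical sketch (an assumption, not the authors' pipeline) of one common way to obtain velocities: finite differencing of the ground-truth position channels against the timestamps.

import numpy as np

def finite_difference_velocity(t, p):
    """t: (N,) timestamps in seconds; p: (N, 2) planar positions in meters."""
    return np.gradient(p, t, axis=0)   # (N, 2) velocity estimates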

errors on running stock code (errors when using GPU and CPU)

Hi, your research is very interesting. I am trying to reproduce your results, but I am running into errors, specifically when running the UKF evaluation; the same happens when I run the PF test. The first error I encountered is this.

[screenshot of the first error]

When I tried to edit the files in UKF_test.py, adding .cpu(), I now get this error.

[screenshot of the second error]

I then tried to change the code to run torch only on the CPU, but it is still not working.

I am using Python 3.10 and CUDA 11.3 for PyTorch, with no pinned versions for matplotlib, seaborn, filterpy, and pyparticleest, which I installed as needed during import.
I hope I can get some help. Thank you!
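
A sketch of the usual fix (an assumption about the failure mode, not a verified diagnosis): filterpy and pyparticleest operate on NumPy arrays, so any CUDA tensor handed to them must first be detached and moved to host memory.

import torch

def to_numpy(x: torch.Tensor):
    return x.detach().cpu().numpy()   # works for both CPU and CUDA tensors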

SectionIV.B "here F takes the controllable canonical form." "H to take the inverse canonical form"

#################
## Design #10 ###
#################
F10 = torch.tensor([[1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0],
[0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0]])
H10 = torch.tensor([[1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0],
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]])
############
## 2 x 2 ###
############
m = 2
n = 2
F = F10[0:m, 0:m]
H = torch.eye(2)
m1_0 = torch.tensor([[0.0], [0.0]])
# m1x_0_design = torch.tensor([[10.0], [-10.0]])
m2_0 = 0 * 0 * torch.eye(m)

Dear authors,
Thank you for your excellent work. I have some questions, as follows:

  1. In your paper, Section IV.B, it is mentioned that "here F takes the controllable canonical form" and "we set H to take the inverse canonical form". But in the code above, F does not appear to be in controllable canonical form, and I would also like to know what "inverse canonical form" means.
  2. In Section IV.B 2), "Neural Model Selection", you compared KalmanNet with a Vanilla RNN and an MB RNN, but I couldn't find the related code. Do the Vanilla RNN and MB RNN use the same setting C1 as KalmanNet, i.e., architecture #1 with input features {F2, F4} and training algorithm V3?

Looking forward to your reply,
Thank you.
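
For comparison with point 1 above, a sketch of the textbook controllable canonical form for a 4-dimensional example (coefficients hypothetical):

import torch

a = torch.tensor([0.1, 0.2, 0.3, 0.4])  # hypothetical characteristic coefficients
F_ccf = torch.zeros(4, 4)
F_ccf[:-1, 1:] = torch.eye(3)           # super-diagonal identity shifts the state
F_ccf[-1, :] = -a                       # last row carries -a_0 ... -a_3

# F10 above (identity plus a first row of ones) does not have this structure,
# which is exactly what the question points out.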

code error

hello
Problem 1: When I use the project code you designed, there is an error when importing the parameters, model, filterpy.kalman, and pyparticleest packages, even though these files are in the project folder.
Problem 2: Can you elaborate more on how to use the entire project?

Given the wrong initial value, the KNet (architecture 1) does not converge

Q1: Do I have to give an exact initial value for "m1x_0"? When I try an initial value with some error, KNet does not converge.
Q2: My state vector is 9-dimensional (x: [9x1]) and the orders of magnitude differ greatly between dimensions; do I need a different data normalization method? The nn.functional.normalize function is used in your source program.
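
For reference, a sketch of the normalization in question: torch's nn.functional.normalize divides by the L2 norm along a dimension, which can indeed let a single large component dominate a 9-dimensional state.

import torch
import torch.nn.functional as F

x = torch.tensor([1.0e3, 1.0e-2, 5.0])
x_unit = F.normalize(x, p=2.0, dim=0)   # x / ||x||_2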

Some Small Errors i detected

Greetings dear authors,

Thank you very much for uploading your really interesting work!
While I was working with the code I noticed some small errors and would like to propose some corrections.

  1. In file Pipeline_EKF.py:

Line 10

from Plot import Plot_KF

should be changed to

from Plot import Plot_RTS or from Plot import Plot_extended

so that line 348 is functional after changing line 343 to

self.Plot = Plot_RTS(self.folderName, self.modelName) or self.Plot = Plot_extended(self.folderName, self.modelName).

  2. The same problem occurs in file main_linear_CA.py:

Line 17

from Plot import Plot_KF as Plot

should be changed to

from Plot import Plot_RTS as Plot or from Plot import Plot_extended

so that lines 182-184 are functional.

  3. In file KalmanNet_nn.py:

Lines 220 and 228

in_FC5 = fw_evol_diff

in_FC6 = fw_update_diff

should be changed as :

in_FC5 = fw_update_diff

in_FC6 = fw_evol_diff

so as to follow the illustration in Figure 4 of the KalmanNet paper (https://arxiv.org/pdf/2107.10043.pdf) on page 6. The forward update difference is used for the estimation of the covariance Q instead of the forward innovation difference feature, so the estimation of the covariance Σ should be changed accordingly.

Best regards,
Odysseas Karachalios

Requirements file

Hello,
Can you please add a requirements file so that one knows which Python libraries to install in order to run your code correctly?
Thanks

Dimension M has to be greater than dimension N?

I know this is kind of a dumb question, but I have noticed that when I run the KalmanNet code, I face many errors if dimension M is less than N, while if dimension M is greater than or equal to N there is no problem. The examples in the KalmanNet paper, and some of the examples on GitHub, also satisfy this condition.

Thanks in advance

Training doesn't seem to be working

Hi, your research is very interesting. I am trying to reproduce your results, but I'm having trouble when running "main_linear.py" (main branch). Training doesn't seem to be working; the training logs are as follows.

'''
KNet with full model info
0 MSE Training : tensor(8.5938) [dB] MSE Validation : tensor(8.7018) [dB]
Optimal idx: 0 Optimal : tensor(8.7018) [dB]
1 MSE Training : tensor(8.6698) [dB] MSE Validation : tensor(8.7014) [dB]
Optimal idx: 1 Optimal : tensor(8.7014) [dB]
2 MSE Training : tensor(8.8005) [dB] MSE Validation : tensor(8.7015) [dB]
diff MSE Training : tensor(0.1307) [dB] diff MSE Validation : tensor(0.0001) [dB]
Optimal idx: 1 Optimal : tensor(8.7014) [dB]
3 MSE Training : tensor(8.4263) [dB] MSE Validation : tensor(8.7015) [dB]
diff MSE Training : tensor(-0.3742) [dB] diff MSE Validation : tensor(-6.4850e-05) [dB]
Optimal idx: 1 Optimal : tensor(8.7014) [dB]
4 MSE Training : tensor(8.5615) [dB] MSE Validation : tensor(8.7018) [dB]
diff MSE Training : tensor(0.1352) [dB] diff MSE Validation : tensor(0.0004) [dB]
Optimal idx: 1 Optimal : tensor(8.7014) [dB]
5 MSE Training : tensor(8.7999) [dB] MSE Validation : tensor(8.7018) [dB]
diff MSE Training : tensor(0.2385) [dB] diff MSE Validation : tensor(1.5259e-05) [dB]
Optimal idx: 1 Optimal : tensor(8.7014) [dB]
6 MSE Training : tensor(8.9301) [dB] MSE Validation : tensor(8.7017) [dB]
diff MSE Training : tensor(0.1302) [dB] diff MSE Validation : tensor(-0.0001) [dB]
Optimal idx: 1 Optimal : tensor(8.7014) [dB]
7 MSE Training : tensor(8.7870) [dB] MSE Validation : tensor(8.7022) [dB]
diff MSE Training : tensor(-0.1432) [dB] diff MSE Validation : tensor(0.0005) [dB]
Optimal idx: 1 Optimal : tensor(8.7014) [dB]
8 MSE Training : tensor(8.3628) [dB] MSE Validation : tensor(8.7014) [dB]
diff MSE Training : tensor(-0.4241) [dB] diff MSE Validation : tensor(-0.0008) [dB]
Optimal idx: 1 Optimal : tensor(8.7014) [dB]
9 MSE Training : tensor(9.0843) [dB] MSE Validation : tensor(8.7023) [dB]
diff MSE Training : tensor(0.7215) [dB] diff MSE Validation : tensor(0.0008) [dB]
Optimal idx: 1 Optimal : tensor(8.7014) [dB]
10 MSE Training : tensor(8.5039) [dB] MSE Validation : tensor(8.7016) [dB]
diff MSE Training : tensor(-0.5804) [dB] diff MSE Validation : tensor(-0.0007) [dB]
Optimal idx: 1 Optimal : tensor(8.7014) [dB]
11 MSE Training : tensor(8.2991) [dB] MSE Validation : tensor(8.7016) [dB]
diff MSE Training : tensor(-0.2047) [dB] diff MSE Validation : tensor(3.2425e-05) [dB]
Optimal idx: 1 Optimal : tensor(8.7014) [dB]
12 MSE Training : tensor(9.0569) [dB] MSE Validation : tensor(8.7017) [dB]
diff MSE Training : tensor(0.7578) [dB] diff MSE Validation : tensor(0.0001) [dB]
Optimal idx: 1 Optimal : tensor(8.7014) [dB]
13 MSE Training : tensor(9.0993) [dB] MSE Validation : tensor(8.7015) [dB]
diff MSE Training : tensor(0.0424) [dB] diff MSE Validation : tensor(-0.0002) [dB]
Optimal idx: 1 Optimal : tensor(8.7014) [dB]
14 MSE Training : tensor(8.1942) [dB] MSE Validation : tensor(8.7014) [dB]
diff MSE Training : tensor(-0.9051) [dB] diff MSE Validation : tensor(-9.7275e-05) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
15 MSE Training : tensor(8.9801) [dB] MSE Validation : tensor(8.7015) [dB]
diff MSE Training : tensor(0.7859) [dB] diff MSE Validation : tensor(0.0001) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
16 MSE Training : tensor(8.9152) [dB] MSE Validation : tensor(8.7019) [dB]
diff MSE Training : tensor(-0.0649) [dB] diff MSE Validation : tensor(0.0004) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
17 MSE Training : tensor(8.9287) [dB] MSE Validation : tensor(8.7015) [dB]
diff MSE Training : tensor(0.0135) [dB] diff MSE Validation : tensor(-0.0004) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
18 MSE Training : tensor(8.5965) [dB] MSE Validation : tensor(8.7015) [dB]
diff MSE Training : tensor(-0.3322) [dB] diff MSE Validation : tensor(7.4387e-05) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
19 MSE Training : tensor(8.6763) [dB] MSE Validation : tensor(8.7016) [dB]
diff MSE Training : tensor(0.0798) [dB] diff MSE Validation : tensor(7.0572e-05) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
20 MSE Training : tensor(8.8560) [dB] MSE Validation : tensor(8.7021) [dB]
diff MSE Training : tensor(0.1796) [dB] diff MSE Validation : tensor(0.0005) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
21 MSE Training : tensor(8.5505) [dB] MSE Validation : tensor(8.7016) [dB]
diff MSE Training : tensor(-0.3055) [dB] diff MSE Validation : tensor(-0.0005) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
22 MSE Training : tensor(8.8826) [dB] MSE Validation : tensor(8.7015) [dB]
diff MSE Training : tensor(0.3321) [dB] diff MSE Validation : tensor(-0.0002) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
23 MSE Training : tensor(8.6941) [dB] MSE Validation : tensor(8.7019) [dB]
diff MSE Training : tensor(-0.1885) [dB] diff MSE Validation : tensor(0.0005) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
24 MSE Training : tensor(8.2583) [dB] MSE Validation : tensor(8.7015) [dB]
diff MSE Training : tensor(-0.4358) [dB] diff MSE Validation : tensor(-0.0004) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
25 MSE Training : tensor(8.5987) [dB] MSE Validation : tensor(8.7021) [dB]
diff MSE Training : tensor(0.3404) [dB] diff MSE Validation : tensor(0.0006) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
26 MSE Training : tensor(8.7962) [dB] MSE Validation : tensor(8.7016) [dB]
diff MSE Training : tensor(0.1975) [dB] diff MSE Validation : tensor(-0.0005) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
27 MSE Training : tensor(8.6065) [dB] MSE Validation : tensor(8.7017) [dB]
diff MSE Training : tensor(-0.1897) [dB] diff MSE Validation : tensor(9.8228e-05) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
28 MSE Training : tensor(8.8624) [dB] MSE Validation : tensor(8.7015) [dB]
diff MSE Training : tensor(0.2559) [dB] diff MSE Validation : tensor(-0.0002) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
29 MSE Training : tensor(8.7692) [dB] MSE Validation : tensor(8.7021) [dB]
diff MSE Training : tensor(-0.0932) [dB] diff MSE Validation : tensor(0.0006) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
30 MSE Training : tensor(8.6786) [dB] MSE Validation : tensor(8.7015) [dB]
diff MSE Training : tensor(-0.0906) [dB] diff MSE Validation : tensor(-0.0006) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
31 MSE Training : tensor(8.9726) [dB] MSE Validation : tensor(8.7017) [dB]
diff MSE Training : tensor(0.2940) [dB] diff MSE Validation : tensor(0.0002) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
32 MSE Training : tensor(8.4164) [dB] MSE Validation : tensor(8.7020) [dB]
diff MSE Training : tensor(-0.5562) [dB] diff MSE Validation : tensor(0.0003) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
33 MSE Training : tensor(8.7939) [dB] MSE Validation : tensor(8.7022) [dB]
diff MSE Training : tensor(0.3775) [dB] diff MSE Validation : tensor(0.0001) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
34 MSE Training : tensor(8.5230) [dB] MSE Validation : tensor(8.7021) [dB]
diff MSE Training : tensor(-0.2710) [dB] diff MSE Validation : tensor(-9.1553e-05) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
35 MSE Training : tensor(9.1630) [dB] MSE Validation : tensor(8.7014) [dB]
diff MSE Training : tensor(0.6400) [dB] diff MSE Validation : tensor(-0.0007) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
36 MSE Training : tensor(8.8023) [dB] MSE Validation : tensor(8.7021) [dB]
diff MSE Training : tensor(-0.3607) [dB] diff MSE Validation : tensor(0.0007) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
37 MSE Training : tensor(8.6611) [dB] MSE Validation : tensor(8.7016) [dB]
diff MSE Training : tensor(-0.1412) [dB] diff MSE Validation : tensor(-0.0005) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
38 MSE Training : tensor(8.3911) [dB] MSE Validation : tensor(8.7015) [dB]
diff MSE Training : tensor(-0.2700) [dB] diff MSE Validation : tensor(-7.1526e-05) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
39 MSE Training : tensor(9.0508) [dB] MSE Validation : tensor(8.7022) [dB]
diff MSE Training : tensor(0.6597) [dB] diff MSE Validation : tensor(0.0007) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
40 MSE Training : tensor(8.9917) [dB] MSE Validation : tensor(8.7017) [dB]
diff MSE Training : tensor(-0.0591) [dB] diff MSE Validation : tensor(-0.0005) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
41 MSE Training : tensor(9.0074) [dB] MSE Validation : tensor(8.7017) [dB]
diff MSE Training : tensor(0.0157) [dB] diff MSE Validation : tensor(-2.0981e-05) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
42 MSE Training : tensor(8.8018) [dB] MSE Validation : tensor(8.7015) [dB]
diff MSE Training : tensor(-0.2056) [dB] diff MSE Validation : tensor(-0.0002) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
43 MSE Training : tensor(8.7068) [dB] MSE Validation : tensor(8.7015) [dB]
diff MSE Training : tensor(-0.0950) [dB] diff MSE Validation : tensor(7.1526e-05) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
44 MSE Training : tensor(8.5393) [dB] MSE Validation : tensor(8.7019) [dB]
diff MSE Training : tensor(-0.1675) [dB] diff MSE Validation : tensor(0.0003) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
45 MSE Training : tensor(9.2225) [dB] MSE Validation : tensor(8.7018) [dB]
diff MSE Training : tensor(0.6832) [dB] diff MSE Validation : tensor(-8.7738e-05) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
46 MSE Training : tensor(8.6009) [dB] MSE Validation : tensor(8.7014) [dB]
diff MSE Training : tensor(-0.6216) [dB] diff MSE Validation : tensor(-0.0004) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
47 MSE Training : tensor(8.6100) [dB] MSE Validation : tensor(8.7022) [dB]
diff MSE Training : tensor(0.0091) [dB] diff MSE Validation : tensor(0.0008) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
48 MSE Training : tensor(8.3894) [dB] MSE Validation : tensor(8.7014) [dB]
diff MSE Training : tensor(-0.2207) [dB] diff MSE Validation : tensor(-0.0008) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
49 MSE Training : tensor(8.6173) [dB] MSE Validation : tensor(8.7014) [dB]
diff MSE Training : tensor(0.2280) [dB] diff MSE Validation : tensor(4.7684e-06) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
50 MSE Training : tensor(8.7883) [dB] MSE Validation : tensor(8.7017) [dB]
diff MSE Training : tensor(0.1710) [dB] diff MSE Validation : tensor(0.0002) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
51 MSE Training : tensor(9.0015) [dB] MSE Validation : tensor(8.7016) [dB]
diff MSE Training : tensor(0.2132) [dB] diff MSE Validation : tensor(-8.4877e-05) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
52 MSE Training : tensor(8.4897) [dB] MSE Validation : tensor(8.7015) [dB]
diff MSE Training : tensor(-0.5118) [dB] diff MSE Validation : tensor(-8.4877e-05) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
53 MSE Training : tensor(8.2140) [dB] MSE Validation : tensor(8.7014) [dB]
diff MSE Training : tensor(-0.2756) [dB] diff MSE Validation : tensor(-5.0545e-05) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
54 MSE Training : tensor(8.7295) [dB] MSE Validation : tensor(8.7022) [dB]
diff MSE Training : tensor(0.5155) [dB] diff MSE Validation : tensor(0.0007) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
55 MSE Training : tensor(9.0426) [dB] MSE Validation : tensor(8.7017) [dB]
diff MSE Training : tensor(0.3131) [dB] diff MSE Validation : tensor(-0.0005) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
56 MSE Training : tensor(8.8957) [dB] MSE Validation : tensor(8.7016) [dB]
diff MSE Training : tensor(-0.1469) [dB] diff MSE Validation : tensor(-0.0001) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
57 MSE Training : tensor(8.8150) [dB] MSE Validation : tensor(8.7017) [dB]
diff MSE Training : tensor(-0.0806) [dB] diff MSE Validation : tensor(0.0002) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
58 MSE Training : tensor(8.5726) [dB] MSE Validation : tensor(8.7019) [dB]
diff MSE Training : tensor(-0.2424) [dB] diff MSE Validation : tensor(0.0002) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
59 MSE Training : tensor(9.1209) [dB] MSE Validation : tensor(8.7017) [dB]
diff MSE Training : tensor(0.5483) [dB] diff MSE Validation : tensor(-0.0003) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
60 MSE Training : tensor(9.1288) [dB] MSE Validation : tensor(8.7022) [dB]
diff MSE Training : tensor(0.0079) [dB] diff MSE Validation : tensor(0.0005) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
61 MSE Training : tensor(8.8125) [dB] MSE Validation : tensor(8.7017) [dB]
diff MSE Training : tensor(-0.3162) [dB] diff MSE Validation : tensor(-0.0005) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
62 MSE Training : tensor(9.1597) [dB] MSE Validation : tensor(8.7018) [dB]
diff MSE Training : tensor(0.3472) [dB] diff MSE Validation : tensor(0.0001) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
63 MSE Training : tensor(8.8877) [dB] MSE Validation : tensor(8.7022) [dB]
diff MSE Training : tensor(-0.2721) [dB] diff MSE Validation : tensor(0.0004) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
64 MSE Training : tensor(8.8358) [dB] MSE Validation : tensor(8.7015) [dB]
diff MSE Training : tensor(-0.0519) [dB] diff MSE Validation : tensor(-0.0007) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
65 MSE Training : tensor(8.5624) [dB] MSE Validation : tensor(8.7015) [dB]
diff MSE Training : tensor(-0.2734) [dB] diff MSE Validation : tensor(-1.2398e-05) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
66 MSE Training : tensor(9.0305) [dB] MSE Validation : tensor(8.7016) [dB]
diff MSE Training : tensor(0.4681) [dB] diff MSE Validation : tensor(0.0001) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
67 MSE Training : tensor(9.3266) [dB] MSE Validation : tensor(8.7019) [dB]
diff MSE Training : tensor(0.2961) [dB] diff MSE Validation : tensor(0.0004) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
68 MSE Training : tensor(8.8236) [dB] MSE Validation : tensor(8.7020) [dB]
diff MSE Training : tensor(-0.5030) [dB] diff MSE Validation : tensor(0.0001) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
69 MSE Training : tensor(8.7079) [dB] MSE Validation : tensor(8.7021) [dB]
diff MSE Training : tensor(-0.1156) [dB] diff MSE Validation : tensor(0.0001) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
70 MSE Training : tensor(9.1105) [dB] MSE Validation : tensor(8.7015) [dB]
diff MSE Training : tensor(0.4026) [dB] diff MSE Validation : tensor(-0.0007) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
71 MSE Training : tensor(8.8416) [dB] MSE Validation : tensor(8.7016) [dB]
diff MSE Training : tensor(-0.2689) [dB] diff MSE Validation : tensor(0.0001) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
72 MSE Training : tensor(8.8848) [dB] MSE Validation : tensor(8.7019) [dB]
diff MSE Training : tensor(0.0432) [dB] diff MSE Validation : tensor(0.0003) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
73 MSE Training : tensor(8.1018) [dB] MSE Validation : tensor(8.7015) [dB]
diff MSE Training : tensor(-0.7830) [dB] diff MSE Validation : tensor(-0.0004) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
74 MSE Training : tensor(8.6564) [dB] MSE Validation : tensor(8.7014) [dB]
diff MSE Training : tensor(0.5546) [dB] diff MSE Validation : tensor(-3.3379e-05) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
75 MSE Training : tensor(8.6623) [dB] MSE Validation : tensor(8.7014) [dB]
diff MSE Training : tensor(0.0059) [dB] diff MSE Validation : tensor(-5.9128e-05) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
76 MSE Training : tensor(8.7248) [dB] MSE Validation : tensor(8.7018) [dB]
diff MSE Training : tensor(0.0625) [dB] diff MSE Validation : tensor(0.0005) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
77 MSE Training : tensor(8.6769) [dB] MSE Validation : tensor(8.7014) [dB]
diff MSE Training : tensor(-0.0479) [dB] diff MSE Validation : tensor(-0.0004) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
78 MSE Training : tensor(8.9235) [dB] MSE Validation : tensor(8.7018) [dB]
diff MSE Training : tensor(0.2466) [dB] diff MSE Validation : tensor(0.0004) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
79 MSE Training : tensor(8.3508) [dB] MSE Validation : tensor(8.7018) [dB]
diff MSE Training : tensor(-0.5728) [dB] diff MSE Validation : tensor(-7.2479e-05) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
80 MSE Training : tensor(8.8629) [dB] MSE Validation : tensor(8.7021) [dB]
diff MSE Training : tensor(0.5121) [dB] diff MSE Validation : tensor(0.0004) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
81 MSE Training : tensor(8.7090) [dB] MSE Validation : tensor(8.7015) [dB]
diff MSE Training : tensor(-0.1539) [dB] diff MSE Validation : tensor(-0.0006) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
82 MSE Training : tensor(8.8191) [dB] MSE Validation : tensor(8.7015) [dB]
diff MSE Training : tensor(0.1101) [dB] diff MSE Validation : tensor(2.5749e-05) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
83 MSE Training : tensor(8.8787) [dB] MSE Validation : tensor(8.7016) [dB]
diff MSE Training : tensor(0.0596) [dB] diff MSE Validation : tensor(4.1962e-05) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
84 MSE Training : tensor(8.9233) [dB] MSE Validation : tensor(8.7016) [dB]
diff MSE Training : tensor(0.0446) [dB] diff MSE Validation : tensor(9.5367e-07) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
85 MSE Training : tensor(9.3883) [dB] MSE Validation : tensor(8.7015) [dB]
diff MSE Training : tensor(0.4650) [dB] diff MSE Validation : tensor(-6.1035e-05) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
86 MSE Training : tensor(8.7402) [dB] MSE Validation : tensor(8.7015) [dB]
diff MSE Training : tensor(-0.6481) [dB] diff MSE Validation : tensor(-2.8610e-05) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
87 MSE Training : tensor(8.8068) [dB] MSE Validation : tensor(8.7014) [dB]
diff MSE Training : tensor(0.0666) [dB] diff MSE Validation : tensor(-9.7275e-05) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
88 MSE Training : tensor(8.2154) [dB] MSE Validation : tensor(8.7017) [dB]
diff MSE Training : tensor(-0.5914) [dB] diff MSE Validation : tensor(0.0004) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
89 MSE Training : tensor(8.5972) [dB] MSE Validation : tensor(8.7017) [dB]
diff MSE Training : tensor(0.3818) [dB] diff MSE Validation : tensor(-9.1553e-05) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
90 MSE Training : tensor(8.9435) [dB] MSE Validation : tensor(8.7016) [dB]
diff MSE Training : tensor(0.3463) [dB] diff MSE Validation : tensor(-4.4823e-05) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
91 MSE Training : tensor(8.9302) [dB] MSE Validation : tensor(8.7019) [dB]
diff MSE Training : tensor(-0.0134) [dB] diff MSE Validation : tensor(0.0003) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
92 MSE Training : tensor(8.3266) [dB] MSE Validation : tensor(8.7016) [dB]
diff MSE Training : tensor(-0.6036) [dB] diff MSE Validation : tensor(-0.0003) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
93 MSE Training : tensor(8.8314) [dB] MSE Validation : tensor(8.7017) [dB]
diff MSE Training : tensor(0.5048) [dB] diff MSE Validation : tensor(8.6784e-05) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
94 MSE Training : tensor(8.5485) [dB] MSE Validation : tensor(8.7022) [dB]
diff MSE Training : tensor(-0.2828) [dB] diff MSE Validation : tensor(0.0005) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
95 MSE Training : tensor(8.4559) [dB] MSE Validation : tensor(8.7016) [dB]
diff MSE Training : tensor(-0.0926) [dB] diff MSE Validation : tensor(-0.0005) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
96 MSE Training : tensor(8.8558) [dB] MSE Validation : tensor(8.7021) [dB]
diff MSE Training : tensor(0.3999) [dB] diff MSE Validation : tensor(0.0005) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
97 MSE Training : tensor(8.5810) [dB] MSE Validation : tensor(8.7016) [dB]
diff MSE Training : tensor(-0.2748) [dB] diff MSE Validation : tensor(-0.0005) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
98 MSE Training : tensor(8.6104) [dB] MSE Validation : tensor(8.7021) [dB]
diff MSE Training : tensor(0.0294) [dB] diff MSE Validation : tensor(0.0005) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
99 MSE Training : tensor(9.3529) [dB] MSE Validation : tensor(8.7020) [dB]
diff MSE Training : tensor(0.7425) [dB] diff MSE Validation : tensor(-0.0001) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
100 MSE Training : tensor(8.6106) [dB] MSE Validation : tensor(8.7017) [dB]
diff MSE Training : tensor(-0.7423) [dB] diff MSE Validation : tensor(-0.0003) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
101 MSE Training : tensor(8.9412) [dB] MSE Validation : tensor(8.7014) [dB]
diff MSE Training : tensor(0.3306) [dB] diff MSE Validation : tensor(-0.0003) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
102 MSE Training : tensor(8.5660) [dB] MSE Validation : tensor(8.7015) [dB]
diff MSE Training : tensor(-0.3752) [dB] diff MSE Validation : tensor(0.0002) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
103 MSE Training : tensor(8.6016) [dB] MSE Validation : tensor(8.7014) [dB]
diff MSE Training : tensor(0.0356) [dB] diff MSE Validation : tensor(-0.0001) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
104 MSE Training : tensor(8.3595) [dB] MSE Validation : tensor(8.7014) [dB]
diff MSE Training : tensor(-0.2421) [dB] diff MSE Validation : tensor(-3.2425e-05) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
105 MSE Training : tensor(9.0348) [dB] MSE Validation : tensor(8.7014) [dB]
diff MSE Training : tensor(0.6753) [dB] diff MSE Validation : tensor(2.4796e-05) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
106 MSE Training : tensor(8.8043) [dB] MSE Validation : tensor(8.7016) [dB]
diff MSE Training : tensor(-0.2305) [dB] diff MSE Validation : tensor(0.0002) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
107 MSE Training : tensor(8.5084) [dB] MSE Validation : tensor(8.7016) [dB]
diff MSE Training : tensor(-0.2959) [dB] diff MSE Validation : tensor(-1.7166e-05) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
108 MSE Training : tensor(8.3825) [dB] MSE Validation : tensor(8.7020) [dB]
diff MSE Training : tensor(-0.1259) [dB] diff MSE Validation : tensor(0.0004) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
109 MSE Training : tensor(9.1741) [dB] MSE Validation : tensor(8.7019) [dB]
diff MSE Training : tensor(0.7916) [dB] diff MSE Validation : tensor(-8.9645e-05) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
110 MSE Training : tensor(8.9037) [dB] MSE Validation : tensor(8.7022) [dB]
diff MSE Training : tensor(-0.2705) [dB] diff MSE Validation : tensor(0.0003) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
111 MSE Training : tensor(8.6086) [dB] MSE Validation : tensor(8.7015) [dB]
diff MSE Training : tensor(-0.2951) [dB] diff MSE Validation : tensor(-0.0007) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
112 MSE Training : tensor(8.8115) [dB] MSE Validation : tensor(8.7019) [dB]
diff MSE Training : tensor(0.2030) [dB] diff MSE Validation : tensor(0.0004) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
113 MSE Training : tensor(8.6630) [dB] MSE Validation : tensor(8.7017) [dB]
diff MSE Training : tensor(-0.1485) [dB] diff MSE Validation : tensor(-0.0002) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
114 MSE Training : tensor(8.8652) [dB] MSE Validation : tensor(8.7016) [dB]
diff MSE Training : tensor(0.2022) [dB] diff MSE Validation : tensor(-7.8201e-05) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
115 MSE Training : tensor(8.9704) [dB] MSE Validation : tensor(8.7022) [dB]
diff MSE Training : tensor(0.1052) [dB] diff MSE Validation : tensor(0.0006) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
116 MSE Training : tensor(9.0332) [dB] MSE Validation : tensor(8.7017) [dB]
diff MSE Training : tensor(0.0628) [dB] diff MSE Validation : tensor(-0.0005) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
117 MSE Training : tensor(9.1208) [dB] MSE Validation : tensor(8.7021) [dB]
diff MSE Training : tensor(0.0876) [dB] diff MSE Validation : tensor(0.0004) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
118 MSE Training : tensor(8.5910) [dB] MSE Validation : tensor(8.7022) [dB]
diff MSE Training : tensor(-0.5298) [dB] diff MSE Validation : tensor(0.0001) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
119 MSE Training : tensor(8.9616) [dB] MSE Validation : tensor(8.7017) [dB]
diff MSE Training : tensor(0.3706) [dB] diff MSE Validation : tensor(-0.0005) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
120 MSE Training : tensor(8.9941) [dB] MSE Validation : tensor(8.7015) [dB]
diff MSE Training : tensor(0.0325) [dB] diff MSE Validation : tensor(-0.0002) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
121 MSE Training : tensor(9.0194) [dB] MSE Validation : tensor(8.7020) [dB]
diff MSE Training : tensor(0.0254) [dB] diff MSE Validation : tensor(0.0005) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
122 MSE Training : tensor(9.3038) [dB] MSE Validation : tensor(8.7022) [dB]
diff MSE Training : tensor(0.2844) [dB] diff MSE Validation : tensor(0.0002) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
123 MSE Training : tensor(8.6813) [dB] MSE Validation : tensor(8.7014) [dB]
diff MSE Training : tensor(-0.6225) [dB] diff MSE Validation : tensor(-0.0007) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
124 MSE Training : tensor(8.6003) [dB] MSE Validation : tensor(8.7014) [dB]
diff MSE Training : tensor(-0.0810) [dB] diff MSE Validation : tensor(-4.7684e-06) [dB]
Optimal idx: 14 Optimal : tensor(8.7014) [dB]
125 MSE Training : tensor(8.6874) [dB] MSE Validation : tensor(8.7019) [dB]
diff MSE Training : tensor(0.0871) [dB] diff MSE Validation : tensor(0.0005) [dB]
'''

The only change I made is the following: KNet_Pipeline.model = torch.load(modelFolder+"model_KalmanNet.pt"), on line 107 of main_linear.py.

[screenshot of the modified code]

I hope I can get some help. Thank you!
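
A hedged sketch of one common pitfall with this loading pattern (an assumption, not a diagnosis of the log above): a checkpoint saved on a GPU machine needs map_location when loaded elsewhere, and the loaded object silently replaces whatever model the pipeline had built.

import torch

modelFolder = "KNet/"   # hypothetical path
model = torch.load(modelFolder + "model_KalmanNet.pt",
                   map_location=torch.device("cpu"))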

Ask for the processed NCLT dataset

Hi, thanks for your interesting work. I noticed that there are no data files in the NCLT experiment. Can you upload the processed dataset used in your paper? I want to reproduce the results in your paper.

Will you add a descriptive readme file?

I think some introduction to the content of the .py files, or to how to run the code, would make the project more user-friendly.

Anyway, thank you very much for your nice work!

A problem in lor_DT_NLobs of master

hi, thanks for your prominent work. I am just starting to learn about it, so my question may be quite simple.
When I downloaded the code, set use_cuda=false because of my computer's limitations, and then ran the script "python3 main_lor_DT_NLobs.py", I got an error and don't know exactly why.
It seems that h_nonlinear() has no 'jacobian' input parameter, and its output also does not match the expected two-item output. Could you please help me?
[screenshot of the error]

Number of Steps for large m and n dimensions

Dear authors,

Thank you for your work. I was wondering, in your experience, what would be the minimum number of epochs/training steps for a state vector with dimension n = 10 and a measurement vector with dimension m = 6?

Looking forward to your answer.

Memory Usage Issue

The generation of the dataset is set by you with m and n, but when I used KalmanNet on my own dataset (m=16384, n=20480), I encountered a GPU memory shortage problem, so I modified the net part, converting fc+gru+fc to other blocks, which can only be used for training data with t=1 and batch size 1. So I have a question: if t=1, then the dataset is static; can KalmanNet still use x_{t|t} for feedback? If not, what should I do?

Another issue is that for static data (t=1), the calculated x_{t|t} is not used to update the state, so each time it enters the net it is re-initialized using self.model.InitSequence(self.ssModel.m1x_0), which means the output of my net is always affected by the initialization of m1x_0. My current approach is:

if ti % 18 == 0:
    self.model.InitSequence(self.ssModel.m1x_0)
else:
    self.model.InitSequence(resx.detach())

where resx = self.model(y_training[:, t]).
However, the MSE does not decrease with this approach. Do you know the reason for this?

Run on Jupyter notebook

Hello,

I am trying to run "main_linear_canonical.py" in a Jupyter Notebook.
So I converted "main_linear_canonical.py" to "main_linear_canonical.ipynb" and ran it in Jupyter Notebook.
However, I got this error message below:

usage: KalmanNet [-h] [--N_E trainset-size] [--N_CV cvset-size] [--N_T testset-size] [--T length]
[--T_test test-length] [--randomLength rl] [--T_max maximum-length] [--T_min minimum-length]
[--randomInit_train ri_train] [--randomInit_cv ri_cv] [--randomInit_test ri_test]
[--variance variance] [--distribution distribution] [--use_cuda CUDA] [--n_steps N_steps]
[--n_batch N_B] [--lr LR] [--wd WD] [--CompositionLoss loss] [--alpha alpha]
[--in_mult_KNet in_mult_KNet] [--out_mult_KNet out_mult_KNet]
KalmanNet: error: unrecognized arguments: -f C:\Users\ko.444\AppData\Roaming\jupyter\runtime\kernel-f784040b-73c0-494f-936f-db7a81c2959c.json
An exception has occurred, use %tb to see the full traceback.

SystemExit: 2
C:\Users\ko.444\AppData\Local\anaconda3\envs\kalmannet\Lib\site-packages\IPython\core\interactiveshell.py:3561: UserWarning: To exit: use 'exit', 'quit', or Ctrl-D.
warn("To exit: use 'exit', 'quit', or Ctrl-D.", stacklevel=1)

I believe the error results from the line args = config.general_settings().

What happened here, and how can I resolve this issue?
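
A sketch of the usual notebook workaround (assuming config.general_settings() calls argparse's parse_args()): Jupyter passes its own "-f <kernel.json>" flag, which a strict parser rejects and exits on; parse_known_args() ignores unrecognized arguments instead.

import argparse

parser = argparse.ArgumentParser(prog="KalmanNet")
parser.add_argument("--use_cuda", action="store_true")   # hypothetical subset of the real flags
args, unknown = parser.parse_known_args()                # tolerates Jupyter's -f argument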

Thank you in advance.

Issue Report

I found some problems in the paper and the code.

Problem 1: The paper's equation (16) writes F_alpha = R * F_0, but the code in Simulations/Linear_canonical/parameters.py writes F_rotated = torch.mm(F, rotate_matrix), i.e., F * R, with the factors in the opposite order.

Problem 2: I ran the code assuming F_rotated = torch.mm(F, rotate_matrix) is right, with α = 10°, the 2x2 model, and 1/r² = 0; the results show 1.2492 dB and -2.2682 dB respectively. However, in Figure 6(a) the KF results show 8 dB and 11 dB respectively. So I suspect that legends 1 to 3 in Figure 6(a) should add "MSE+10", like in Figure 5.
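
To make the two orderings concrete, a sketch (mine, with a deliberately non-identity F_0 so the two products differ):

import math
import torch

alpha = math.radians(10)
R = torch.tensor([[math.cos(alpha), -math.sin(alpha)],
                  [math.sin(alpha),  math.cos(alpha)]])
F0 = torch.tensor([[1.0, 1.0],
                   [0.0, 1.0]])   # non-identity, so the two orders give different matrices
F_paper = torch.mm(R, F0)         # F_alpha = R * F_0, as written in equation (16)
F_code = torch.mm(F0, R)          # F_rotated = F * rotate_matrix, as in parameters.py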

Training based on custom dataset

I have two questions for training based on custom dataset.

  1. How should the missing parts of the observation data be dealt with?
  2. If a trajectory is divided because of missing observations, is it necessary to connect all the trajectory fragments together?

Run Training and Inference based on custom dataset

First of all, thank you very much for your nice work!

I am trying to build an application based on your algorithm. I want to run an EKF nonparametrically based on your hybrid system.
I understand the model-based case, but I haven't yet understood the dataset generation.

How can we run training and validation with a custom dataset which we have generated? Also, what is the .pt file format? Do you produce it in a specific way?
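
A minimal sketch under my assumptions (the exact container the repository expects may differ): the .pt files are ordinary torch.save serializations of input/target trajectory tensors shaped (N, dim, T).

import torch

N, m, n, T = 100, 2, 2, 50            # hypothetical sizes
train_target = torch.randn(N, m, T)    # state trajectories x_{1:T}
train_input = torch.randn(N, n, T)     # observation trajectories y_{1:T}

torch.save([train_input, train_target], "custom_data.pt")
data_in, data_target = torch.load("custom_data.pt")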

Input feature size meaning

Hi,

Can you please give details of the model's input and output sizes? When I debug, it shows that:

train_input.shape = size(1000, 2, 100)
train_target.shape = size(1000, 2, 100)
cv_input.shape = size(1000, 2, 100)
cv_target.shape = size(1000, 2, 100)
test_input.shape = size(1000, 2, 100)
test_target.shape = size(1000, 2, 100)
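
My reading of these shapes (an assumption based on the linear examples, not a documented fact): each tensor is (N, d, T) = (number of trajectories, vector dimension, sequence length), so inputs hold observation sequences y_{1:T} and targets the matching state sequences x_{1:T}.

import torch

train_input = torch.randn(1000, 2, 100)   # N = 1000 sequences, n = 2, T = 100
y_t = train_input[0, :, 5]                # observation of trajectory 0 at t = 5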

device problem

[screenshot of the error]
I encountered some difficulties when running main_linear.py. I tried moving some of the parameters (self.Q, self.R, and y) in Linear_KF.py to the CPU, and the code ran successfully. However, I'm not sure if this is the correct approach.
Here is the result:
[screenshot of the result]
Another issue is that when I train using CUDA on my 3060, it takes longer than training on my 5600G CPU. Why is this?
[screenshot of the training times]

Question on Updating self.h_Sigma in KalmanNet Code

Dear author,

I hope this message finds you well. I am writing to inquire about a specific implementation detail in the KGain_step method of the KalmanNet code, particularly concerning the update of self.h_Sigma.

In the KGain_step method, the hidden state self.h_Sigma is updated using the GRU layer self.GRU_Sigma. Here is the relevant code snippet:
out_Sigma, self.h_Sigma = self.GRU_Sigma(in_Sigma, self.h_Sigma)
This line correctly updates self.h_Sigma through the GRU layer. However, later in the code, self.h_Sigma is updated again with the output from a fully connected (FC) layer:
# Updating hidden state of the Sigma-GRU
self.h_Sigma = out_FC4
My understanding is that the hidden state should only be updated by the GRU layer to maintain the integrity of the sequence information. Updating it again using the output from the FC layer could potentially lead to inconsistencies or unintended behavior.

I would like to understand if there is a specific reason or purpose behind this additional update of self.h_Sigma with out_FC4. Is there a particular design consideration or benefit that this implementation aims to achieve?

Understanding the rationale behind this design choice would be highly beneficial for me. I appreciate your guidance on this matter and look forward to your insights.

Best regards.
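
A standalone sketch of the pattern described above (all sizes hypothetical, not the repository's actual dimensions): the GRU produces its own next hidden state, and the code then overwrites it with the output of a downstream fully connected layer.

import torch
import torch.nn as nn

gru = nn.GRU(input_size=8, hidden_size=16)
fc = nn.Linear(16, 16)

in_Sigma = torch.randn(1, 1, 8)    # (seq_len, batch, input_size)
h_Sigma = torch.zeros(1, 1, 16)    # (num_layers, batch, hidden_size)

out_Sigma, h_Sigma = gru(in_Sigma, h_Sigma)   # the standard GRU update
out_FC4 = fc(out_Sigma)
h_Sigma = out_FC4                              # the extra overwrite in question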
