vus's People

Contributors

bogireddytejareddy, boniolp, johnpaparrizos

vus's Issues

Recall, Precision, F1-Score are confusing

I tried to check whether I was using your algorithm correctly and whether the results made sense.
So I compared your recall, precision, and F1-score against the values I got from an outlier detection algorithm, but the numbers were very different. Then I realized that your metrics don't take the labels produced by the algorithm as a parameter, only the ground truth and the decision scores. After reading your code I found that you label a data point as an anomaly if its decision score deviates from the mean of the scores by more than three times the standard deviation. That was confusing, since I expected to use your metrics to evaluate how good my anomaly detection labeling was; the detector I was using has a different strategy for labeling a point as an outlier, so the values differed.
Would it be possible to optionally take the labels from the algorithm as a parameter, or at least to explain in the description that recall, precision, and F1 may differ because of your own thresholding logic?
The three-standard-deviations criterion is also mentioned in your paper, as I recall, but as a user (especially one who hasn't read the paper) I still found it a bit confusing.
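For reference, here is a minimal sketch of the thresholding rule described above, assuming vus flags points whose score exceeds the mean by three standard deviations; the function name is hypothetical and not part of the library:

```python
import numpy as np

def three_sigma_labels(score):
    # Hypothetical re-creation of the rule the issue describes (an assumption,
    # not copied from the vus source): anomaly if score > mean + 3 * std.
    score = np.asarray(score, dtype=float)
    return (score > score.mean() + 3 * score.std()).astype(int)
```

Comparing labels derived this way against a detector's own labels makes it clear why the two sets of precision/recall values can diverge.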

Some questions about datasets and preprocessing

Many thanks for your work and this repo; I learned a lot. I am wondering about the datasets and their preprocessing.

Take the MSL dataset, for example: there are many data files in the corresponding folder, but each file holds a single variable, i.e. the first column is the signal X and the second column is the label. As far as I know, however, the MSL dataset has 55 dimensions. So I would like to know whether your work targets univariate anomaly detection, and whether any further preprocessing is applied.
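For concreteness, a hedged sketch of reading one file with the two-column layout described above (the path, file name, and delimiter are assumptions for illustration, not taken from the repo):

```python
import numpy as np

# Hypothetical path: each file is assumed to hold one channel of the dataset.
data = np.loadtxt("MSL/channel_0.txt", delimiter=",")  # assumed comma-separated
X, label = data[:, 0], data[:, 1]  # first column: signal X, second: label
```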

find_length for window length only works for 1D data

It seems to me that the function "find_length" only works when the data is one-dimensional. If this is intentional, it would be nice to mention it in the documentation.

def find_length(data):
    # bails out immediately for multivariate input
    if len(data.shape) > 1:
        return 0
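
As a possible workaround (a sketch only; `find_length_per_dim` is a hypothetical helper, not part of vus), `find_length` could be applied column by column to multivariate data:

```python
import numpy as np

def find_length_per_dim(data):
    # Hypothetical wrapper: estimate a window length per column of a 2-D array.
    # Assumes find_length (from the vus utilities) is already in scope.
    data = np.asarray(data)
    if data.ndim == 1:
        return [find_length(data)]
    return [find_length(data[:, i]) for i in range(data.shape[1])]
```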

Pip install doesn't work

When I use "pip install vus" and try the sample code I get the following message:
"ImportError: cannot import name 'get_metrics' from 'vus.metrics' (.\envs\test-vus\lib\site-packages\vus\metrics.py)"
When I check the file mentioned above I see, that the source code looks different than in this repo. It only contains:

```python
from .utils.metrics import metricor
from .analysis.robustness_eval import generate_curve

def get_range_vus_roc(score, labels, slidingWindow):
    grader = metricor()
    R_AUC_ROC, R_AUC_PR, _, _, _ = grader.RangeAUC(labels=labels, score=score, window=slidingWindow, plot_ROC=True)
    _, _, _, _, _, _, VUS_ROC, VUS_PR = generate_curve(labels, score, 2*slidingWindow)
    metrics = {'R_AUC_ROC': R_AUC_ROC, 'R_AUC_PR': R_AUC_PR, 'VUS_ROC': VUS_ROC, 'VUS_PR': VUS_PR}

    return metrics
```

The function "get_metrics" doesn't exist. If I copy the files manually into the folder, the sample codes works.
I hope I did nothing wrong during installation.
Also I had compatability issues with my other code, it needs special versions of scikit-learn and also a special python version (like mentioned in this repo). But in this case it doesn't run together with other packages like pyod. I will try to use the evaluation in another program installed in a virtual environment.
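
In the meantime, a hedged workaround sketch based on the snippet above: import `get_range_vus_roc`, which the PyPI build does ship, instead of `get_metrics` (the data here is synthetic and purely illustrative):

```python
import numpy as np
from vus.metrics import get_range_vus_roc  # present in the installed package, per the snippet above

labels = np.zeros(1000, dtype=int)
labels[300:320] = 1                # one toy anomaly range
score = np.random.rand(1000)       # placeholder anomaly scores

results = get_range_vus_roc(score, labels, slidingWindow=100)
print(results)  # dict with R_AUC_ROC, R_AUC_PR, VUS_ROC, VUS_PR
```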
