udaylab / pami


PAMI is a Python library containing 100+ algorithms to discover useful patterns in various databases across multiple computing platforms. (Active)

Home Page: https://udaylab.github.io/PAMI/

License: GNU General Public License v3.0

Jupyter Notebook 63.49% Python 6.12% C++ 0.02% Shell 0.01% Makefile 0.01% Batchfile 0.01% HTML 30.27% JavaScript 0.06% CSS 0.03%
frequent-itemsets frequent-pattern-mining python sequence-mining periodic-patterns periodicity pattern-mining pattern-recognition frequent-subgraphs stream-mining

pami's Introduction






Introduction

PAttern MIning (PAMI) is a Python library containing several algorithms to discover user interest-based patterns in a wide spectrum of datasets across multiple computing platforms. Useful links for utilizing the services of this library are provided below:

  1. YouTube tutorials https://www.youtube.com/playlist?list=PLKP768gjVJmDer6MajaLbwtfC9ULVuaCZ

  2. Tutorials (Notebooks) https://github.com/UdayLab/PAMI/tree/main/notebooks

  3. User manual https://udaylab.github.io/PAMI/manuals/index.html

  4. Coders manual https://udaylab.github.io/PAMI/codersManual/index.html

  5. Code documentation https://pami-1.readthedocs.io

  6. Datasets https://u-aizu.ac.jp/~udayrage/datasets.html

  7. Discussions on PAMI usage https://github.com/UdayLab/PAMI/discussions

  8. Report issues https://github.com/UdayLab/PAMI/issues


Process Flow Chart

(Figure: PAMI's production process)


Recent Updates

  • Version 2023.07.07: New algorithms: cuApriori, cuAprioriBit, cuEclat, cuEclatBit, gPPMiner, cuGPFMiner, FPStream, HUPMS, and SHUPGrowth; new code to generate synthetic databases
  • Version 2023.06.20: Fuzzy Partial Periodic, Periodic Patterns in High Utility, Code Documentation, help() function Update
  • Version 2023.03.01: prefixSpan and SPADE

Total number of algorithms: 83


Features

  • ✅ Well-tested and production-ready
  • 🔋 Highly optimized to the best of our effort, lightweight, and energy-efficient
  • 👀 Proper code documentation
  • 🍼 Ample examples of using various algorithms in the ./notebooks folder
  • 🤖 Works with AI libraries such as TensorFlow, PyTorch, and sklearn
  • ⚡️ Supports CUDA and PySpark
  • 🖥️ Operating system independence
  • 🔬 Knowledge discovery in static data and streams
  • 🐎 Snappy
  • 🐻 Ease of use

Maintenance

Installation

  1. Installing the basic PAMI package (recommended)

    pip install pami

  2. Installing the PAMI package on a GPU machine that supports CUDA

    pip install 'pami[gpu]'

  3. Installing the PAMI package in a distributed network environment supporting Spark

    pip install 'pami[spark]'

  4. Installing the PAMI package for development purposes

    pip install 'pami[dev]'

  5. Installing the complete PAMI library

    pip install 'pami[all]'
    

Upgrading

    pip install --upgrade pami

Uninstallation

    pip uninstall pami 

Information

    pip show pami
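
To verify the installation, a quick check such as the one below can be run. This is a minimal sketch; it only assumes the package was installed under the PyPI name pami and that the import path matches the example in the next section.

# Quick installation check (sketch)
from importlib.metadata import version
print(version('pami'))                               # installed package version

from PAMI.frequentPattern.basic import FPGrowth      # import used in the example below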

Try your first PAMI program

$ python
# first import pami 
from PAMI.frequentPattern.basic import FPGrowth as alg
fileURL = "https://u-aizu.ac.jp/~udayrage/datasets/transactionalDatabases/Transactional_T10I4D100K.csv"
minSup=300
obj = alg.FPGrowth(iFile=fileURL, minSup=minSup, sep='\t')
obj.startMine()
obj.save('frequentPatternsAtMinSupCount300.txt')
frequentPatternsDF = obj.getPatternsAsDataFrame()
print('Total No of patterns: ' + str(len(frequentPatternsDF))) #print the total number of patterns
print('Runtime: ' + str(obj.getRuntime())) #measure the runtime
print('Memory (RSS): ' + str(obj.getMemoryRSS()))
print('Memory (USS): ' + str(obj.getMemoryUSS()))
Output:
Frequent patterns were generated successfully using frequentPatternGrowth algorithm
Total No of patterns: 4540
Runtime: 8.749667644500732
Memory (RSS): 522911744
Memory (USS): 475353088
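
Once the patterns are available as a pandas DataFrame, ordinary pandas operations can be applied to them. The snippet below is a sketch that assumes the DataFrame returned by getPatternsAsDataFrame() exposes 'Patterns' and 'Support' columns (the exact column names may differ between PAMI versions); it ranks the patterns by support and saves the top ten.

# Rank the mined patterns by support (assumes 'Patterns' and 'Support' columns)
topPatterns = frequentPatternsDF.sort_values(by='Support', ascending=False).head(10)
print(topPatterns)
topPatterns.to_csv('topFrequentPatterns.csv', index=False)   # keep the ranked patterns for later inspection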

Evaluation:

  1. We compared three Python libraries, PAMI, mlxtend, and efficient-apriori, for the Apriori algorithm (a sketch of such a comparison script is shown after the list of figures below).
  2. Transactional_T10I4D100K.csv, a transactional database downloaded from PAMI, was used as the input file for all three libraries.
  3. The minimum support value and the separator were the same for all libraries.
  • The performance of the Apriori algorithm is shown in the graphical results below:
  1. Comparing the patterns generated by the different Python libraries for the Apriori algorithm (see figure in the repository).
  2. Evaluating the runtime of the Apriori algorithm across the different Python libraries (see figure in the repository).
  3. Comparing the memory consumption of the Apriori algorithm across the different Python libraries (see figure in the repository).
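
A rough sketch of how such a comparison can be scripted is shown below. The PAMI call mirrors the FPGrowth example above with Apriori substituted (an assumption about the import path); the mlxtend and efficient-apriori calls follow their public APIs. The file name, the absolute support of 300 (0.003 relative support on 100,000 transactions), and the variable names are illustrative and are not taken from the original evaluation scripts.

import time
import pandas as pd

# Load the transactions from a local copy of the tab-separated file used above (assumption)
with open('Transactional_T10I4D100K.csv') as f:
    transactions = [line.rstrip('\n').split('\t') for line in f if line.strip()]

# --- PAMI (assumed import path, mirroring the FPGrowth example) ---
from PAMI.frequentPattern.basic import Apriori as pamiAlg
start = time.time()
pamiObj = pamiAlg.Apriori(iFile='Transactional_T10I4D100K.csv', minSup=300, sep='\t')
pamiObj.startMine()
print('PAMI patterns:', len(pamiObj.getPatterns()), 'runtime:', time.time() - start)

# --- efficient-apriori ---
from efficient_apriori import apriori as effApriori
start = time.time()
itemsets, _ = effApriori(transactions, min_support=300 / len(transactions), min_confidence=1.0)
print('efficient-apriori patterns:', sum(len(v) for v in itemsets.values()), 'runtime:', time.time() - start)

# --- mlxtend ---
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori as mlxApriori
te = TransactionEncoder()
onehot = pd.DataFrame(te.fit(transactions).transform(transactions), columns=te.columns_)
start = time.time()
mlxPatterns = mlxApriori(onehot, min_support=300 / len(transactions), use_colnames=True)
print('mlxtend patterns:', len(mlxPatterns), 'runtime:', time.time() - start)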

For more information, we have uploaded the evaluation file in two formats:


Reading Material

For more examples, refer to the YouTube tutorial playlist: https://www.youtube.com/playlist?list=PLKP768gjVJmDer6MajaLbwtfC9ULVuaCZ


License

PAMI is distributed under the GNU General Public License v3.0.


Documentation

The official documentation is hosted at https://pami-1.readthedocs.io.


Background

The idea and motivation to develop PAMI came from the Kitsuregawa Lab at the University of Tokyo. Work on PAMI started at the University of Aizu in 2020, and it has been under active development since then.


Getting Help

For any queries, the best place to go is GitHub Issues: https://github.com/UdayLab/PAMI/issues.


Discussion and Development

Most development-related discussions take place in our GitHub repository and within the university lab. We encourage our team members and contributors to use these channels for a wide range of discussions, including bug reports, feature requests, design decisions, and implementation details.


Contribution to PAMI

We invite and encourage all community members to contribute, report bugs, fix bugs, enhance documentation, propose improvements, and share their creative ideas.


Tutorials

0. Association Rule Mining

Basic
Confidence Open In Colab
Lift Open In Colab
Leverage Open In Colab
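
The three measures listed above can also be computed directly from pattern supports. The sketch below does not use PAMI's association-rule classes (their exact APIs are documented in the linked notebooks); it only assumes a plain dictionary mapping itemsets to relative supports.

# Illustrative computation of confidence, lift, and leverage for a rule X -> Y
supports = {
    frozenset({'a'}): 0.40,
    frozenset({'b'}): 0.30,
    frozenset({'a', 'b'}): 0.20,
}
X, Y = frozenset({'a'}), frozenset({'b'})
supXY = supports[X | Y]

confidence = supXY / supports[X]                # P(Y | X)
lift = supXY / (supports[X] * supports[Y])      # ratio to what independence would predict
leverage = supXY - supports[X] * supports[Y]    # difference from what independence would predict

print(f'confidence={confidence:.2f}, lift={lift:.2f}, leverage={leverage:.2f}')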

1. Pattern mining in binary transactional databases

1.1. Frequent pattern mining: Sample

Basic: Apriori (Colab), FP-growth (Colab), ECLAT (Colab), ECLAT-bitSet (Colab), ECLAT-diffset (Colab)
Closed: CHARM (Colab)
Maximal: maxFP-growth (Colab)
Top-k: FAE (Colab)
CUDA: cudaAprioriGCT, cudaAprioriTID, cudaEclatGCT
PySpark: parallelApriori (Colab), parallelFPGrowth (Colab), parallelECLAT (Colab)

1.2. Relative frequent pattern mining: Sample

Basic
RSFP-growth Open In Colab

1.3. Frequent pattern with multiple minimum support: Sample

Basic
CFPGrowth Open In Colab
CFPGrowth++ Open In Colab

1.4. Correlated pattern mining: Sample

Basic
CoMine Open In Colab
CoMine++ Open In Colab

1.5. Fault-tolerant frequent pattern mining (under development)

Basic
FTApriori Open In Colab
FTFPGrowth (under development) Open In Colab

1.6. Coverage pattern mining (under development)

Basic
CMine Open In Colab
CMine++ Open In Colab

2. Pattern mining in binary temporal databases

2.1. Periodic-frequent pattern mining: Sample

Basic: PFP-growth (Colab), PFP-growth++ (Colab), PS-growth (Colab), PFP-ECLAT (Colab), PFPM-Compliments (Colab)
Closed: CPFP (Colab)
Maximal: maxPF-growth (Colab)
Top-K: kPFPMiner (Colab), Topk-PFP (Colab)

2.2. Local periodic pattern mining: Sample

Basic
LPPGrowth (under development) Open In Colab
LPPMBreadth (under development) Open In Colab
LPPMDepth (under development) Open In Colab

2.3. Partial periodic-frequent pattern mining: Sample

Basic
GPF-growth Open In Colab
PPF-DFS Open In Colab
GPPF-DFS Open In Colab

2.4. Partial periodic pattern mining: Sample

Basic: 3P-growth (Colab), 3P-ECLAT (Colab), G3P-Growth (Colab)
Closed: 3P-close (Colab)
Maximal: max3P-growth (Colab)
TopK: topK-3P growth (Colab)
CUDA: cuGPPMiner (under development, Colab), gPPMiner (under development, Colab)

2.5. Periodic correlated pattern mining: Sample

Basic
EPCP-growth Open In Colab

2.6. Stable periodic pattern mining: Sample

Basic: SPP-growth (Colab), SPP-ECLAT (Colab)
TopK: TSPIN (Colab)

2.7. Recurring pattern mining: Sample

Basic
RPgrowth Open In Colab

3. Mining patterns from binary Geo-referenced (or spatiotemporal) databases

3.1. Geo-referenced frequent pattern mining: Sample

Basic
spatialECLAT Open In Colab
FSP-growth Open In Colab

3.2. Geo-referenced periodic frequent pattern mining: Sample

Basic
GPFPMiner Open In Colab
PFS-ECLAT Open In Colab
ST-ECLAT Open In Colab

3.3. Geo-referenced partial periodic pattern mining:Sample

Basic
STECLAT Open In Colab

4. Mining patterns from Utility (or non-binary) databases

4.1. High utility pattern mining: Sample

Basic
EFIM Open In Colab
HMiner Open In Colab
UPGrowth Open In Colab

4.2. High utility frequent pattern mining: Sample

Basic
HUFIM Open In Colab

4.3. High utility geo-referenced frequent pattern mining: Sample

Basic
SHUFIM Open In Colab

4.4. High utility spatial pattern mining: Sample

Basic: HDSHIM (Colab), SHUIM (Colab)
Topk: TKSHUIM (Colab)

4.5. Relative High utility pattern mining: Sample

Basic
RHUIM Open In Colab

4.6. Weighted frequent pattern mining: Sample

Basic
WFIM Open In Colab

4.7. Weighted frequent regular pattern mining: Sample

Basic
WFRIMiner Open In Colab

4.8. Weighted frequent neighbourhood pattern mining: Sample

Basic
SSWFPGrowth

5. Mining patterns from fuzzy transactional/temporal/geo-referenced databases

5.1. Fuzzy Frequent pattern mining: Sample

Basic
FFI-Miner Open In Colab

5.2. Fuzzy correlated pattern mining: Sample

Basic
FCP-growth Open In Colab

5.3. Fuzzy geo-referenced frequent pattern mining: Sample

Basic
FFSP-Miner Open In Colab

5.4. Fuzzy periodic frequent pattern mining: Sample

Basic
FPFP-Miner Open In Colab

5.5. Fuzzy geo-referenced periodic frequent pattern mining: Sample

Basic
FGPFP-Miner (under development) Open In Colab

6. Mining patterns from uncertain transactional/temporal/geo-referenced databases

6.1. Uncertain frequent pattern mining: Sample

Basic: PUF (Colab), TubeP (Colab), TubeS (Colab), UVEclat
Top-k: TUFP

6.2. Uncertain periodic frequent pattern mining: Sample

Basic
UPFP-growth Open In Colab
UPFP-growth++ Open In Colab

6.3. Uncertain Weighted frequent pattern mining: Sample

Basic
WUFIM Open In Colab

7. Mining patterns from sequence databases

7.1. Sequence frequent pattern mining: Sample

Basic
SPADE Open In Colab
PrefixSpan Open In Colab

7.2. Geo-referenced Frequent Sequence Pattern mining

Basic
GFSP-Miner (under development) Open In Colab

8. Mining patterns from multiple timeseries databases

8.1. Partial periodic pattern mining (under development)

Basic
PP-Growth (under development) Open In Colab

9. Mining interesting patterns from Streams

9.1. Frequent pattern mining

Basic
to be written

9.2. High utility pattern mining

Basic
HUPMS

10. Mining patterns from contiguous character sequences (E.g., DNA, Genome, and Game sequences)

10.1. Contiguous Frequent Patterns

Basic
PositionMining Open In Colab

11. Mining patterns from Graphs

11.1. Frequent sub-graph mining

Basic: Gspan (Colab)
Topk: TKG (Colab)

12. Additional Features

12.1. Creation of synthetic databases

Database type
Transactional database Open In Colab
Temporal database Open In Colab
Utility database (coming soon)

12.2. Converting a dataframe into a specific database type

Approaches
Dense dataframe to databases (coming soon)
Sparse dataframe to databases (coming soon)

12.3. Gathering the statistical details of a database

Approaches
Transactional database (coming soon)
Temporal database (coming soon)
Utility database (coming soon)

12.4. Generating Latex code for the experimental results

Approaches
Latex code (coming soon)

Real World Case Studies

  1. Air pollution analytics Open In Colab


pami's People

Contributors

avvari1830s, charan-teja2003, choubeyy, dependabot[bot], harsha0632, kundai-kwangwari, lasyapalla, likhitha-palla, nakamura204, pallamadhavi, pradeepppc, raashika214, raviua138, saichitrab, saideepchennupati, sandeep0509, shiridikumar, suzuki-zudai, tarun-sreepada, tejcodes, udayrage, vanithakattumuri, yerragollahareeshkumar, yukimaru5310


pami's Issues

Bug in createTemporal()

When we convert a data frame into a temporal database, the first column, i.e., the timestamp in the constructed database, contains 0.

However, the timestamp of the first transaction must always be greater than or equal to 1 (it can never be zero).

So we have to change the code so that the timestamps in the temporal database start from 1 (and not from 0).

Check Step 6 in https://github.com/UdayLab/PAMI/blob/main/notebooks/periodicFrequentPatternMiningPollutionDemo.ipynb

           !head -5 temporalDatabasePM25HeavyPollution.csv
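
Until the generator is fixed, one possible workaround is to shift every timestamp in the produced file by one. The sketch below is only an illustration: it assumes the first field of each line is the timestamp, reuses the file name from the notebook referenced above, and uses a tab separator (adjust sep to match the actual database).

# Workaround sketch: shift 0-based timestamps so that they start from 1
sep = '\t'                                            # assumption: adjust to the database's separator
inFile = 'temporalDatabasePM25HeavyPollution.csv'
outFile = 'temporalDatabaseShifted.csv'

with open(inFile) as src, open(outFile, 'w') as dst:
    for line in src:
        fields = line.rstrip('\n').split(sep)
        if fields and fields[0].isdigit():
            fields[0] = str(int(fields[0]) + 1)       # 0 -> 1, 1 -> 2, ...
        dst.write(sep.join(fields) + '\n')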

Bug in printing temporalDatabaseStats.

The program below prints an item's minimum, average, and maximum periodicity in the database as 1, 1, and 1, respectively.

URL of the notebook: https://colab.research.google.com/github/UdayLab/PAMI/blob/main/notebooks/parallelFPGrowth.ipynb


#import the class file
import PAMI.extras.dbStats.temporalDatabaseStats as stats

#specify the file name
inputFile = 'Temporal_T10I4D100K.csv'

#initialize the class
obj=stats.temporalDatabaseStats(inputFile,sep='\t')

#execute the class
obj.run()

#Printing each of the database statistics
print(f'Database size : {obj.getDatabaseSize()}')
print(f'Total number of items : {obj.getTotalNumberOfItems()}')
print(f'Database sparsity : {obj.getSparsity()}')
print(f'Minimum Transaction Size : {obj.getMinimumTransactionLength()}')
print(f'Average Transaction Size : {obj.getAverageTransactionLength()}')
print(f'Maximum Transaction Size : {obj.getMaximumTransactionLength()}')
print(f'Standard Deviation Transaction Size : {obj.getStandardDeviationTransactionLength()}')
print(f'Variance in Transaction Sizes : {obj.getVarianceTransactionLength()}')
print(f'Minimum period : {obj.getMinimumPeriod()}')
print(f'Average period : {obj.getAveragePeriod()}')
print(f'Maximum period : {obj.getMaximumPeriod()}')

itemFrequencies = obj.getSortedListOfItemFrequencies()
transactionLength = obj.getTransanctionalLengthDistribution()
numberOfTransactionPerTimeStamp = obj.getNumberOfTransactionsPerTimestamp()
obj.save(itemFrequencies,'itemFrequency.csv')
obj.save(transactionLength, 'transactionSize.csv')
obj.save(numberOfTransactionPerTimeStamp, 'numberOfTransaction.csv')

Bug in parallelECLAT


TypeError Traceback (most recent call last)
Cell In[8], line 1
----> 1 obj = alg.parallelECLAT(iFile=inputFile, minSup=minimumSupportCount,numWorkers=mumberWorkersCount, sep=seperator) #initialize
2 obj.startMine() #Start the mining process

TypeError: Can't instantiate abstract class parallelECLAT with abstract methods printResults, save

Association Rules as method instead of subclass

Hey there, nice stuff so far.
I am a bit confused as to why there is no (convenient/clear) option to acquire the association rules and rank them according to lift after running a basic frequent pattern mining algorithm. Instead, it is hidden inside a separate class that specifically creates association rules, rather than being an extension of any algorithm run. Could you consider adapting this to become a general method of the basic pattern miners?

parallelFPGrowth seems to ignore sep

Greetings,

I was trying to use parallelFPGrowth on a space-separated database without any success. Then I noticed that the separator is hard-coded to a tab:

rdd = sc.textFile(self._iFile, self._numPartitions)\
    .map(lambda x: x.rstrip().split('\t'))\
    .persist()

I believe the correct implementation is:

rdd = sc.textFile(self._iFile, self._numPartitions)\
    .map(lambda x: x.rstrip().split(self._sep))\
    .persist()

PAMI/extras/DF2DB/denseDF2DB.py

Dear Sir,
I am encountering an indentation error when importing this code. Specifically, there is an indentation issue in the block of code for createTransactional, after the else statement. I kindly request your assistance in resolving this matter promptly.
Thank you for your attention to this matter.
Sincerely,
Ashutosh Kumar

def createTransactional(self, outputFile):
    """
    :Description: Create transactional database

    :param outputFile: str
        Write transactional database into outputFile
    """

    self.outputFile = outputFile
    with open(outputFile, 'w') as f:
        if self.condition not in condition_operator:
            print('Condition error')
        else:
            for tid in self.tids:
                transaction = [item for item in self.items if condition_operator[self.condition](self.inputDF.at[tid, item], self.thresholdValue)]
                if len(transaction) > 1:
                    f.write(f'{transaction[0]}')
                    for item in transaction[1:]:
                        f.write(f'\t{item}')
                elif len(transaction) == 1:
                    f.write(f'{transaction[0]}')
                else:
                    continue
                f.write('\n')

Questions on how to use it

Hello, I am a researcher who recently encountered a problem that requires a sequence pattern mining algorithm, so I found this package, which is perfect. However, I still have some issues using it because there is too little information and documentation on this project; I don't know how to do the visualization or how to switch algorithms. It would be great if there were more manuals, tutorials, etc.

Need to combine the algorithms in frequentSpatialPattern and geoReferencedFrequentPattern

In frequentSpatialPattern sub-package, we have basic folder and algorithms, e.g., FSP-growth.

In geoReferencedFrequentPattern sub-package, we have one algorithm GFP-growth.

  1. We need to check whether FSP-growth and GFP-growth are different algorithms or the same algorithm.

  2. We have to remove frequentSpatialPattern sub-package and move the algorithms to geoReferencedFrequentPattern

Bug in RSFP-growth (PAMI.relativeFrequentPattern.basic)


TypeError Traceback (most recent call last)
Cell In[4], line 1
----> 1 obj = alg.RSFPGrowth(iFile=inputFile, minSup=minimumSupportCount, minRatio=minRatioEx,sep=seperator) #initialize
2 obj.startMine() #Start the mining process

TypeError: __init__() got an unexpected keyword argument 'minSup'

Bug in maxFPgrowth algorithm


TypeError Traceback (most recent call last)
Cell In[8], line 2
1 obj = alg.MaxFPGrowth(iFile=inputFile, minSup=minimumSupportCount, sep=seperator) #initialize
----> 2 obj.startMine() #Start the mining process

File ~/Library/CloudStorage/Dropbox/Github/PAMI_new/PAMI/frequentPattern/maximal/MaxFPGrowth.py:661, in MaxFPGrowth.startMine(self)
659 self._finalPatterns = {}
660 self._maximalTree = _MPTree()
--> 661 Tree = self._buildTree(updatedTransactions, info, self._maximalTree)
662 Tree.generatePatterns([], patterns)
663 for x, y in patterns.items():

TypeError: _buildTree() takes 2 positional arguments but 3 were given

Unable to run the fuzzy periodic frequent pattern (FPFP) algorithm

Hi, thank you for developing such a wonderful open-source library for pattern mining.
I am using the FPFP algorithm and am facing some problems:

  • The data format from the docs (https://udayrage.github.io/PAMI/fuzzyPeriodicFrequentPatternMining.html) does not work.
    In particular, each row (transaction) on the website has only one colon (:) separating the item from its fuzzy value. However, with this format the algorithm returns an error (which suggests that an additional colon (:) is needed).
    ....
  • I have also read your paper (*) and fuzzified the values from a transactional database as described, but this does not seem to match your implemented algorithm (as far as I can tell from inspecting the code).
  • I have also visited your website to look for an example of a fuzzy database (https://u-aizu.ac.jp/~udayrage/datasets.html). However, nothing helped.

Can you please provide the correct format of the data for this FPFP algorithm, as well as an explanation of how to create that format, with a simple example?

Thanks in advance

(*) Kiran, R. Uday, et al. "Discovering fuzzy periodic-frequent patterns in quantitative temporal databases." 2020 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE). IEEE, 2020.

Bug in generating statistics of the temporal database

from PAMI.extras.dbStats import temporalDatabaseStats as tempDS
obj = tempDS.temporalDatabaseStats('temporalDatabasePM25HeavyPollution.csv', sep=',')
obj.run()
obj.printStats()
obj.plotGraphs()   # <--- error


TypeError Traceback (most recent call last)
in <cell line: 5>()
3 obj.run()
4 obj.printStats()
----> 5 obj.plotGraphs()

1 frames
/usr/local/lib/python3.10/dist-packages/PAMI/extras/graph/plotLineGraphFromDictionary.py in __init__(self, data, end, start, title, xlabel, ylabel)
31 """
32 end = int(len(data) * end / 100)
---> 33 start = int(len(data) * start / 100)
34 x = tuple(data.keys())[start:end]
35 y = tuple(data.values())[start:end]

TypeError: unsupported operand type(s) for /: 'str' and 'int'
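
Until plotGraphs() is fixed, the item-frequency distribution can be plotted directly with matplotlib. This is a minimal sketch; it assumes getSortedListOfItemFrequencies() returns a dictionary mapping items to frequencies, as the .keys()/.values() calls inside plotLineGraphFromDictionary suggest.

import matplotlib.pyplot as plt

itemFrequencies = obj.getSortedListOfItemFrequencies()   # assumption: {item: frequency}
frequencies = list(itemFrequencies.values())

plt.plot(range(len(frequencies)), frequencies)
plt.xlabel('item rank')
plt.ylabel('frequency')
plt.title('Item frequency distribution')
plt.show()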

Unmentioned Constraints for denseDF2DB Class

In this line, it is expected to have a column named "tid." However, the documentation does not mention anything about it, does it? The documentation states: inputDataFrame - the dataframe that needs to be converted into a database.

https://github.com/udayRage/PAMI/blob/681a7e66f1ce14a50b40278935d91e87aba676d2/PAMI/extras/DF2DB/denseDF2DB.py#L39

Furthermore, in the following line, the items are taken from the first column. Is this because it assumes that column index 0 is the timestamp? If I manually remove the timestamp in the dataframe, I will be missing one column.

https://github.com/udayRage/PAMI/blob/681a7e66f1ce14a50b40278935d91e87aba676d2/PAMI/extras/DF2DB/denseDF2DB.py#L40
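
Based on this issue, the class appears to expect the first column to be the transaction identifier (named "tid"), with the remaining columns holding the item values that are compared against the threshold. A hypothetical example of such a dataframe, with invented item names and values, just to illustrate the expected layout:

import pandas as pd

# Hypothetical layout inferred from this issue: first column 'tid', remaining columns are items
df = pd.DataFrame({
    'tid':   [1, 2, 3],
    'bread': [2, 0, 1],
    'milk':  [0, 3, 1],
    'jam':   [1, 1, 0],
})
# The conversion class then keeps, per row, the items whose value satisfies the
# (condition, thresholdValue) pair, e.g. '>=' and 1.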

Bug in PAMI/fuzzyCorrelatedPattern/basic/FCPGrowth.py --> list index out of range

[/usr/local/lib/python3.10/dist-packages/PAMI/fuzzyCorrelatedPattern/basic/FCPGrowth.py] in _creatingItemSets(self)
420 parts = line.split(":")
421 items = parts[0].split()
--> 422 quantities = parts[2].split()
423 self._transactions.append([x for x in items])
424 self._fuzzyValues.append([x for x in quantities])

IndexError: list index out of range
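
From the parsing code in this traceback (parts[0] holds the items and parts[2] the fuzzy values, so each line is split on ':' into three fields), the expected input format appears to use two colons per line. The example below is hypothetical and only illustrates the split; the meaning of the middle field is not documented here.

# Hypothetical line format inferred from the traceback: items : <middle field> : fuzzy values
line = "a b c : x : 0.2 0.5 0.9"
parts = line.split(":")
items = parts[0].split()          # ['a', 'b', 'c']
quantities = parts[2].split()     # ['0.2', '0.5', '0.9']
print(items, quantities)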

Bug in PAMI.extras.graph.visualizePatterns.py

from PAMI.extras.graph import visualizePatterns as fig

obj = fig.visualizePatterns('soramame_frequentPatterns.txt',10)
obj.visualize()


ValueError Traceback (most recent call last)
in <cell line: 4>()
2
3 obj = fig.visualizePatterns('soramame_frequentPatterns.txt',10)
----> 4 obj.visualize()

/usr/local/lib/python3.10/dist-packages/PAMI/extras/graph/visualizePatterns.py in visualize(self)
62 temp = points[i].split()
63 if i % 2 == 0:
---> 64 lat.append(float(temp[0]))
65 name.append(freq)
66 color.append("#" + RHex + GHex + BHex)

ValueError: could not convert string to float: 'oint(130.7998865'

Bug in parallelApriori


TypeError Traceback (most recent call last)
Cell In[4], line 1
----> 1 obj = alg.parallelApriori(iFile=inputFile, minSup=minimumSupportCount,numWorkers=mumberWorkersCount, sep=seperator) #initialize
2 obj.startMine() #Start the mining process

TypeError: Can't instantiate abstract class parallelApriori with abstract methods printResults, save

Coverage patterns, need to check the following code

if __name__ == "__main__":
    _ap = str()
    if len(_ab._sys.argv) == 7 or len(_ab._sys.argv) == 6:
        if len(_ab._sys.argv) == 7:
            _ap = CMine(_ab._sys.argv[1], _ab._sys.argv[3], _ab._sys.argv[4], _ab._sys.argv[5], _ab._sys.argv[6])
        if len(_ab._sys.argv) == 6:
            _ap = CMine(_ab._sys.argv[1], _ab._sys.argv[3], _ab._sys.argv[4], _ab._sys.argv[5])
        _ap.startMine()
        print("Total number of coverage Patterns:", len(_ap.getPatterns()))
        _ap.save(_ab._sys.argv[2])
        print("Total Memory in USS:", _ap.getMemoryUSS())
        print("Total Memory in RSS", _ap.getMemoryRSS())
        print("Total ExecutionTime in ms:", _ap.getRuntime())
    else:
        print("Error! The number of input parameters do not match the total number of parameters provided")

Error on converting a sparse dataframe into a transactional database

When trying to convert a sparse dataframe into a transactional database through the code provided at the link, the following error appears: "AttributeError: module 'PAMI.extras.DF2DB.sparseDF2DB' has no attribute 'sparse2DB'."

Firstly, I simply changed the word sparse2DB to sparseDF2DB, but then a different error appeared: "ValueError: DataFrame constructor not properly called!"
My dataframe was already loaded in the Jupyter notebook when I passed it to the function. I also tried saving it, exporting it as an Excel file, and importing it directly in the function; however, nothing worked and the error persisted.

Can you please help?

Thanks in advance.
