
maxjoshua

Feature selection for hard voting classifier and NN sparse weight initialization.

Preface

I am naming this software package in memory of my late nephew Max Joshua Hamster (* 2005 to † June 18, 2022).

Usage

Forward Selection for Hard Voting Classifier

Load toy data set and convert features to binary.

from sklearn.datasets import load_breast_cancer
from sklearn.preprocessing import scale

data = load_breast_cancer()
X = scale(data.data, axis=0) > 0  # convert to binary features
y = data.target

Select binary features. Each row in the results list contains the n_select column indices of X, flags indicating whether each binary feature was negated, and the sum of absolute MCC correlation coefficients among the selected features.

import maxjoshua as mh
idx, neg, rho, results = mh.binsel(
    X, y, preselect=0.8, oob_score=True, subsample=0.5, 
    n_select=5, unique=True, n_draws=100, random_state=42)

Algorithm. The task is to select, e.g., n_select features from a pool of many features. These features might be the predictions of binary classifiers. The selected features are then combined into one hard-voting classifier.

A voting classifier should have the following properties:

  • each voter (a binary feature) should be highly correlated with the target variable
  • the selected features should be mutually uncorrelated.

The algorithm works as follows:

  1. Generate multiple correlation matrices by bootstrapping. This includes computing corr(X_i, X_j) as well as corr(Y, X_i). Also store the out-of-bag (OOB) samples for evaluation.
  2. For each correlation matrix:
     a. Preselect the features i* with the highest abs(corr(Y, X_i)) estimates (e.g. pick the n_pre=? highest absolute correlations).
     b. Slice the correlation matrix corr(X_i*, X_j*) and find the least correlated combination of n_select features (see korr.mincorr).
     c. Compute the out-of-bag (OOB) performance (see step 1) of the hard voter with the selected n_select=? features.
  3. Select the feature combination with the best OOB performance as the final model.
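
For illustration, here is a minimal sketch (not part of the maxjoshua API) of how the returned idx and neg could be turned into a hard-voting prediction, assuming neg holds boolean flags marking which selected features are negated:

import numpy as np

# flip the negated binary features, then take a majority vote
votes = np.column_stack(
    [~X[:, i] if flag else X[:, i] for i, flag in zip(idx, neg)])
y_pred = votes.sum(axis=1) > (len(idx) / 2)  # majority vote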

Forward Selection for Linear Regression

Load toy dataset.

from sklearn.preprocessing import scale
from sklearn.datasets import fetch_california_housing
housing = fetch_california_housing()
X = scale(housing["data"], axis=0)
y = scale(housing["target"])

Select real-numbered features. Each row in the results list contains the n_select column indices of X, the ridge regression coefficients beta, and the RMSE loss. Warning! Please note that the features X and the target y must be scaled because mh.fltsel uses an L2-penalty on the beta coefficients and doesn't use an intercept term to shift y.

import maxjoshua as mh

idx, beta, loss, results = mh.fltsel(
    X, y, preselect=0.8, oob_score=True, subsample=0.5,
    n_select=5, unique=True, n_draws=100, random_state=42, l2=0.01)
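
As a hedged sketch (assuming beta holds the ridge coefficients for the columns listed in idx), the selected model's prediction is a plain linear combination without an intercept:

import numpy as np

# linear prediction from the selected columns; there is no intercept
# term, which is why X and y had to be scaled beforehand
y_pred = X[:, idx] @ np.asarray(beta)
rmse = np.sqrt(np.mean((y - y_pred) ** 2))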

Initialize Sparse NN Layer

The idea is to run mh.fltsel to generate an ensemble of linear models and combine them into a sparse linear neural network layer, i.e., the number of output neurons is the ensemble size. In the case of small datasets, the sparse NN layer is non-trainable because each submodel was already estimated and selected with two-way data splits in mh.fltsel (see oob_score and subsample). The sparse NN layer basically produces the submodel predictions for the meta model in the next layer, i.e., a simple dense linear layer. The inputs of the sparse NN layer must be normalized, for which a layer-normalization layer is trained.

import maxjoshua as mh
import tensorflow as tf
import sklearn.preprocessing

# create toy dataset
import sklearn.datasets
X, y = sklearn.datasets.make_regression(
    n_samples=1000, n_features=100, n_informative=20, n_targets=3)

# feature selection
# - always scale the inputs and targets -
indices, values, num_in, num_out = mh.pretrain_submodels(
    sklearn.preprocessing.scale(X), 
    sklearn.preprocessing.scale(y), 
    num_out=64, n_select=3)

# specify the model
model = tf.keras.models.Sequential([
    # sub-models
    mh.SparseLayerAsEnsemble(
        num_in=num_in, 
        num_out=num_out, 
        sp_indices=indices, 
        sp_values=values,
        sp_trainable=False,
        norm_trainable=True,
    ),
    # meta model
    tf.keras.layers.Dense(
        units=3, use_bias=False,
        # kernel_constraint=tf.keras.constraints.NonNeg()
    ),
    # scale up
    mh.InverseTransformer(
        units=3,
        init_bias=y.mean(), 
        init_scale=y.std()
    )
])
model.compile(
    optimizer=tf.keras.optimizers.Adam(
        learning_rate=3e-4, beta_1=.9, beta_2=.999, epsilon=1e-7, amsgrad=True),
    loss='mean_squared_error'
)

# train
history = model.fit(X, y, epochs=3)
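
Since the model was fit on the raw X and y (normalization and the inverse transform are part of the network), prediction is a plain Keras call. A minimal usage sketch:

# predictions come out on the original target scale, because the
# InverseTransformer layer rescales the meta model's outputs
y_pred = model.predict(X)
print(y_pred.shape)  # (1000, 3) -- one column per target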

Appendix

Installation

The maxjoshua git repo is available as a PyPI package:

pip install maxjoshua

Set up a virtual environment

python3.7 -m venv .venv
source .venv/bin/activate
pip install --upgrade pip
pip install -r requirements.txt
pip install -r requirements-dev.txt
pip install -r requirements-demo.txt

(If your git repo is stored in a folder whose path contains whitespace, don't use the .venv subfolder. Use an absolute path without whitespace instead.)

Python commands

  • Jupyter for the examples: jupyter lab
  • Check syntax: flake8 --ignore=F401 --exclude=$(grep -v '^#' .gitignore | xargs | sed -e 's/ /,/g')
  • Run Unit Tests: pytest

Publish

pandoc README.md --from markdown --to rst -s -o README.rst
python setup.py sdist 
twine upload -r pypi dist/*

Clean up

find . -type f -name "*.pyc" | xargs rm
find . -type d -name "__pycache__" | xargs rm -r
rm -r .venv

Support

Please open an issue for support.

Contributing

Please contribute using Github Flow. Create a branch, add commits, and open a pull request.


maxjoshua's Issues

add sklearn API BinSel

from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from binsel import BinSel

pipe = Pipeline(steps=[
    ('level1', FeatureUnion(transformer_list=[
        ('clf1', GaussianNB()),
        ('clf2', KNeighborsClassifier()),
        ('clf3', LogisticRegression()),
    ])),
    ('level2', BinSel(preselect=0.5)),
])

pipe.fit(X_train, y_train)
y_pred = pipe.predict(X_test)

add gitpod badge

[![Gitpod - Code Now](https://img.shields.io/badge/Gitpod-code%20now-blue.svg?longCache=true)](https://gitpod.io#https://github.com/kmedian/binsel)
