keras-team / keras-tuner

A Hyperparameter Tuning Library for Keras

Home Page: https://keras.io/keras_tuner/

License: Apache License 2.0

Python 99.62% Dockerfile 0.01% Shell 0.36%
automl deep-learning hyperparameter-optimization keras machine-learning tensorflow

keras-tuner's Introduction

KerasTuner


KerasTuner is an easy-to-use, scalable hyperparameter optimization framework that solves the pain points of hyperparameter search. Easily configure your search space with a define-by-run syntax, then leverage one of the available search algorithms to find the best hyperparameter values for your models. KerasTuner comes with Bayesian Optimization, Hyperband, and Random Search algorithms built-in, and is also designed to be easy for researchers to extend in order to experiment with new search algorithms.

Official Website: https://keras.io/keras_tuner/

Installation

KerasTuner requires Python 3.8+ and TensorFlow 2.0+.

Install the latest release:

pip install keras-tuner

You can also check out other versions in our GitHub repository.
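For example, installing the development version straight from the repository (a standard pip pattern, shown here as a convenience rather than an official instruction) looks like:

pip install git+https://github.com/keras-team/keras-tuner.git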

Quick introduction

Import KerasTuner and TensorFlow:

import keras_tuner
from tensorflow import keras

Write a function that creates and returns a Keras model. Use the hp argument to define the hyperparameters during model creation.

def build_model(hp):
  model = keras.Sequential()
  model.add(keras.layers.Dense(
      hp.Choice('units', [8, 16, 32]),
      activation='relu'))
  model.add(keras.layers.Dense(1, activation='relu'))
  model.compile(loss='mse')
  return model

Initialize a tuner (here, RandomSearch). We use objective to specify the objective to select the best models, and we use max_trials to specify the number of different models to try.

tuner = keras_tuner.RandomSearch(
    build_model,
    objective='val_loss',
    max_trials=5)

Start the search and get the best model:

tuner.search(x_train, y_train, epochs=5, validation_data=(x_val, y_val))
best_model = tuner.get_best_models()[0]

To learn more about KerasTuner, check out this starter guide.

Contributing Guide

Please refer to the CONTRIBUTING.md for the contributing guide.

Thanks to all the contributors!


Community

Ask your questions on our GitHub Discussions.

Citing KerasTuner

If KerasTuner helps your research, we appreciate your citations. Here is the BibTeX entry:

@misc{omalley2019kerastuner,
	title        = {KerasTuner},
	author       = {O'Malley, Tom and Bursztein, Elie and Long, James and Chollet, Fran\c{c}ois and Jin, Haifeng and Invernizzi, Luca and others},
	year         = 2019,
	howpublished = {\url{https://github.com/keras-team/keras-tuner}}
}

keras-tuner's People

Contributors

airvzxf, alcantarar, alexlau811, anselmoo, aviously, ben-arnao, brydon, c-pro, carchrae, chongyouquan, ebursztein, fchollet, floscha, gabrieldemarmiesse, haifeng-jin, invernizzi-at-google, jamesmullenbach, jamlong, jpodivin, lunox, nkovela1, omalleyt12, onponomarev, pnacht, sagipe, sfo, themrzmaster, thibaultmax, tomer23, yixingfu


keras-tuner's Issues

Contributing to Keras-Tuner

Great to see hyperparameter tuning is handled at the source going forward :)

I've spent some years working on the problem specifically focused on Keras models, some of which is visible in Talos. Related with this, I have two questions:

  1. Is there something I can help with?

  2. Can I learn more about the Keras-Tuner roadmap/strategy somewhere? It would help us align our development efforts with Talos, whose focus is more broadly on automation / workflow improvements, as opposed to strictly finding hyperparameters for a Keras model.

ImportError: cannot import name 'HyperResNet'

Hi,

I am trying to run tunable_resnet_cifar10 and tunable_xception_cifar10 from the tutorials, but I get the same error when importing HyperResNet from kerastuner.applications:

ImportError: cannot import name 'HyperResNet'

I tried changing the import to this:

from kerastuner.applications.resnet import HyperResNet

Everything is fine until I run this code:

# Import an hypertunable version of Resnet.
hypermodel = HyperResNet(
    input_shape=x_train.shape[1:],
    num_classes=NUM_CLASSES)
---------------------------------------------------------------------------
NameError                                 Traceback (most recent call last)
<ipython-input-3-5f0422d56ac7> in <module>
      1 # Import an hypertunable version of Resnet.
----> 2 hypermodel = HyperResNet(
      3     input_shape=x_train.shape[1:],
      4     num_classes=NUM_CLASSES)

NameError: name 'HyperResNet' is not defined

I hope you can show me what I did wrong.

Thanks

The example tutorial throws an error

(x_train, y_train), (x_test, y_test) = load_data()

import autokeras as ak
import tensorflow as tf
x_train = x_train.reshape(x_train.shape + (1,))
x_test = x_test.reshape(x_test.shape + (1,))
#x_train = tf.cast(x_train.reshape(x_train.shape + (1,)),tf.float64)
#x_test = tf.cast(x_test.reshape(x_test.shape + (1,)),tf.float64)

clf = ak.ImageClassifier(max_trials=100)
clf.fit(x_train, y_train)
y = clf.predict(x_test, y_test)

File "C:\Users\Admin\AppData\Local\Programs\Python\Python36\lib\site-packages\autokeras\auto_model.py", line 146, in prepare_data
x_val, y_val = validation_data
TypeError: 'NoneType' object is not iterable

After adding validation_split=0.05, it errors saying tf.float64 is required.

After converting to float64:
#x_train = tf.cast(x_train.reshape(x_train.shape + (1,)),tf.float64)
#x_test = tf.cast(x_test.reshape(x_test.shape + (1,)),tf.float64)

tensorflow.python.framework.errors_impl.InvalidArgumentError: Length for attr 'output_types' of 0 must be at least minimum 1
; NodeDef: {{node ZipDataset}}; Op<name=ZipDataset; signature=input_datasets:N*variant -> handle:variant; attr=output_types:list(type),min=1; attr=output_shapes:list(shape),min=1; attr=N:int,min=1> [Op:ZipDataset]

Environment: tensorflow 2.0.0rc1, Windows 10, Python 3.6

How to carry the best model forward and question about future implementation

Hi all,

Thanks for this tool, it is incredibly useful! A few follow-up questions for clarification, as these points are not immediately clear.

After hyperparameter optimisation, is there a way to carry the best model forward as the working model, to then save it and do further downstream analysis (plots, evaluation, deployment, etc.)?

And is there planned integration with TensorBoard for easy visualisation of the hyperparameter optimisation?

thanks!
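Not an official maintainer reply, but a sketch of how this is commonly handled with the public API (the data variables, file name, and TensorBoard log directory below are assumptions):

import tensorflow as tf

# Callbacks passed to search() are forwarded to model.fit(), so TensorBoard
# can record every trial for later visualisation.
tuner.search(x_train, y_train, epochs=5,
             validation_data=(x_val, y_val),
             callbacks=[tf.keras.callbacks.TensorBoard('tb_logs')])

# Carry the best model forward for evaluation, plotting, or deployment.
best_model = tuner.get_best_models(num_models=1)[0]
best_model.evaluate(x_val, y_val)
best_model.save('best_model.keras')  # file name/format is just an example

# Or rebuild from the best hyperparameters and retrain from scratch.
best_hps = tuner.get_best_hyperparameters(num_trials=1)[0]
model = tuner.hypermodel.build(best_hps)
model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=10)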

Request for roadmap / projects for contributions

Thank you for building yet another piece of awesome software for humans :-)
Can I request issues / projects listing the roadmap / calls for contributions? This would help many of us get involved easily by starting with tasks that are necessary for the project.

Thanks,

Bug in MetricsTracker

elif self.directions[name] == 'min' and value <= np.max(history):

Hi, I found a bug here in MetricsTracker's update function. The validation loss
of the best model returned by tuner.get_best_models did not match the best validation loss in
tuner.results_summary(). The reason is that the update function
returns true if the current value is <= the max value in history; for a 'min' metric
it should compare against the min value in history.
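A minimal sketch of the intended behaviour (illustrative names only, not the actual MetricsTracker source): for a metric tracked in the 'min' direction, a new value is an improvement only when it is at most the smallest value seen so far.

import numpy as np

def is_improvement(direction, value, history):
    # The reported bug: the original code compared against np.max(history)
    # even for 'min' metrics.
    if direction == 'min':
        return value <= np.min(history)
    return value >= np.max(history)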

tuner.search with model.fit_generator

In TensorFlow we often use a tf.data pipeline to generate data; it would be nice to be able to use model.fit_generator to fit the model with a provided steps_per_epoch.
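For what it's worth, tuner.search() forwards its arguments to model.fit(), and in TF 2 model.fit() accepts tf.data datasets (and generators) directly, so something along these lines should already work (a sketch; the dataset pipeline and step counts are placeholders):

import tensorflow as tf

# Hypothetical tf.data pipelines standing in for a real input pipeline.
train_ds = tf.data.Dataset.from_tensor_slices((x_train, y_train)).batch(32).repeat()
val_ds = tf.data.Dataset.from_tensor_slices((x_val, y_val)).batch(32)

tuner.search(train_ds,
             steps_per_epoch=100,
             epochs=5,
             validation_data=val_ds,
             validation_steps=10)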

Weights for model sequential have not yet been created error

I'm getting a weights error when running a simple tuner.

  • ENV: tf version 2.0.0-beta1
    keras 2.2.4-tf

  • RUNNING on Kaggle GPU powered

class FBPDHyperModel(HyperModel):
    def __init__(self,features):
        self.features_layers = features # DenseFeatures Layer         

    def build(self, hp):
        model = tf.keras.Sequential()
        model.add(self.features_layers)
        model.add(layers.Dense(units=hp.Range('units',32, 512, 32),activation='relu'))
        model.add(layers.Dense(3, activation='softmax'))
        model.compile(optimizer=tf.keras.optimizers.Adam(),loss='sparse_categorical_crossentropy',metrics=['accuracy'])
        return model  

tuner = RandomSearch(
    FBPDHyperModel(features=features_layer),
    objective='val_accuracy',
    max_trials=5,
    directory='tuner_dir')

On running, I get this error


ValueError Traceback (most recent call last)
in
3 objective='val_accuracy',
4 max_trials=5,
----> 5 directory='tuner_dir')

/opt/conda/lib/python3.6/site-packages/kerastuner/tuners/randomsearch.py in __init__(self, hypermodel, objective, max_trials, seed, **kwargs)
120 objective,
121 max_trials,
--> 122 **kwargs)

/opt/conda/lib/python3.6/site-packages/kerastuner/engine/tuner.py in __init__(self, oracle, hypermodel, objective, max_trials, executions_per_trial, max_model_size, optimizer, loss, metrics, hyperparameters, tune_new_entries, allow_new_entries, distribution_strategy, directory, project_name)
200 # Populate initial search space
201 if not self.hyperparameters.space and self.tune_new_entries:
--> 202 self._build_model(self.hyperparameters)
203
204 def search(self, *fit_args, **fit_kwargs):

/opt/conda/lib/python3.6/site-packages/kerastuner/engine/tuner.py in _build_model(self, hp)
599
600 # Check model size.
--> 601 size = utils.compute_model_size(model)
602 if self.max_model_size and size > self.max_model_size:
603 oversized_streak += 1

/opt/conda/lib/python3.6/site-packages/kerastuner/utils.py in compute_model_size(model)
21 def compute_model_size(model):
22 "comput the size of a given model"
---> 23 params = [K.count_params(p) for p in set(model.trainable_weights)]
24 return int(np.sum(params))
25

/opt/conda/lib/python3.6/site-packages/tensorflow/python/keras/engine/network.py in trainable_weights(self)
578 @property
579 def trainable_weights(self):
--> 580 self._assert_weights_created()
581 return trackable_layer_utils.gather_trainable_weights(
582 trainable=self.trainable,

/opt/conda/lib/python3.6/site-packages/tensorflow/python/keras/engine/network.py in _assert_weights_created(self)
1662 'Weights are created when the Model is first called on '
1663 'inputs or build() is called with an input_shape.' %
-> 1664 self.name)
1665
1666 @property

ValueError: Weights for model sequential have not yet been created. Weights are created when the Model is first called on inputs or build() is called with an input_shape.

PermissionError in tuner.search() (when consulting usage of partitions)

At some place, tuner.search gets the usage information for different partitions.
I get a PermissionError in one of the iterations.

My workaround was a try-except, assigning arbitrary memory values (assuming a usage of 100%). It is not elegant at all.
Traceback below

Traceback (most recent call last):
  File "mlp_1_search.py", line 135, in <module>
    tuner.search(train_gen, validation_data=test_gen)
  File "lib/python3.6/site-packages/kerastuner/engine/tuner.py", line 223, in search
    self.run_trial(trial, hp, fit_args, fit_kwargs)
  File "lib/python3.6/site-packages/kerastuner/tuners/hyperband.py", line 326, in run_trial
    super(Hyperband, self).run_trial(trial, hp, fit_args, fit_kwargs)
  File "lib/python3.6/site-packages/kerastuner/engine/tuner.py", line 261, in run_trial
    model.fit(*fit_args, **fit_kwargs)
  File "lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py", line 643, in fit
    use_multiprocessing=use_multiprocessing)
  File "lib/python3.6/site-packages/tensorflow/python/keras/engine/training_generator.py", line 604, in fit
    steps_name='steps_per_epoch')
  File "lib/python3.6/site-packages/tensorflow/python/keras/engine/training_generator.py", line 293, in model_iteration
    callbacks._call_batch_hook(mode, 'end', step, batch_logs)
  File "lib/python3.6/site-packages/tensorflow/python/keras/callbacks.py", line 232, in _call_batch_hook
    batch_hook(batch, logs)
  File "lib/python3.6/site-packages/tensorflow/python/keras/callbacks.py", line 515, in on_train_batch_end
    self.on_batch_end(batch, logs=logs)
  File "lib/python3.6/site-packages/kerastuner/engine/tuner_utils.py", line 91, in on_batch_end
    self.tuner.on_batch_end(self.execution, self.model, batch, logs)
  File "lib/python3.6/site-packages/kerastuner/engine/tuner.py", line 293, in on_batch_end
    self._display.on_batch_end(execution, model, batch, logs=logs)
  File "lib/python3.6/site-packages/kerastuner/engine/tuner_utils.py", line 184, in on_batch_end
    host_status = self.host.get_status()
  File "lib/python3.6/site-packages/kerastuner/abstractions/host.py", line 93, in get_status
    status['disk'] = self._get_disk_usage()
  File "lib/python3.6/site-packages/kerastuner/abstractions/host.py", line 198, in _get_disk_usage
    usage = psutil.disk_usage(name)
  File "lib/python3.6/site-packages/psutil/__init__.py", line 2121, in disk_usage
    return _psplatform.disk_usage(path)
  File "lib/python3.6/site-packages/psutil/_psposix.py", line 131, in disk_usage
    st = os.statvfs(path)
PermissionError: [Errno 13] Permission denied: '/media/javier/cordon'

Does the tune_new_entries parameter really do what the documentation says?

Summary

Hi,
I have been training HyperResNet using the Hyperband tuner, and I have noticed that when I set the tune_new_entries parameter to True, the parameters which I did not specify manually are not even listed in the tuner's output. After reading the documentation, I was under the impression that the tuner is supposed to add those parameters to the search space and tune them.

On the other hand, when I set the tune_new_entries to False, the tuner lists all parameters of the HyperResNet and chooses a different value every time I run it.

Steps to reproduce the problem

from tensorflow import keras

fashion_mnist = keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()

train_images = train_images / 255.0
test_images = test_images / 255.0

train_images = train_images.reshape(len(train_images), 28, 28, 1)
test_images = test_images.reshape(len(test_images), 28, 28, 1)

from keras.utils import to_categorical
train_labels_binary = to_categorical(train_labels)

from kerastuner.applications import HyperResNet
from kerastuner.tuners import Hyperband

hypermodel = HyperResNet(input_shape=(28, 28, 1), classes=10)

from kerastuner import HyperParameters
hp = HyperParameters()
hp.Choice('learning_rate', values=[1e-3, 1e-4])
hp.Fixed('optimizer', value='adam')

tuner = Hyperband(
    hypermodel,
    objective='val_accuracy',
    hyperparameters=hp,
    tune_new_entries=True,
    max_trials=20,
    directory='FashionMnistResNet',
    project_name='FashionMNIST')

tuner.search(train_images, train_labels_binary, validation_split=0.1)

Expected results

The list of tuned arguments printed to the output should also contain values not explicitly specified in the HyperParameters instance, because according to the documentation, tune_new_entries=False prevents unlisted parameters from being tuned. I set it to True, so I expect the unlisted parameters to be tuned. That does not seem to happen.

Actual results

This is printed:

Hp values:
|-learning_rate: 0.0001
|-optimizer: adam
|-tuner/epochs: 3

When I set tune_new_entries=False, it prints this:

|-learning_rate: 0.0001
|-optimizer: adam
|-pooling: max
|-tuner/epochs: 3
|-v2/conv3_depth: 4
|-v2/conv4_depth: 36
|-version: v2

I guess it should be the other way around.

Versions

Keras: 2.2.4-tf
Tensorflow: 2.0.0-beta1
Keras Tuner: installed when commit cfc6e20956cb8554ee29ef2a1ba4635da7d0228b was the most recent one

TypeError: 'HTML' object is not subscriptable

When I run the MNIST example, I get this error after tuner.search:

TypeError Traceback (most recent call last)
in
4 y=y,
5 epochs=3,
----> 6 validation_data=(val_x, val_y))
7

~\Anaconda3\lib\site-packages\kerastuner\engine\tuner.py in search(self, *fit_args, **fit_kwargs)
222 self.on_trial_begin(trial)
223 self.run_trial(trial, hp, fit_args, fit_kwargs)
--> 224 self.on_trial_end(trial)
225 self.on_search_end()
226

~\Anaconda3\lib\site-packages\kerastuner\engine\tuner.py in on_trial_end(self, trial)
375 objective=self.objective,
376 remaining_trials=self.remaining_trials,
--> 377 max_trials=self.max_trials)
378
379 def on_search_end(self):

~\Anaconda3\lib\site-packages\kerastuner\engine\tuner_utils.py in on_trial_end(self, averaged_metrics, best_metrics, objective, remaining_trials, max_trials)
137 row = display.colorize_row(row, 'red')
138 rows.append(row)
--> 139 display.display_table(rows)
140
141 # Tuning budget exhausted

~\Anaconda3\lib\site-packages\kerastuner\abstractions\display.py in display_table(rows, title, indent)
372 out.append(indent + line)
373 table = "\n".join(out)
--> 374 display(table)
375
376

~\Anaconda3\lib\site-packages\IPython\core\display.py in __init__(self, data, url, filename, metadata)
691 return prefix.startswith("<iframe ") and suffix.endswith("</iframe>")
692
--> 693 if warn():
694 warnings.warn("Consider using IPython.display.IFrame instead")
695 super(HTML, self).__init__(data=data, url=url, filename=filename, metadata=metadata)

~\Anaconda3\lib\site-packages\IPython\core\display.py in warn()
687 # long string and we're only interested in its beginning and end.
688 #
--> 689 prefix = data[:10].lower()
690 suffix = data[-10:].lower()
691 return prefix.startswith("<iframe ") and suffix.endswith("</iframe>")

TypeError: 'HTML' object is not subscriptable

Cannot import Hyperband from kerastuner

Hi everyone,

I am on tensorflow-2.0.0-beta1 and Google Colab with the GPU environment turned on. Here is the minimal set of ways to reproduce what I am getting as errors:

I tried to install kerastuner on Google Colab using !pip install kerastuner (from my empirical intuition), and I also tried !pip install keras-tuner (https://pypi.org/project/keras-tuner/).
Both times the library gets installed successfully. I did restart the runtime after the installations were complete.
When I do from kerastuner.tuners import Hyperband or from kerastuner import Hyperband, I get the following errors:

" ImportError: cannot import name 'Hyperband' "

" ModuleNotFoundError: No module named 'kerastuner.tuners' "

Looking forward to hearing from the team.

ImportError: Could not find 'nvcuda.dll'. TensorFlow requires that this DLL be installed in a directory that is named in your %PATH% environment variable. Typically it is installed in 'C:\Windows\System32'. If it is not present, ensure that you have a CUDA-capable GPU with the correct driver installed.

---------------------------------------------------------------------------
OSError                                   Traceback (most recent call last)
~\AppData\Local\Continuum\anaconda3\envs\Dont touch me\lib\site-packages\tensorflow\python\platform\self_check.py in preload_check()
     61         try:
---> 62           ctypes.WinDLL(build_info.nvcuda_dll_name)
     63         except OSError:

~\AppData\Local\Continuum\anaconda3\envs\Dont touch me\lib\ctypes\__init__.py in __init__(self, name, mode, handle, use_errno, use_last_error)
    347         if handle is None:
--> 348             self._handle = _dlopen(self._name, mode)
    349         else:

OSError: [WinError 126] The specified module could not be found

During handling of the above exception, another exception occurred:

ImportError                               Traceback (most recent call last)
 in 
----> 1 from tensorflow.keras.utils import to_categorical
      2 from tensorflow.keras.datasets import cifar10
      3 from kerastuner.applications import HyperResNet
      4 from kerastuner import RandomSearch

~\AppData\Local\Continuum\anaconda3\envs\Dont touch me\lib\site-packages\tensorflow\__init__.py in 
     26 
     27 # pylint: disable=g-bad-import-order
---> 28 from tensorflow.python import pywrap_tensorflow  # pylint: disable=unused-import
     29 from tensorflow.python.tools import module_util as _module_util
     30 

~\AppData\Local\Continuum\anaconda3\envs\Dont touch me\lib\site-packages\tensorflow\python\__init__.py in 
     47 import numpy as np
     48 
---> 49 from tensorflow.python import pywrap_tensorflow
     50 
     51 # Protocol buffers

~\AppData\Local\Continuum\anaconda3\envs\Dont touch me\lib\site-packages\tensorflow\python\pywrap_tensorflow.py in 
     28 # Perform pre-load sanity checks in order to produce a more actionable error
     29 # than we get from an error during SWIG import.
---> 30 self_check.preload_check()
     31 
     32 # pylint: disable=wildcard-import,g-import-not-at-top,unused-import,line-too-long

~\AppData\Local\Continuum\anaconda3\envs\Dont touch me\lib\site-packages\tensorflow\python\platform\self_check.py in preload_check()
     68               "'C:\\Windows\\System32'. If it is not present, ensure that you "
     69               "have a CUDA-capable GPU with the correct driver installed."
---> 70               % build_info.nvcuda_dll_name)
     71 
     72       if hasattr(build_info, "cudart_dll_name") and hasattr(

ImportError: Could not find 'nvcuda.dll'. TensorFlow requires that this DLL be installed in a directory that is named in your %PATH% environment variable. Typically it is installed in 'C:\Windows\System32'. If it is not present, ensure that you have a CUDA-capable GPU with the correct driver installed.

Choice is limited to int, float, str, or bool

TypeError: A `Choice` can contain only `int`, `float`, `str`, or `bool`, found values: [None]with types: {<class 'NoneType'>}

I wish I could use other types; for example, I may have a list of choices like [None, Path(model_path), Path(model_path2)], or another case such as [{}, {}, {}].
Is there a reason for restricting to these specific types?
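A common workaround, pending broader type support, is to tune a plain string key and map it back to the real object inside build_model (a sketch; the dictionary and Path values below are taken from the question and are illustrative):

from pathlib import Path
from tensorflow import keras

# Hypothetical objects we would like to "choose" between.
pretrained_paths = {
    'none': None,
    'model_a': Path('model_path'),
    'model_b': Path('model_path2'),
}

def build_model(hp):
    # Tune over plain string keys, then map the chosen key back to the object.
    key = hp.Choice('pretrained', sorted(pretrained_paths))
    pretrained = pretrained_paths[key]  # None or a Path, used however needed

    model = keras.Sequential([keras.layers.Dense(1, input_shape=(4,))])
    model.compile(loss='mse')
    return model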

Loading Best Model from File

The documentation clearly explains the procedure for loading the best model after hyperparameter optimization is complete.

models = tuner.get_best_models(num_models=2)

The metrics / predictions can also be obtained with:

# Evaluate the best model.
loss, accuracy = best_model.evaluate(x_val, y_val)

However, how do you load a pre-tuned model from file, and how do you get the best model to make predictions?
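A sketch of the usual pattern with the current API (the directory and project_name values are placeholders): re-creating a tuner with the same directory and project_name reloads the completed search from disk, after which the best models or hyperparameters can be pulled out without searching again.

import keras_tuner

# Point a fresh tuner at the existing results on disk; with overwrite=False
# (the default) the finished trials are reloaded instead of re-run.
tuner = keras_tuner.RandomSearch(
    build_model,
    objective='val_loss',
    max_trials=5,
    directory='my_dir',
    project_name='helloworld',
    overwrite=False)

best_model = tuner.get_best_models(num_models=1)[0]
predictions = best_model.predict(x_new)  # x_new is a placeholder for new data

# Or rebuild from the best hyperparameters and retrain before predicting.
best_hps = tuner.get_best_hyperparameters(num_trials=1)[0]
model = tuner.hypermodel.build(best_hps)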

Search fails on Windows machine with CD drive: [WinError21]

Hello,

Running keras-tuner with tf-nightly-gpu-2.0-preview 2.0.0.dev20190718 on Windows 10 through a Jupyter server. RandomSearch fails with error code [WinError21]. This is the traceback:

PermissionError                           Traceback (most recent call last)
 in 
      4         batch_size=32,
      5         callbacks=[rlr_callback, earlystop_callback],
----> 6 	validation_data=(transformed_test[0], transformed_test[1]))

~\Anaconda3\envs\jupyter-server\lib\site-packages\kerastuner\engine\tuner.py in search(self, *fit_args, **fit_kwargs)
    219             self.trials.append(trial)
    220             self.on_trial_begin(trial)
--> 221             self.run_trial(trial, hp, fit_args, fit_kwargs)
    222             self.on_trial_end(trial)
    223         self.on_search_end()

~\Anaconda3\envs\jupyter-server\lib\site-packages\kerastuner\engine\tuner.py in run_trial(self, trial, hp, fit_args, fit_kwargs)
    257             fit_kwargs['callbacks'] = self._inject_callbacks(
    258                 original_callbacks, trial, execution)
--> 259             model.fit(*fit_args, **fit_kwargs)
    260             self.on_execution_end(trial, execution, model)
    261 

~\Anaconda3\envs\jupyter-server\lib\site-packages\tensorflow_core\python\keras\engine\training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, max_queue_size, workers, use_multiprocessing, **kwargs)
    698         max_queue_size=max_queue_size,
    699         workers=workers,
--> 700         use_multiprocessing=use_multiprocessing)
    701 
    702   def evaluate(self,

~\Anaconda3\envs\jupyter-server\lib\site-packages\tensorflow_core\python\keras\engine\training_arrays.py in fit(self, model, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, **kwargs)
    667         validation_steps=validation_steps,
    668         validation_freq=validation_freq,
--> 669         steps_name='steps_per_epoch')
    670 
    671   def evaluate(self,

~\Anaconda3\envs\jupyter-server\lib\site-packages\tensorflow_core\python\keras\engine\training_arrays.py in model_iteration(model, inputs, targets, sample_weights, batch_size, epochs, verbose, callbacks, val_inputs, val_targets, val_sample_weights, shuffle, initial_epoch, steps_per_epoch, validation_steps, validation_freq, mode, validation_in_fit, prepared_feed_values_from_dataset, steps_name, **kwargs)
    397         # Callbacks batch end.
    398         batch_logs = cbks.make_logs(model, batch_logs, batch_outs, mode)
--> 399         callbacks._call_batch_hook(mode, 'end', batch_index, batch_logs)
    400         progbar.on_batch_end(batch_index, batch_logs)
    401 

~\Anaconda3\envs\jupyter-server\lib\site-packages\tensorflow_core\python\keras\callbacks.py in _call_batch_hook(self, mode, hook, batch, logs)
    231     for callback in self.callbacks:
    232       batch_hook = getattr(callback, hook_name)
--> 233       batch_hook(batch, logs)
    234     self._delta_ts[hook_name].append(time.time() - t_before_callbacks)
    235 

~\Anaconda3\envs\jupyter-server\lib\site-packages\tensorflow_core\python\keras\callbacks.py in on_train_batch_end(self, batch, logs)
    514     """
    515     # For backwards compatibility.
--> 516     self.on_batch_end(batch, logs=logs)
    517 
    518   def on_test_batch_begin(self, batch, logs=None):

~\Anaconda3\envs\jupyter-server\lib\site-packages\kerastuner\engine\tuner_utils.py in on_batch_end(self, batch, logs)
     89 
     90     def on_batch_end(self, batch, logs=None):
---> 91         self.tuner.on_batch_end(self.execution, self.model, batch, logs)
     92 
     93     def on_epoch_end(self, epoch, logs=None):

~\Anaconda3\envs\jupyter-server\lib\site-packages\kerastuner\engine\tuner.py in on_batch_end(self, execution, model, batch, logs)
    284             execution.per_batch_metrics.update(name, float(value))
    285         self._checkpoint_execution(execution)
--> 286         self._display.on_batch_end(execution, model, batch, logs=logs)
    287 
    288     def on_epoch_end(self, execution, model, epoch, logs=None):

~\Anaconda3\envs\jupyter-server\lib\site-packages\kerastuner\engine\tuner_utils.py in on_batch_end(self, execution, model, batch, logs)
    182         # create bar desc with updated statistics
    183         description = ''
--> 184         host_status = self.host.get_status()
    185         if len(host_status['gpu']):
    186             gpu_usage = [float(gpu['usage']) for gpu in host_status['gpu']]

~\Anaconda3\envs\jupyter-server\lib\site-packages\kerastuner\abstractions\host.py in get_status(self, no_cach)
     92             status['gpu'] = self._get_gpu_usage()
     93             status['uptime'] = self._get_uptime()
---> 94             status['disk'] = self._get_disk_usage()
     95             status['software'] = self.software
     96             status['hostname'] = self.hostname

~\Anaconda3\envs\jupyter-server\lib\site-packages\kerastuner\abstractions\host.py in _get_disk_usage(self)
    190         for partition in self.partitions:
    191             name = partition.mountpoint
--> 192             usage = psutil.disk_usage(name)
    193             info = {
    194                 "name": name,

~\Anaconda3\envs\jupyter-server\lib\site-packages\psutil\__init__.py in disk_usage(path)
   2119     plus the percentage usage.
   2120     """
-> 2121     return _psplatform.disk_usage(path)
   2122 
   2123 

~\Anaconda3\envs\jupyter-server\lib\site-packages\psutil\_pswindows.py in disk_usage(path)
    291         # to fail immediately. After all we are accepting input here...
    292         path = path.decode(ENCODING, errors="strict")
--> 293     total, free = cext.disk_usage(path)
    294     used = total - free
    295     percent = usage_percent(used, total, round_=1)

PermissionError: [WinError 21] Le périphérique n'est pas prêt: 'E'

On my machine, 'E' is the CD drive.

This seems related to asking psutil for disk usage for all drives indiscriminately. The error can be reproduced with the following on a machine with a CD drive:

import psutil

for partition in psutil.disk_partitions():
    name=partition.mountpoint
    usage=psutil.disk_usage(name)

Fails with:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "~\Anaconda3\envs\jupyter-server\lib\site-packages\psutil\__init__.py", line 2121, in disk_usage
    return _psplatform.disk_usage(path)
  File "~\Anaconda3\envs\jupyter-server\lib\site-packages\psutil\_pswindows.py", line 293, in disk_usage
    total, free = cext.disk_usage(path)
PermissionError: [WinError 21] Le périphérique n'est pas prêt: 'E'

I propose a fix which is to ignore removable drives when getting disk usage statistics:

Add the following between lines 190 and 191 of kerastuner/abstractions/host.py :

if partition.opts.upper() in ('CDROM', 'REMOVABLE'):
    continue

Please support RL and custom training loops

To "pull" hyperparameters is a nice way to write less code, I really like the API for this repo, but we use a custom GradientTape training loop for multiple different multi-step tasks, model.compile/model.fit doesnt work for us

how do you use this repo for custom training loops?
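There is no official answer here, but one hedged approach is to subclass the base Tuner and override run_trial with a hand-written training loop, reporting the objective to the oracle yourself. A sketch under that assumption (the exact score-reporting API differs between versions, so treat the update_trial/save_model calls as version-dependent):

import keras_tuner
import tensorflow as tf

class CustomLoopTuner(keras_tuner.Tuner):
    # Runs each trial with a custom GradientTape loop instead of model.fit().
    def run_trial(self, trial, train_ds, epochs=1):
        hp = trial.hyperparameters
        model = self.hypermodel.build(hp)
        optimizer = tf.keras.optimizers.Adam(hp.Float('lr', 1e-4, 1e-2, sampling='log'))
        loss_fn = tf.keras.losses.MeanSquaredError()

        for _ in range(epochs):
            epoch_loss = tf.keras.metrics.Mean()
            for x, y in train_ds:
                with tf.GradientTape() as tape:
                    loss = loss_fn(y, model(x, training=True))
                grads = tape.gradient(loss, model.trainable_variables)
                optimizer.apply_gradients(zip(grads, model.trainable_variables))
                epoch_loss.update_state(loss)

        # Report the objective (recent releases also allow simply returning it).
        self.oracle.update_trial(trial.trial_id, {'loss': float(epoch_loss.result())})
        self.save_model(trial.trial_id, model)

The custom tuner is then constructed with an oracle and the hypermodel, just like the built-in tuners, and driven with tuner.search(train_ds, epochs=...).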

Non-sparse categorical_accuracy loss causes best model to never update

I'm trying to optimize my own take on the MNIST classification problem, and for some reason the best model is never updated from the initial one, even when the categorical_accuracy metric has reached better values.

Here's an example statistics printout (val_categorical_accuracy is highlighted in red):

┌──────────────────────────┬────────────┬───────────────┐
│ Name                     │ Best model │ Current model │
├──────────────────────────┼────────────┼───────────────┤
│ categorical_accuracy     │ 0.9107     │ 0.9598        │
│ loss                     │ 0.3016     │ 0.1321        │
│ val_categorical_accuracy │ 0.9726     │ 0.9846        │
│ val_loss                 │ 0.094      │ 0.0509        │
└──────────────────────────┴────────────┴───────────────┘

However, when changing the model to use sparse_categorical_accuracy (as is used in the example script), everything magically works just fine and better models are selected again.

Can't create an HP inside of a Layer's `call`

This gives an error in deserialize_keras_object about not recognizing Float (likely something to do with how module_objects are being picked up):

from tensorflow import keras
import kerastuner
import numpy as np

def build_model(hp):
    class Bias(keras.layers.Layer):
        def call(self, x):
            return x + hp.Float('bias', 0, 2, step=0.5)

    layer = Bias(input_shape=(1,))
    model = keras.Sequential([layer])
    model.compile('sgd', 'mse', metrics=['accuracy'])
    return model

x = np.zeros((10, 1))
y = np.ones((10, 1))

rs = kerastuner.tuners.RandomSearch(build_model, 'val_accuracy', 10)
rs.search(x, y, epochs=1, validation_data=(x, y))

Installation issues

Hi. I am really excited to try kerastuner since hyperparameter tuning is something I really love to play with in deep learning. However, due to the lack of an installation guide, I could not proceed any further than loading and preprocessing the dataset.

I am on tensorflow-2.0.0-beta0 and Google Colab with the GPU environment turned on. Here is the minimal set of ways to reproduce what I am getting as errors:

  • I tried to install kerastuner using pip install kerastuner (this was from my empirical intuition) and pip install keras-tuner (https://pypi.org/project/keras-tuner/).
  • Both times the library gets installed successfully.
  • When I do a from kerastuner.tuners import GridSearch I get the attached error stack (I did restart the runtime after the installations were complete).

Would love to hear from the team.

Trace: trace.txt.zip

Extracting history from best trained model and viewing progress

Hello all,

I was just wondering if there is a way of viewing the history of the best model and using it to plot loss, accuracy, and other metrics, so we can visualise exactly how the model progressed when the best model was being selected?

I want to be sure that the model is not overfitting so visualising this would be very helpful.

Many thanks!

Amir
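One workaround (a sketch, not an official feature): rebuild a model from the best hyperparameters and use the History object returned by model.fit, which gives exactly the per-epoch curves needed to check for overfitting. The matplotlib usage is illustrative.

import matplotlib.pyplot as plt

best_hps = tuner.get_best_hyperparameters(num_trials=1)[0]
model = tuner.hypermodel.build(best_hps)

history = model.fit(x_train, y_train,
                    validation_data=(x_val, y_val),
                    epochs=50)

# Plot training vs. validation loss to spot overfitting.
plt.plot(history.history['loss'], label='loss')
plt.plot(history.history['val_loss'], label='val_loss')
plt.legend()
plt.show()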

Add TPE Optimizer

There is a similar library called hyperopt and the main optimizer included in that is TPE (Tree-structured Parzen Estimator) as described by Bergstra et al. They also provide a fairly comprehensive set of distributions to optimize over that might be inspiring.

Would it be possible to get that optimiser implemented here? I've used it with keras through another library with very good results.

tf < 2.0 compatibility

Are there any specific issues with keras-tuner on tf < 2.0 ?
I previously used it with the latest tf but had to downgrade due to many compatibility issues...
It seems to be working on tf 1.14, would it be safe to continue using it?

TypeError when sorting candidates during Hyperband search

The following simple toy example fails with the error message shown below. When using the RandomSearch tuner instead, everything works as expected.

def build_model(hp):
    inputs = layers.Input(shape=(5, ))
    x = layers.Dense(units=hp.Range('units', min_value=32, max_value=512, step=32),
                         activation='relu')(inputs)
    predictions = layers.Dense(1)(x)
    model = keras.models.Model(inputs=inputs, outputs=predictions)
    model.compile(optimizer='adam', loss='mean_squared_error')
    return model

tuner = kerastuner.tuners.Hyperband(
    build_model,
    objective='val_loss',
    max_trials=2,
    executions_per_trial=1
)

tuner.search(np.eye(5), np.ones((5, 1)),
             validation_data=(np.eye(5), np.ones((5, 1))),
             epochs=2)
1/2 trials left
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-3-c7537b51ff32> in <module>
     19 tuner.search(np.eye(5), np.ones((5, 1)),
     20              validation_data=(np.eye(5), np.ones((5, 1))),
---> 21              epochs=2)

/opt/conda/lib/python3.6/site-packages/kerastuner/engine/tuner.py in search(self, *fit_args, **fit_kwargs)
    207             # Obtain unique trial ID to communicate with the oracle.
    208             trial_id = tuner_utils.generate_trial_id()
--> 209             hp = self._call_oracle(trial_id)
    210             if hp is None:
    211                 # Oracle triggered exit

/opt/conda/lib/python3.6/site-packages/kerastuner/engine/tuner.py in _call_oracle(self, trial_id)
    525         # Obtain hp value suggestions from the oracle.
    526         while 1:
--> 527             oracle_answer = self.oracle.populate_space(trial_id, hp.space)
    528             if oracle_answer['status'] == 'RUN':
    529                 hp.values = oracle_answer['values']

/opt/conda/lib/python3.6/site-packages/kerastuner/tuners/hyperband.py in populate_space(self, trial_id, space)
     86         if self._bracket_index + 1 < self._num_brackets:
     87             self._bracket_index += 1
---> 88             self._select_candidates()
     89         # If the current band ends
     90         else:

/opt/conda/lib/python3.6/site-packages/kerastuner/tuners/hyperband.py in _select_candidates(self)
    135     def _select_candidates(self):
    136         sorted_candidates = sorted(list(range(len(self._candidates))),
--> 137                                    key=lambda i: self._candidate_score[i])
    138         num_selected_candidates = self._model_sequence[self._bracket_index]
    139         for index in sorted_candidates[:num_selected_candidates]:

TypeError: '<' not supported between instances of 'NoneType' and 'float'

The code was run with Python 3.6.6 and the following relevant libraries:

tensorflow                         2.0.0b1      
Keras-Tuner                        0.9.0.1562790722 
numpy                              1.16.4 

pseudo genetic search

Hello, I found this package very interesting, especially the way the parameters are integrated inside model creation/design. So far Bayesian, Hyperband, and random search are implemented... Recently we found it interesting to use a kind of genetic search where you mix the best parameters together and randomly change some of them... Would you be interested in having something like sklearn-genetic in this package, via a PR? Thanks

Confusing (and incorrect) results_summary and error for weight initialisation

Hi all,

I was wondering if someone could clarify a few things for me, I would be very grateful! :)

I have run keras tuner with the following code in order to optimise a model by unit number and layer number:

def build_model(hp):
    model = keras.Sequential()
    for i in range(hp.Int('num_layers', 2, 20)):
        model.add(layers.Dense(units=hp.Int('units_' + str(i),
                                            min_value=32,
                                            max_value=512,
                                            step=32),
                               activation='relu'))
    model.add(layers.Dense(24, activation='sigmoid'))
    model.compile(
        optimizer=keras.optimizers.Adam(
            hp.Choice('learning_rate', [1e-2, 1e-3, 1e-4])),
        loss='binary_crossentropy',
        metrics=['accuracy'])
    return model

tuner = RandomSearch(
    build_model,
    objective='val_accuracy',
    max_trials=5,
    executions_per_trial=5,
    directory='/Users/blablabla/Desktop/',
    project_name='Optimal model')

tuner.search_space_summary()

tuner.search(X_train.values, y_train,
             epochs=50,
             validation_data=(X_test.values, y_test),
             shuffle=True
             )

This code runs without any issue! However, when calling tuner.results_summary(), the results summary is completely off from the actual training results (both during hyperparameter training, and also if I just train the model without the hyperparameter search (plain keras)).

tuner.results_summary=
[Results summary]
 |-Results in /Users/jassim01/Desktop/CRUK Cambridge/R/Amir/Optimal model
 |-Ran 5 trials
 |-Ran 25 executions (5 per trial)
 |-Best val_accuracy: **0.5650**

The best val_accuracy is 0.56, which is completely wrong compared to the best performance I can achieve when running keras alone, or during the hyperparameter training itself. During hyperparameter training I am achieving, according to the progress report, in excess of 90% [see one line as an example below].

Is this a bug or am I misinterpreting something?

[CPU: 43%]Epoch 28/50: 100%|██████████| 5/5 [00:00<00:00, 61.46steps/s, loss=0.09, accuracy=0.969, val_loss=0.184, val_accuracy=0.913

Secondly, when calling best_model:

best_model = tuner.get_best_models(num_models=1)[0]

I get the error: ValueError: Weights for model sequential_2 have not yet been created. Weights are created when the Model is first called on inputs or build()

I understand this error; however, none of the tutorials call model.build at the end of the for loop in the examples. Since the model is actually able to train, is this a bug?

If not, is it OK to add an input_dim inside the for loop as follows?

model.add(layers.Dense(input_dim=5078, units=hp.Int('units_' + str(i),
                                                    min_value=32,
                                                    max_value=512,
                                                    step=32), ... etc

If I do this, tuner.results_summary() shows a best val_accuracy of 0.90... and get_best_models also works.

I just want to confirm this is correct.

Thanks for your help!!

RuntimeError: Model-building function did not return a valid Keras Model instance

Hi,

I followed the tutorial exactly and I got this error...

Input :

import keras
from keras import layers
from kerastuner.tuners import RandomSearch

def build_model(hp):
    model = keras.Sequential()
    model.add(layers.Dense(units=hp.Int('units',
                                        min_value=32,
                                        max_value=512,
                                        step=32),
                           activation='relu'))
    model.add(layers.Dense(10, activation='softmax'))
    model.compile(
        optimizer=keras.optimizers.Adam(
            hp.Choice('learning_rate',
                      values=[1e-2, 1e-3, 1e-4])),
        loss='sparse_categorical_crossentropy',
        metrics=['accuracy'])
    return model

tuner = RandomSearch(
    build_model,
    objective='val_accuracy',
    max_trials=5,
    executions_per_trial=3,
    directory='my_dir',
    project_name='helloworld')

Output :

RuntimeError: Model-building function did not return a valid Keras Model instance, found <keras.engine.sequential.Sequential object at 0x7f7209a72950>

Help please ?

Callback after each trial

In the same spirit as tf.keras.callbacks, it would be nice to have callbacks called at the end of each trial and/or execution.

My use case would be to send a notification to myself but I am sure plenty of other use-cases exist.

I can draft a PR if interested.
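Until such a hook exists as a first-class feature, a hedged workaround is to subclass the tuner in use and override on_trial_end, which the engine already calls once per finished trial (a sketch; notify is a stand-in for whatever notification mechanism is used):

import keras_tuner

def notify(message):
    # Stand-in for e-mail, Slack, etc.
    print(message)

class NotifyingRandomSearch(keras_tuner.RandomSearch):
    def on_trial_end(self, trial):
        super().on_trial_end(trial)
        notify('Trial {} finished with score {}'.format(trial.trial_id, trial.score))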

BayesianOptimization Error

I tested the following code:

from kerastuner.tuners import BayesianOptimization

tuner = BayesianOptimization(
    build_model,
    objective='val_accuracy',
    max_trials=5,
    executions_per_trial=3,
    directory='test_dir')

tuner.search_space_summary()

tuner.search(x=x,
             y=y,
             epochs=3,
             validation_data=(val_x, val_y))

tuner.results_summary()

Then, I got the following error:


AttributeError Traceback (most recent call last)
in ()
13 y=y,
14 epochs=3,
---> 15 validation_data=(val_x, val_y))
16
17 tuner.results_summary()

5 frames
/usr/local/lib/python3.6/dist-packages/kerastuner/tuners/bayesian.py in save(self, fname)
82 'score': self._score,
83 'values': self._values,
---> 84 'x': self._x.tolist(),
85 'y': self._y.tolist(),
86 }

AttributeError: 'NoneType' object has no attribute 'tolist'

Can I get a simple example using BayesianOptimization?

HTML expects text

strange error

When I execute:
tuner.search(x_train,              # training data
             y_train,              # target labels
             batch_size=256,       # mini-batch size
             epochs=5,             # number of training epochs
             validation_split=0.2, # fraction of the data used for validation
             verbose=0)

Link on colab with example:
https://colab.research.google.com/drive/1nd3aA4BdFbncb4U2zVzwJ0Npqcc14-hA

HyperbandOracle needs call to result() to mark trial as ended.

HyperbandOracle marks the trial as ended in HyperbandOracle.result() and records the score. This function is part of the Oracle interface but is never actually called from Tuner. A possible place to call self.oracle.result() could be Tuner.on_trial_end().

results_summary() not showing hyperparameter values

Hi,

I am using keras-tuner at commit 7f6b00f45c6e0b0debaf183fa5f9dcef824fb02f.

I run RandomSearch tuner in Google Colab Notebook. Calling results_summary() gives me the following output:

|-Results in test_dir/tune_nn
|-Showing 10 best trials
|-Objective: Objective(name='val_accuracy', direction='max') Score: 0.8007448315620422
|-Objective: Objective(name='val_accuracy', direction='max') Score: 0.7988826632499695
|-Objective: Objective(name='val_accuracy', direction='max') Score: 0.774674117565155
|-Objective: Objective(name='val_accuracy', direction='max') Score: 0.77094966173172
|-Objective: Objective(name='val_accuracy', direction='max') Score: 0.5977653861045837

This comment suggests that this should also display the hyperparameters along with their values. Is this expected?

Currently, I have to extract the hyperparameters and their values with:

tuner.oracle.get_best_trials(num_trials=1)[0].hyperparameters.values

Thanks!

BayesianOptimization Error about indexing

I got an error at the second trial owing to indexing.

Source Code

from tensorflow import keras
from tensorflow.keras import layers
import numpy as np
(x, y), (val_x, val_y) = keras.datasets.mnist.load_data()
x = x.astype('float32') / 255.
val_x = val_x.astype('float32') / 255.
x = x[:10000]
y = y[:10000]

def build_model(hp):
    model = keras.Sequential()
    model.add(layers.Flatten(input_shape=(28, 28)))
    for i in range(hp.Range('num_layers', 2, 20)):
        model.add(layers.Dense(units=hp.Range('units_' + str(i), 32, 512, 32),
                               activation='relu'))
    model.add(layers.Dense(10, activation='softmax'))
    model.compile(
        optimizer=keras.optimizers.Adam(
            hp.Choice('learning_rate', [1e-2, 1e-3, 1e-4])),
        loss='sparse_categorical_crossentropy',
        metrics=['accuracy'])
    return model

from kerastuner.tuners import BayesianOptimization

tuner = BayesianOptimization(
    build_model,
    objective='val_accuracy',
    max_trials=5,
    executions_per_trial=3,
    directory='test_dir')

tuner.search_space_summary()

tuner.search(x=x,
             y=y,
             epochs=3,
             validation_data=(val_x, val_y))

tuner.results_summary()

Output


TypeError Traceback (most recent call last)
in ()
13 y=y,
14 epochs=3,
---> 15 validation_data=(val_x, val_y))
16
17 tuner.results_summary()

3 frames
/usr/local/lib/python3.6/dist-packages/kerastuner/tuners/bayesian.py in _get_training_data(self)
160 for name, value in values.items():
161 index = self._get_hp_index(name)
--> 162 hp = self.space[index]
163 if isinstance(hp, hp_module.Choice):
164 value = hp.values.index(value)

TypeError: list indices must be integers or slices, not NoneType

implement cross-validation

Would be nice to extend the idea of executions_per_trial to CV folds, reporting the average eval metric to the tuner.
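A sketch of what this could look like today by overriding run_trial (the CVTuner name and the scikit-learn KFold usage are assumptions, not an existing KerasTuner feature; the score-reporting call varies between versions):

import numpy as np
import keras_tuner
from sklearn.model_selection import KFold

class CVTuner(keras_tuner.Tuner):
    def run_trial(self, trial, x, y, epochs=1, n_splits=5):
        val_losses = []
        for train_idx, val_idx in KFold(n_splits).split(x):
            model = self.hypermodel.build(trial.hyperparameters)
            model.fit(x[train_idx], y[train_idx], epochs=epochs, verbose=0)
            # Assumes the model is compiled with a single loss and no extra metrics.
            val_losses.append(model.evaluate(x[val_idx], y[val_idx], verbose=0))
        # Report the average metric across folds to the oracle.
        self.oracle.update_trial(trial.trial_id, {'val_loss': float(np.mean(val_losses))})
        self.save_model(trial.trial_id, model)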

hyperband doesn't work as intended

The sorting of the trials is incorrect for at least two reasons:

  1. Line 130: the values are not initialized, so the comparison fails in line 136.

  2. The sort in line 136 is in ascending order, so the candidates with the lowest score are always picked in line 139.

So the init/sort should take the metric direction into account.

hp.Int "max_value" not working as expected

While using keras-tuner, the hp.Int "max_value" does not appear to be reached.
E.g. I have the following, but 'num_units' maxes out at 96, not 128 as expected:
hp.Int('num_units',min_value=32,max_value=128,step=32)
If I change it to the following, I can reach 128:
hp.Int('num_units',min_value=32,max_value=128+1,step=32)
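For reference, the reported behaviour matches Python's half-open range semantics if the values are generated as range(min_value, max_value, step); the snippet below only illustrates that arithmetic, it is not the KerasTuner source:

print(list(range(32, 128, 32)))      # [32, 64, 96]      -> 128 never sampled
print(list(range(32, 128 + 1, 32)))  # [32, 64, 96, 128]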

Sequential models may cause issues with get_best_models()

tuner.get_best_models() fails with Sequential models, with the following exception

Traceback (most recent call last):
  File "test_issue_74.py", line 85, in <module>
    # For 
  File "test_issue_74.py", line 70, in test_issue_74_reproduction
    _ = tuner.get_best_models()
  File "/usr/local/google/home/jamlong/git/keras-tuner/kerastuner/engine/tuner.py", line 413, in get_best_models
    model.load_weights(best_checkpoint)
  File "/usr/local/google/home/jamlong/envs/py36_tfnightly/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training.py", line 182, in load_weights
    return super(Model, self).load_weights(filepath, by_name)
  File "/usr/local/google/home/jamlong/envs/py36_tfnightly/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/network.py", line 1364, in load_weights
    self._assert_weights_created()
  File "/usr/local/google/home/jamlong/envs/py36_tfnightly/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/network.py", line 1617, in _assert_weights_created
    self.name)
ValueError: Weights for model sequential_1 have not yet been created. Weights are created when the Model is first called on inputs or `build()` is called with an `input_shape`.

Conditional hyperparameter tuning bug

I'm using Keras-Tuner to run trials on a multi-layer NN with a variable number of layers and units within each layer, similar to the example in the README:

for i in range(hp.Int('num_layers', 2, 20)):
        model.add(layers.Dense(units=hp.Int('units_' + str(i),
                                            min_value=32,
                                            max_value=512,
                                            step=32),
                               activation='relu'))

The "units_#" hyperpameter should be conditional upon "num_layer" hyperparameter. E.g.if "num_layers=2" then I should see "units_0" and "units_1". However in my testing I'm not seeing proper correlation (num_layers doesn't match the number of units_# hyperparameter values set). Instead I see something like the following:

[Trial summary]

Hp values:
|-num_fc_layers: 2
|-num_units_0: ...
|-num_units_1: ...
|-num_units_2: ...
|-num_units_3: ...
|-num_units_4: ..

or

[Trial summary]

Hp values:
|-num_fc_layers: 5
|-num_units_0: ...
|-num_units_1: ...
|-num_units_2: ...

This effectively makes the summary of hyperparameters used in a trial useless.
I did some debugging of the code but haven't found the culprit yet.
I'm using "randomsearch" tuner and wrapped my model build in HyperModel class (rather than function method).

Could someone please take a look? Thank you.
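For what it's worth, one way to express this dependency explicitly is hp.conditional_scope, under the assumption that it behaves as documented (hyperparameters registered in an inactive scope still exist but are marked inactive for the trial). A sketch:

from tensorflow import keras

def build_model(hp):
    model = keras.Sequential()
    num_layers = hp.Int('num_layers', 2, 20)
    for i in range(20):
        # units_i is only active when num_layers makes layer i exist.
        with hp.conditional_scope('num_layers', list(range(max(i + 1, 2), 21))):
            units = hp.Int('units_' + str(i), 32, 512, step=32)
        if i < num_layers:
            model.add(keras.layers.Dense(units, activation='relu'))
    model.add(keras.layers.Dense(1))
    model.compile(optimizer='adam', loss='mse')
    return model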

Tuner stopping unexpectedly with "Oracle triggered exit"

I am using keras-tuner with the tf2.0_beta version. It works perfectly for a small number (5-10) of trials.
But when I use a large number of trials, it stops.

bs = batch_sizes[0]
tuner = RandomSearch(mymodel,
                     objective='val_accuracy',
                     max_trials=60,
                     executions_per_trial=3,
                     directory='./multiclass_classifier/training',
                     project_name='search_bs{}'.format(bs))

tuner.search(x=ds_train['pixels'], y=train_labels, batch_size=bs, epochs=10,
             validation_data=(ds_val['pixels'], val_labels))

The tuner optimizes for around 13 trials and then exits showing this error: "Oracle triggered exit".
I tried re-running it, but it exits every time.

Screenshot from 2019-07-18 11-02-10

Bayesian and Hyperband Oracles ignore `step` for Float

from tensorflow import keras
import kerastuner
import numpy as np

def build_model(hp):
    bias = hp.Float('bias', 0, 2, step=0.5)
    if bias not in {0, 0.5, 1, 1.5, 2}:
        raise ValueError('Found bias: ' + str(bias))

    class Bias(keras.layers.Layer):
        def call(self, x):
            return x + bias

    layer = Bias(input_shape=(1,))
    model = keras.Sequential([layer])
    model.compile('sgd', 'mse', metrics=['accuracy'])
    return model

x = np.zeros((10, 1))
y = np.ones((10, 1))

# Random search runs fine.
rs = kerastuner.tuners.RandomSearch(build_model, 'val_accuracy', 10)
rs.search(x, y, epochs=1, validation_data=(x, y))
# Bayesian returns floats w/o bucketizing, errors out.
b = kerastuner.tuners.BayesianOptimization(build_model, 'val_accuracy', 10)
b.search(x, y, epochs=1, validation_data=(x, y))
# So does hyperband.
hb = kerastuner.tuners.Hyperband(build_model, 'val_accuracy', 10)
hb.search(x, y, epochs=1, validation_data=(x, y))
