keras-io's Introduction

Keras.io documentation generator

This repository hosts the code used to generate the keras.io website.

Generating a local copy of the website

pip install -r requirements.txt
# Update Keras version to 3
pip install keras==3.0.2
cd scripts
python autogen.py make
python autogen.py serve

If you have Docker installed (you don't need the GPU version), you can run instead:

docker build -t keras-io . && docker run --rm -p 8000:8000 keras-io

The first run will take a while, since it needs to pull the image and install the dependencies, but subsequent runs will be much faster.

Another way of testing using Docker is via our Makefile:

make container-test

This command will build a Docker image with a documentation server and run it.

Call for examples

Are you interested in submitting new examples for publication on keras.io? We welcome your contributions! Please read the information below about adding new code examples.

We are currently interested in the following examples.

Fixing something in an existing code example

Fixing typos

If your fix is very simple, please send out a PR that simultaneously updates the .py, the .md, and the .ipynb files for the example.

More extensive fixes

For larger fixes, please send a PR that only includes the .py file, so we only update the other two files once the code has been reviewed and approved.

Adding a new code example

Keras code examples are implemented as tutobooks.

A tutobook is a script available simultaneously as a notebook, as a Python file, and as a nicely-rendered webpage.

Its source of truth (for manual editing and version control) is its Python script form, but you can also create one by starting from a notebook and converting it with the nb2py command.

Text cells are stored in markdown-formatted comment blocks. The first line (starting with """) may optionally contain a special annotation, one of the following (a short sketch follows the list):

  • shell: execute this block while prefixing each line with !.
  • invisible: do not render this block.
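
For illustration, here is a minimal sketch of what a tutobook body might look like (the cell contents and the package name are hypothetical; only the comment-block conventions come from this README):

"""
## Setup

This is a regular text cell; it is rendered as markdown on the website.
"""

"""shell
pip install -q some-dependency
"""

import keras  # this is a code cell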

The script form should start with a header with the following fields:

Title: (title)
Author: (could be `Authors`: as well, and may contain markdown links)
Date created: (date in yyyy/mm/dd format)
Last modified: (date in yyyy/mm/dd format)
Description: (one-line text description)
Accelerator: (could be GPU, TPU, or None)
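
For example, a filled-in header might look like this (all values below are illustrative, not taken from an actual example):

"""
Title: Denoising images with a convolutional autoencoder
Author: [Jane Doe](https://github.com/janedoe)
Date created: 2023/06/01
Last modified: 2023/06/15
Description: Training a convolutional autoencoder to remove noise from images.
Accelerator: GPU
"""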

To see examples of tutobooks, you can check out any .py file in examples/ or guides/.

Creating a new example starting from an ipynb file

  1. Save the ipynb file to local disk.
  2. Convert the file to a tutobook by running: (assuming you are in the scripts/ directory)
python tutobooks.py nb2py path_to_your_nb.ipynb ../examples/vision/script_name.py

This will create the file examples/vision/script_name.py.

  3. Open it, fill in the headers, and generally edit it so that it looks nice.

NOTE THAT THE CONVERSION SCRIPT MAY MAKE MISTAKES IN ITS ATTEMPTS TO SHORTEN LINES. MAKE SURE TO PROOFREAD THE GENERATED .py IN FULL. Alternatively, keep your lines reasonably sized (<90 characters) to start with, so that the script won't have to shorten them.

  4. Run python autogen.py add_example vision/script_name. This will generate an ipynb and markdown rendering of your example, creating files in examples/vision/ipynb, examples/vision/md, and examples/vision/img. Do not modify any of these files by hand; only the original Python script should ever be edited manually.
  5. Submit a PR adding examples/vision/script_name.py (only the .py, not the generated files). Get a review and approval.
  6. Once the PR is approved, add to the PR the files created by the add_example command. Then we will merge the PR. (An end-to-end sketch of these steps follows the list.)
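
Putting the steps above together, a hypothetical end-to-end session might look like this (the file names and paths are illustrative):

cd scripts
# Convert the notebook into a tutobook script:
python tutobooks.py nb2py ~/Downloads/my_example.ipynb ../examples/vision/my_example.py
# Edit ../examples/vision/my_example.py: fill in the header and proofread in full.
python autogen.py add_example vision/my_example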

Creating a new example starting from a Python script

  1. Format the script with black: black script_name.py
  2. Add the tutobook header.
  3. Put the script in the relevant subfolder of examples/ (e.g. examples/vision/script_name.py).
  4. Run python autogen.py add_example vision/script_name. This will generate an ipynb and markdown rendering of your example, creating files in examples/vision/ipynb, examples/vision/md, and examples/vision/img. Do not modify any of these files by hand; only the original Python script should ever be edited manually.
  5. Submit a PR adding examples/vision/script_name.py (only the .py, not the generated files). Get a review and approval.
  6. Once the PR is approved, add to the PR the files created by the add_example command. Then we will merge the PR.

Previewing a new example

You can locally preview what the example looks like by running:

cd scripts
python autogen.py add_example vision/script_name

(Assuming the tutobook file is examples/vision/script_name.py.)

NOTE THAT THIS COMMAND WILL ERROR OUT IF ANY CELL TAKES TOO LONG TO EXECUTE. In that case, make your code lighter/faster. Remember that examples are meant to demonstrate workflows, not train state-of-the-art models. They should stay very lightweight.

Then serve the website:

python autogen.py make
python autogen.py serve

And navigate to 0.0.0.0:8000/examples.

Read-only autogenerated files

The contents of the following folders should not be modified by hand:

  • site/*
  • sources/*
  • templates/examples/*
  • templates/guides/*
  • examples/*/md/*, examples/*/ipynb/*, examples/*/img/*
  • guides/md/*, guides/ipynb/*, guides/img/*

Modifiable files

These are the only files that should be edited by hand:

  • templates/*.md, with the exception of templates/examples/* and templates/guides/*
  • examples/*/*.py
  • guides/*.py
  • theme/*
  • scripts/*.py

keras-io's People

Contributors

0xrushi, 8bitmp3, aakashkumarnain, abheesht17, apoorvnandan, arig23498, divyashreepathihalli, fchollet, gabrieldemarmiesse, grasskin, haifeng-jin, hertschuh, jbischof, ksalama, kurianbenoy, lukewood, markdaoust, mattdangerw, mohantym, nkovela1, qlzh727, sachinprasadhs, sampathweb, sayakpaul, sitamgithub-msit, soumik12345, suryanarayanay, swghosh, tilakrayal, yashk2810


keras-io's Issues

NotImplementedError in examples/generative/vae.py

Dear all,

when running the script examples/generative/vae.py without any changes, I get this error message

raise NotImplementedError('When subclassing the `Model` class, you should')
NotImplementedError: When subclassing the `Model` class, you should implement a `call` method.

I have Python 3.6.9 and tensorflow-gpu 1.12.0, but I think this might be a general problem with this script.

Thanks a lot for your answers.

Best,

Wilhelm

Old documentation style was easier on the eyes.

New style documentation problems:

  1. The whole page is pure white, without the light gray and dark gray elements the old style had.
  2. Code blocks are solid black throughout. Just look at YouTube's nice gray color (in dark mode, of course).

If the new style is going to stay as is, please add a dark or dim theme.

Timeseries anomaly detection using an Autoencoder - Using Conv1DTranspose

Hi,

Is there any workaround for using Conv1DTranspose in the example, given that it is not in the stable version?
I tried to model it with Conv2DTranspose but it did not work.
Installing tf-nightly throws me an error: "ERROR: Could not find a version that satisfies the requirement tf-nightly (from versions: none) ERROR: No matching distribution found for tf-nightly" (Windows 10)
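
If it helps, one commonly suggested workaround (a sketch, not from the original example) is to emulate Conv1DTranspose with Conv2DTranspose by adding and then removing a dummy spatial axis:

import tensorflow as tf
from tensorflow.keras import layers

def conv1d_transpose(x, filters, kernel_size, strides=2, padding="same"):
    # (batch, steps, channels) -> (batch, steps, 1, channels)
    x = layers.Lambda(lambda t: tf.expand_dims(t, axis=2))(x)
    x = layers.Conv2DTranspose(filters, (kernel_size, 1), strides=(strides, 1), padding=padding)(x)
    # (batch, new_steps, 1, filters) -> (batch, new_steps, filters)
    return layers.Lambda(lambda t: tf.squeeze(t, axis=2))(x)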

[Guides] Pre-processing of images for Xception model

As part of the Transfer Learning tutorial on the website, the Xception model has been used.

Our raw images have a variety of sizes. In addition, each pixel consists of 3 integer
values between 0 and 255 (RGB level values). This isn't a great fit for feeding a
neural network. We need to do 2 things:
- Standardize to a fixed image size. We pick 150x150.
- Normalize pixel values between 0 and 1. We'll do this using a `Rescaling` layer as
part of the model itself.

x = keras.layers.experimental.preprocessing.Rescaling(1.0 / 255.0)(x)  # Scale inputs to [0, 1]

The preprocessing used to feed in data via the tf.data API, together with the new Rescaling layer, normalizes raw input pixels (0-255) into the range (0, 1). But according to keras_applications.xception.preprocess_input
(https://github.com/tensorflow/tensorflow/blob/476ec938b253a9479de09aab88dceec6f0a304ed/tensorflow/python/keras/applications/xception.py#L318-L320), the corresponding preprocess_input uses mode='tf' (https://github.com/tensorflow/tensorflow/blob/476ec938b253a9479de09aab88dceec6f0a304ed/tensorflow/python/keras/applications/imagenet_utils.py#L181-L184), which normalizes input pixels into the range (-1, 1) instead of (0, 1).

Calculating activations from pre-trained weights (for a deep feature extraction step) is prone to this kind of problem when the preprocessing input ranges differ. Is the example affected?
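
For concreteness, the two conventions differ as follows (a sketch based on the linked imagenet_utils code; the offset-based Rescaling variant at the end is my suggestion, not something from the tutorial):

import numpy as np
from tensorflow import keras

x = np.array([0.0, 127.5, 255.0])  # illustrative pixel values
# mode='tf' preprocessing maps [0, 255] -> [-1, 1]:
x_tf = x / 127.5 - 1.0
# whereas Rescaling(1.0 / 255.0) maps [0, 255] -> [0, 1]:
x_01 = x / 255.0
# A Rescaling layer that matches mode='tf' would be:
rescale = keras.layers.experimental.preprocessing.Rescaling(scale=1.0 / 127.5, offset=-1.0)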

'keras.layers' has no attribute 'Conv1DTranspose'

There was an error when I executed the Timeseries anomaly detection using an Autoencoder example.
The error message is as follows:
AttributeError: module 'tensorflow.python.keras.api._v1.keras.layers' has no attribute 'Conv1DTranspose'

Here is a link to the source that has the Conv1DTranspose class:
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/keras/layers/convolutional.py

But the docs below don't list Conv1DTranspose:
https://tensorflow.google.cn/api_docs/python/tf/keras/layers/

What should I do to use Conv1DTranspose correctly?

Thanks!

Questions on guide one "masking and padding with Keras"

Contributing doc on serving Keras model via REST API

Hi Keras team - I'm interested in contributing documentation on serving trained Keras models via REST API with bentoml/BentoML and deploying the model server to Kubernetes. BentoML is an open-source tool I've been working on, for high-performance model serving.

I'm thinking of contributing a guide similar to the documentation we are adding for FastAI here, based on one of BentoML's Keras example notebooks: https://docs.bentoml.org/en/latest/examples.html#tensorflow-keras, e.g. the BentoML Keras example project Movie Review Sentiment with BERT: https://github.com/bentoml/gallery/blob/master/tensorflow/bert/bert_movie_reviews.ipynb

I noticed this is not something in the "call for contributions" list, so before I started, I would love to understand if the topic "model deployment / productionizing trained model" is something Keras team would be interested in adding to https://keras.io/guides/ and if so, where does it fit best in there.

BERT examples with/without HuggingFace?

Regarding examples around BERT, should we use HuggingFace transformers and tokenizers? Fine-tuning and downstream tasks could then be shown in small examples like the one below.

# Assumed imports for this snippet (config and train_dataset are defined elsewhere):
import tensorflow as tf
from tensorflow.keras import layers, models, optimizers, metrics
from transformers import TFBertModel, BertConfig

# create model: Input > BERT > Dense
bert = TFBertModel.from_pretrained(config.MODEL_PATH, config=BertConfig())
input_ids = layers.Input(shape=(config.maxlen,), dtype=tf.int32)
sequence_output = bert(input_ids)[0][:, 0, :]  # [CLS] token representation
out = layers.Dense(1, activation="sigmoid")(sequence_output)
classifier = models.Model(inputs=input_ids, outputs=out)

classifier.compile(
    optimizers.Adam(lr=3e-5), loss="binary_crossentropy", metrics=[metrics.AUC()]
)
train_history = classifier.fit(train_dataset, steps_per_epoch=150, epochs=30)

And then we can show that doing the same thing on TPU is as easy as adding

tpu = tf.distribute.cluster_resolver.TPUClusterResolver()
tf.config.experimental_connect_to_cluster(tpu)
tf.tpu.experimental.initialize_tpu_system(tpu)
strategy = tf.distribute.experimental.TPUStrategy(tpu)

and creating model under strategy.scope().
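
i.e., roughly (a sketch; build_classifier is a hypothetical stand-in for the model-building code above):

with strategy.scope():
    classifier = build_classifier()  # create and compile the model inside the scope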

Does this feel like a good example?

AttributeError: module 'tensorflow.keras.preprocessing' has no attribute 'image_dataset_from_directory'

Following the sample doc here: https://github.com/keras-team/keras-io/blob/master/examples/vision/image_classification_from_scratch.py

and I'm getting the following error:
AttributeError: module 'tensorflow.keras.preprocessing' has no attribute 'image_dataset_from_directory'

tensorflow version:
2.2.0
keras version:
2.3.0-tf

I was looking at the source itself: https://github.com/tensorflow/tensorflow/tree/master/tensorflow/python/keras/preprocessing
and there it looks good

but locally I don't see image_dataset_from_directory, even though I have up-to-date versions.

Can anyone explain the gap?

I installed using pip on macOS

pip3.7 install -U tensorflow==2.2.0

[Retinanet] Some trivial questions, about tf ops and selecting the dataset

I am planning to make a notebook for RetinaNet. Can you please let me know what dataset I should train on? Should I use a well-known dataset, or can it be a toy dataset that's easier to train on, say within 30 minutes?
My other doubt: should I use only the ops provided by Keras (tf.keras.backend), or is using TF v2 ops fine as well?

Unable to call super(WGAN, self).compile() without providing optimizer in wgan-gp.py

I have implemented the wgan-gp.py almost exactly as described in https://keras.io/examples/generative/wgan_gp/

However, running it gives the following error:
in compile, super(WGAN, self).compile()
TypeError: compile() missing 1 required positional argument: 'optimizer'

Additionally, if I add an arbitrary optimizer like optimizer='Adam', then during the wgan.fit() call it raises "compile() got unexpected keyword argument 'optimizer'"
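
For reference, the pattern the example relies on looks roughly like this (a sketch; one plausible cause of the error, as an assumption on my part, is a Keras/TF version in which Model.compile() still requires the optimizer argument, whereas the example assumes a version where it is optional):

class WGAN(keras.Model):
    def compile(self, d_optimizer, g_optimizer, d_loss_fn, g_loss_fn):
        # Calling compile() with no optimizer only works in versions
        # where the `optimizer` argument is optional.
        super(WGAN, self).compile()
        self.d_optimizer = d_optimizer
        self.g_optimizer = g_optimizer
        self.d_loss_fn = d_loss_fn
        self.g_loss_fn = g_loss_fn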

How to load in my pre-trained BERT model?

How do I load my own pre-trained BERT model? I don't know how; please help me.

I want to change this code:
File: keras-io/examples/nlp/text_extraction_with_bert.py

slow_tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

This line loads "vocab.txt". How do I change it to load my own pre-trained BERT vocab?
tokenizer = BertWordPieceTokenizer("bert_base_uncased/vocab.txt", lowercase=True)

This line loads the pre-trained BERT and config.json. How do I change it to load my own pre-trained BERT and my own config file?
encoder = TFBertModel.from_pretrained("bert-base-uncased", force_download=True)
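
If it helps: as far as I know, from_pretrained also accepts a local directory, so one option is the following sketch (the paths are illustrative):

slow_tokenizer = BertTokenizer.from_pretrained("path/to/my_bert")  # directory containing vocab.txt
tokenizer = BertWordPieceTokenizer("path/to/my_bert/vocab.txt", lowercase=True)
encoder = TFBertModel.from_pretrained("path/to/my_bert")  # directory containing config.json and the TF weights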

Adding more GANs to the examples

There are only examples of WGAN and DCGAN in Keras' examples. I would like to add more GANs like LSGAN, ACGAN, InfoGAN, etc. Please let me know if I can submit a pull request regarding this.

Errors while running autogen.py

After cloning the repo, installing the requirements, and running python autogen.py make, I got this error:

Generating md sources
...Processing .
...Processing about
Traceback (most recent call last):
  File "autogen.py", line 900, in <module>
    keras_io.make_md_sources()
  File "autogen.py", line 108, in make_md_sources
    self.make_md_source_for_entry(self.master, path_stack=[], title_stack=[])
  File "autogen.py", line 510, in make_md_source_for_entry
    self.make_md_source_for_entry(entry, path_stack[:], title_stack[:])
  File "autogen.py", line 441, in make_md_source_for_entry
    template = template_file.read()
  File "C:\Users\Angel Melchor\AppData\Local\Programs\Python\Python37\lib\encodings\cp1252.py", line 23, in decode
    return codecs.charmap_decode(input,self.errors,decoding_table)[0]
UnicodeDecodeError: 'charmap' codec can't decode byte 0x81 in position 6350: character maps to <undefined>

Trying to add an example with python autogen.py add_example throws:

Traceback (most recent call last):
  File "autogen.py", line 907, in <module>
    working_dir=get_working_dir(sys.argv[3]) if len(sys.argv) == 4 else None,
  File "autogen.py", line 233, in add_example
    tutobooks.py_to_nb(py_path, nb_path, fill_outputs=False)
  File "C:\Users\Angel Melchor\GitHub\keras-io\scripts\tutobooks.py", line 121, in py_to_nb
    validate(py)
  File "C:\Users\Angel Melchor\GitHub\keras-io\scripts\tutobooks.py", line 323, in validate
    f = open(fpath, "w")
FileNotFoundError: [Errno 2] No such file or directory: '/tmp/5020644.py'

Working on a fresh python 3.7.7 environment.

`ValueError: The model cannot be compiled because it has no loss to optimize.`

When I checked the tutorial Customizing what happens in fit() (really important, please go through this OFFICIAL tutorial) and ran the code, I got the following error:

WARNING:tensorflow:Output dense_1 missing from loss dictionary. We assume this was done on purpose. The fit and evaluate APIs will not be expecting any data to be passed to dense_1.
Traceback (most recent call last):
  File "/Users/yaosiyuan/PycharmProjects/tensorflow/adversarial_training/customize.py", line 142, in <module>
    model.compile(optimizer="adam")
  File "/Users/yaosiyuan/.pyenv/versions/anaconda3-5.3.1/envs/tflow_dl/lib/python3.7/site-packages/tensorflow_core/python/training/tracking/base.py", line 457, in _method_wrapper
    result = method(self, *args, **kwargs)
  File "/Users/yaosiyuan/.pyenv/versions/anaconda3-5.3.1/envs/tflow_dl/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training.py", line 373, in compile
    self._compile_weights_loss_and_weighted_metrics()
  File "/Users/yaosiyuan/.pyenv/versions/anaconda3-5.3.1/envs/tflow_dl/lib/python3.7/site-packages/tensorflow_core/python/training/tracking/base.py", line 457, in _method_wrapper
    result = method(self, *args, **kwargs)
  File "/Users/yaosiyuan/.pyenv/versions/anaconda3-5.3.1/envs/tflow_dl/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training.py", line 1653, in _compile_weights_loss_and_weighted_metrics
    self.total_loss = self._prepare_total_loss(masks)
  File "/Users/yaosiyuan/.pyenv/versions/anaconda3-5.3.1/envs/tflow_dl/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training.py", line 1752, in _prepare_total_loss
    raise ValueError('The model cannot be compiled '
ValueError: The model cannot be compiled because it has no loss to optimize.

I read the source code of tf.keras.Model and found that the class doesn't have a method called train_step, and no code invokes train_step.

Environment:

python: Python 3.7.7
tensorflow: 2.0.0
keras: 2.2.4-tf
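
A likely explanation (my inference, not stated in the issue): the overridable train_step hook was only added to keras.Model in a later TF release than 2.0.0, so on this version the tutorial's pattern cannot work. For reference, a sketch of the pattern the tutorial describes:

import tensorflow as tf
from tensorflow import keras

class CustomModel(keras.Model):
    def train_step(self, data):
        x, y = data
        with tf.GradientTape() as tape:
            y_pred = self(x, training=True)
            # Uses the loss passed to compile()
            loss = self.compiled_loss(y, y_pred)
        grads = tape.gradient(loss, self.trainable_variables)
        self.optimizer.apply_gradients(zip(grads, self.trainable_variables))
        self.compiled_metrics.update_state(y, y_pred)
        return {m.name: m.result() for m in self.metrics}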

End-to-end OCR example contribution

The End-to-end OCR implementation seems to require some updates as per the call_for_contributions.md file.

@joshuacwnewton and I are working on this example currently, and this issue partially serves to confirm that no one else is working on this.

Although pull request #91 adds a new OCR example to Keras, intentionally obfuscated text (as found in captchas) and standard text OCR rely on different data and optimizations, which warrants a separate OCR implementation.

Image classification from scratch example

Hi, I don't have much experience with Python, TensorFlow, or Keras.
I wanted to learn more about Keras by working through an example that caught my attention.

I downloaded the notebook from the Colab linked at https://keras.io/examples/vision/image_classification_from_scratch/ and imported it into Jupyter, based on the official TF container with TF 2.1.0:

docker run --rm -it -p 8888:8888 tensorflow/tensorflow:latest-py3-jupyter

All I modified in this container was

apt-get install unzip

to allow downloading the example image set.

Unfortunately on this step:

image_size = (180, 180)
batch_size = 32

train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    "PetImages",
    validation_split=0.2,
    subset="training",
    seed=1337,
    image_size=image_size,
    batch_size=batch_size,
)
val_ds = tf.keras.preprocessing.image_dataset_from_directory(
    "PetImages",
    validation_split=0.2,
    subset="validation",
    seed=1337,
    image_size=image_size,
    batch_size=batch_size,
)

I have this error:

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-6-818e7bae4d07> in <module>
      2 batch_size = 32
      3 
----> 4 train_ds = tf.keras.preprocessing.image_dataset_from_directory(
      5     "PetImages",
      6     validation_split=0.2,

AttributeError: module 'tensorflow_core.keras.preprocessing' has no attribute 'image_dataset_from_directory'

Just to be sure, I also tried pip install keras in the container, but I didn't see any difference.

I'm not sure if it's my fault or a mistake in the code.
The docs should probably be more detailed about the environment (like the TF or Keras version) in which the examples should be launched. Generally, as a newcomer, I didn't see any Keras install instructions on the official website at all ☹️

version of keras(tensorflow) [New Document]

Hi,
Could you please tell us which version of Keras the new guides and documents are published for?
I'm pretty sure keras.preprocessing.image_dataset_from_directory (among others) is not available in the current version of Keras.

This function is in the tf-nightly version.

Regarding adding examples of semi-supervised image classification with GANs

Now that the fields of semi-supervised and self-supervised learning are becoming more and more important, I wish to cover an example showing how to train GANs for semi-supervised image classification (can be referred to as weakly supervised learning too).

I have a Colab Notebook that might be useful. If this proposal is accepted I will, of course, adhere to the Keras blog format. I used tf.GradientTape (and did not override train_step) as I found it more readable that way.

Note: In order to show the full potential of using GANs in this context, one might need to train longer than shown here.

Adding examples for StyleGAN / StyleGAN2

Hey,

I see that StyleGAN examples are required as mentioned in the contribution.md

@ParthivC and I would like to work on that as part of our MLH Fellowship. We see that there's an open issue #116, but there has been no activity there for the past few weeks and it has not been claimed yet. So is it fine if we work on it?

OCR end-to-end issue on dataset

What dataset should we use for the OCR-with-CTC-loss model? Should the data be generated on the fly while training, or should we use a well-established dataset?

Keras.io lacks an easy way to take ownership of an issue (e.g. contributing an example)

Issue

Requests for code examples are currently managed by call_for_contributions.md. This makes it difficult to indicate when someone is currently working on a contribution, and may result in conflicts.

Issue #102 is also a good example of the need for a fix.

Proposed solution

It might be helpful to instead create GitHub issues for each of the requested examples. (Possibly with a unique tag to distinguish them from other issues, which could then be referred to using a link in the Keras.io documentation.) That way, people can comment/assign themselves to indicate that they've started working on an example. This would also allow for discussion about implementation details prior to a PR.

Thanks!

Steps to run the autogen script are not uniform

As stated in PR #21, I couldn't run the autogen command to generate all the types of documentation from the templates.

Attempt

To try and run the autogen, I did the following steps:

  1. Created a virtualenv and got a shell over there.
  2. Got the list of dependencies that were declared in the header of autogen.py and installed them:
pip install pygments jinja2 markdown requests mdx_truly_sane_lists
  3. Running it, I get an error of no module named sphinx:
ModuleNotFoundError: No module named 'sphinx'
  4. So I installed it with pip install sphinx and ran it again. The same error came up for the black module, so I installed it as well.

  5. At this point, python3 autogen.py make starts to produce some output. However, at api/models we find another missing module, tensorflow:

...Processing api/models
Traceback (most recent call last):
  File "/Users/raz/web/keras-io/scripts/docstrings.py", line 113, in import_object
    last_object_got = importlib.import_module(".".join(seen_names))
  File "/Users/raz/web/keras-io/venv/lib/python3.7/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
  File "<frozen importlib._bootstrap>", line 983, in _find_and_load
  File "<frozen importlib._bootstrap>", line 965, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'tensorflow'

Installed it, next.

  6. This is when I hit a wall: running it again, it stopped on api/preprocessing, which needs more than just module installation.
...Processing api/preprocessing
Traceback (most recent call last):
  File "/Users/raz/web/keras-io/scripts/docstrings.py", line 113, in import_object
    last_object_got = importlib.import_module(".".join(seen_names))
  File "/Users/raz/web/keras-io/venv/lib/python3.7/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
  File "<frozen importlib._bootstrap>", line 983, in _find_and_load
  File "<frozen importlib._bootstrap>", line 965, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'tensorflow.keras.preprocessing.image_dataset_from_directory'

Installing keras didn't help.

Follow-up discussion

  • The process of installation could probably use a Pipfile from pipenv. Installation could be as easy as a pipenv sync.
  • The process of generating the files should be separated from the process of checking if the code of the files is correct. I didn't stop to read the code, but I assume that's it, since normally you wouldn't need tensorflow & keras to build a static page.
  • With that made, we could even go a step further and make the CI verify that autogen.py was run before merging the PR by adding a python autogen.py verify. With that, the master branch will never have a mismatch between the templates and the generated documentation.

Should we make a CI?

Or is there already a CI inside Google that we can expose to community contributors?

If there isn't one yet, is there any preference concerning the service that we should use?

Proposal: Update link to discussion about Reuters Dataset topic labels

The section about the Reuters dataset at keras.io ( https://keras.io/api/datasets/reuters/ ) contains a link to this topic discussion at the Keras repository, where I was providing the results of my investigation into the topic label mapping:

keras-team/keras#12072

As the question about the label mapping pops up occasionally, and I found that the results are now even used in some publications, I collected all the code and data from issue keras-team/keras#12072 here, and I'd like to propose updating the link at keras.io accordingly:

https://github.com/SteffenBauer/KerasTools/tree/master/Reuters_Analysis

Timeseries forecasting with LSTM for weather prediction example contribution

I see that the 'Timeseries forecasting with LSTM for weather prediction' example is requested by the repository.

My team in the MLH Fellowship is working on this. Here is the link to the Jupyter notebook: https://github.com/MLH-Fellowship/keras-io/blob/example/timeseries/examples/timeseries/ipynb/timeseries_weather_forecasting.ipynb

I just want to confirm that no one else is working on this.

Also, as per our understanding, the requirements include a single LSTM layer and nice visualizations. We are still figuring out the timeseries_dataset_from_array import due to dependency issues.

Reproducing DQN Breakout result

Thanks for the efforts in writing the RL code examples!

I tried to reproduce the results by running the deep Q-network example for Breakout in my Colab and training my agent with an RTX 2060 for a whole day, but the reward didn't increase even after 10 million frames:

running reward: 0.31 at episode 302, frame count 10000
running reward: 0.32 at episode 594, frame count 20000
running reward: 0.28 at episode 884, frame count 30000
...
running reward: 5.94 at episode 35070, frame count 2620000
running reward: 5.07 at episode 35122, frame count 2630000
running reward: 5.35 at episode 35166, frame count 2640000
...
running reward: 0.31 at episode 276916, frame count 11610000
running reward: 0.21 at episode 277225, frame count 11620000
running reward: 0.19 at episode 277536, frame count 11630000

My environment settings:

OS/Driver/Lib Version
Ubuntu 18.04.4 LTS
GPU Driver 450.36.06
CUDA 11.0
Tensorflow 2.2.0
Keras 2.3.1
Note that tf-nightly has been installed.

The document mentions that 10 million frames should be sufficient. Did I miss something?

Any help would be appreciated!

Update logs for VAE example for overriding train_step

I am learning how to override train_step from the guides. It seems to me that instead of returning the values here:
https://github.com/keras-team/keras-io/blob/master/examples/generative/vae.py#L91

We need to update some metrics like this:

self.loss_metric.update_state(total_loss)
self.reconstruction_loss_metric.update_state(reconstruction_loss)
self.kl_loss_metric.update_state(kl_loss)

return {
    "loss": total_loss,
    "reconstruction_loss": reconstruction_loss,
    "kl_loss": kl_loss,
}

I might be wrong, but I think that if we do not do this, model.fit with the verbose option just prints the loss for the final batch of the epoch instead of the average loss over the whole epoch. It would also be great if test_step were added to and explained in these guides.
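
For reference, returning the tracker results rather than the raw per-batch values is what makes the progress bar show running epoch averages; a sketch of that variant of the return block inside train_step, assuming the three metrics above are Mean metrics defined on the model:

return {
    "loss": self.loss_metric.result(),
    "reconstruction_loss": self.reconstruction_loss_metric.result(),
    "kl_loss": self.kl_loss_metric.result(),
}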

tokenizer.encode.offsets throws error

for idx, (start, end) in enumerate(tokenized_context.offsets):

AttributeError: 'list' object has no attribute 'offsets'

On Spyder / Python 3.7 on Windows.

Thanks for the fascinating topic! Hope to get it working.

Supplying validation data to the fit method of a subclassed model with a train_step (e.g. the VAE guide)

I love the new guides!

I'm following https://github.com/keras-team/keras-io/blob/master/examples/generative/ipynb/vae.ipynb using Colab.

I made the following minimal modifications to the data acquisition and the VAE.fit code:

(x_train, _), (x_valid, _) = keras.datasets.mnist.load_data()
x_train = np.expand_dims(x_train, -1).astype("float32") / 255
x_valid = np.expand_dims(x_valid, -1).astype("float32") / 255

vae = VAE(encoder, decoder)
vae.compile(optimizer=keras.optimizers.Adam())
vae.fit(x_train, epochs=2, steps_per_epoch=3, batch_size=128, validation_data=(x_valid, x_valid))  # small number of steps to quickly demonstrate the error

I receive the following error:
NotImplementedError: When subclassing the `Model` class, you should implement a `call` method.

When validation data is not supplied as in the guide, the fit method will not raise this error.

I tried adding a test_step but that did not cure the error.

What is your recommended best practice for supplying validation data to the fit method with a subclassed Model that uses a train_step?
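
One workaround sketch (my suggestion, not from the guide): implementing call on the subclassed model lets fit() run its default evaluation path on validation_data. Something like this, added to the VAE class:

class VAE(keras.Model):
    # ... existing __init__, metrics, and train_step from the guide ...
    def call(self, inputs):
        # With `call` defined, the default test_step can run the model on validation data.
        z_mean, z_log_var, z = self.encoder(inputs)
        return self.decoder(z)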

Sampling of alpha in wgan_gp.py

Hi,

in the gradient_penalty function in the WGAN class, alpha is sampled from a normal distribution (tf.random.normal) with mean 0.0 and std 1.0.

In the "Improved Training of Wasserstein GANs" paper/code, and all other implementations I have seen, this is sampled from a uniform distribution with min 0.0 and max 1.0.

I cannot find any discussion of sampling this from distributions other than the one proposed, but it clearly still works. I just wonder whether anyone can explain the motivation for this deviation from the original model?
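
For comparison, a sketch of the two variants (the batch size and tensor shape are illustrative; the uniform version is what the paper describes):

import tensorflow as tf

batch_size = 64  # illustrative
# Paper and most implementations: interpolation factor uniform in [0, 1]
alpha = tf.random.uniform([batch_size, 1, 1, 1], minval=0.0, maxval=1.0)
# keras.io example as reported in this issue: normal with mean 0.0, std 1.0
alpha = tf.random.normal([batch_size, 1, 1, 1], mean=0.0, stddev=1.0)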

Is running the notebook necessary?

When executing python autogen.py add_example nlp/script_name, is it necessary to run the whole notebook again? Some of the transfer learning use cases (for instance BERT and GPT-2) are quite slow and have an enormous appetite for compute. This will make running the add_example command harder.

vae.py

ValueError: The model cannot be compiled because it has no loss to optimize.
