
emilwallner / screenshot-to-code

16.2K stars · 537 watchers · 1.5K forks · 50.43 MB

A neural network that transforms a design mock-up into a static website.

License: Other

Jupyter Notebook 38.73% · Python 12.34% · HTML 40.73% · CSS 8.19%
keras deep-learning seq2seq encoder-decoder lstm floydhub machine-learning cnn cnn-keras jupyter-notebook


screenshot-to-code's People

Contributors

bhageena · dfarren · emilwallner · felixonmars · masa-shin · pkellz


screenshot-to-code's Issues

Post all issues related to FloydHub here:

I am trying to install floyd on Mac:
pip install floyd-cli
fails. I tried it with sudo as well; it still fails.

I'm interested in development. Let me know how to set up the environment on Mac.
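
A hedged workaround sketch (assumption: the failure comes from installing into the system Python, which sudo only papers over; the actual pip error isn't shown above): install floyd-cli into a fresh virtual environment instead.

import subprocess, sys

# Create an isolated environment and install floyd-cli into it,
# avoiding the permission problems of the system site-packages.
subprocess.check_call([sys.executable, "-m", "venv", "floyd-env"])
subprocess.check_call(["floyd-env/bin/pip", "install", "--upgrade", "floyd-cli"])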

Question on result

The program ran without errors, but the result was far from what I expected (maybe I did something wrong?).

Here is the image I used: [deleted]

And here is the result: https://jsfiddle.net/b1zt7vsh/

What I did:

  1. Put the image under the Screenshot-to-code-in-Keras/floydhub/HTML/resources/images folder.
  2. Rewrote line 131 of HTML.ipynb (changed the path to the image).
  3. Rewrote line 190 of HTML.ipynb (set epochs to 300).
  4. Ran all cells of HTML.ipynb.

The value of the loss function was below 0.001.

I would greatly appreciate it if you could tell me whether what I did was correct.

Errors on Python3 in "Hello World"

After executing the 1st block:

/usr/local/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
  from ._conv import register_converters as _register_converters
Using TensorFlow backend.
/usr/local/Cellar/python3/3.6.4_2/Frameworks/Python.framework/Versions/3.6/lib/python3.6/importlib/_bootstrap.py:219: RuntimeWarning: compiletime version 3.5 of module 'tensorflow.python.framework.fast_tensor_util' does not match runtime version 3.6
  return f(*args, **kwds)

After executing the 3rd block:

---------------------------------------------------------------------------
ImportError                               Traceback (most recent call last)
<ipython-input-3-18d491e6df48> in <module>()
      2 images = []
      3 for i in range(2):
----> 4     images.append(img_to_array(load_img('screenshot.jpg', target_size=(224, 224))))
      5 images = np.array(images, dtype=float)
      6 # Preprocess input for the VGG16 model

/usr/local/lib/python3.6/site-packages/keras/preprocessing/image.py in load_img(path, grayscale, target_size, interpolation)
    343     """
    344     if pil_image is None:
--> 345         raise ImportError('Could not import PIL.Image. '
    346                           'The use of `array_to_img` requires PIL.')
    347     img = pil_image.open(path)

ImportError: Could not import PIL.Image. The use of `array_to_img` requires PIL.

I guess this error is the reason the example isn't working.
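
For reference, load_img delegates image decoding to Pillow, so the ImportError above usually just means Pillow is missing from the Python 3 environment. A minimal check, assuming pip install Pillow has been run first:

# After `pip install Pillow`, both imports should succeed and the
# original cell should load the screenshot without the ImportError.
from PIL import Image
from keras.preprocessing.image import load_img, img_to_array

img = img_to_array(load_img('screenshot.jpg', target_size=(224, 224)))
print(img.shape)  # (224, 224, 3)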

Bootstrap training with large sample causes ResourceExhaustedError: OOM

Hi all,

Specs:
Core i7-6700, RAM: DDR4 16 GB, GPU: GTX 1060 6 GB VRAM

I know this is not enough to train the full dataset, but I can't use FloydHub at the moment.

I'm trying to train my model with the dataset provided here, and I'm using bootstrap_generator.ipynb to load and process the samples in batches.

Here is the error log I get when training:

ResourceExhaustedError                    Traceback (most recent call last)
<ipython-input-8-deb30b24a084> in <module>()
----> 1 model.fit_generator(data_generator_simple(texts, train_features, 1, 150), steps_per_epoch=1400, epochs=50, callbacks=callbacks_list, verbose=1, max_queue_size=1)

d:\uzair\screenshot-to-code-in-keras\venv\lib\site-packages\keras\legacy\interfaces.py in wrapper(*args, **kwargs)
     89                 warnings.warn('Update your `' + object_name +
     90                               '` call to the Keras 2 API: ' + signature, stacklevel=2)
---> 91             return func(*args, **kwargs)
     92         wrapper._original_function = func
     93         return wrapper

d:\uzair\screenshot-to-code-in-keras\venv\lib\site-packages\keras\engine\training.py in fit_generator(self, generator, steps_per_epoch, epochs, verbose, callbacks, validation_data, validation_steps, class_weight, max_queue_size, workers, use_multiprocessing, shuffle, initial_epoch)
   1424             use_multiprocessing=use_multiprocessing,
   1425             shuffle=shuffle,
-> 1426             initial_epoch=initial_epoch)
   1427 
   1428     @interfaces.legacy_generator_methods_support

d:\uzair\screenshot-to-code-in-keras\venv\lib\site-packages\keras\engine\training_generator.py in fit_generator(model, generator, steps_per_epoch, epochs, verbose, callbacks, validation_data, validation_steps, class_weight, max_queue_size, workers, use_multiprocessing, shuffle, initial_epoch)
    189                 outs = model.train_on_batch(x, y,
    190                                             sample_weight=sample_weight,
--> 191                                             class_weight=class_weight)
    192 
    193                 if not isinstance(outs, list):

d:\uzair\screenshot-to-code-in-keras\venv\lib\site-packages\keras\engine\training.py in train_on_batch(self, x, y, sample_weight, class_weight)
   1218             ins = x + y + sample_weights
   1219         self._make_train_function()
-> 1220         outputs = self.train_function(ins)
   1221         if len(outputs) == 1:
   1222             return outputs[0]

d:\uzair\screenshot-to-code-in-keras\venv\lib\site-packages\keras\backend\tensorflow_backend.py in __call__(self, inputs)
   2659                 return self._legacy_call(inputs)
   2660 
-> 2661             return self._call(inputs)
   2662         else:
   2663             if py_any(is_tensor(x) for x in inputs):

d:\uzair\screenshot-to-code-in-keras\venv\lib\site-packages\keras\backend\tensorflow_backend.py in _call(self, inputs)
   2629                                 symbol_vals,
   2630                                 session)
-> 2631         fetched = self._callable_fn(*array_vals)
   2632         return fetched[:len(self.outputs)]
   2633 

d:\uzair\screenshot-to-code-in-keras\venv\lib\site-packages\tensorflow\python\client\session.py in __call__(self, *args)
   1452         else:
   1453           return tf_session.TF_DeprecatedSessionRunCallable(
-> 1454               self._session._session, self._handle, args, status, None)
   1455 
   1456     def __del__(self):

d:\uzair\screenshot-to-code-in-keras\venv\lib\site-packages\tensorflow\python\framework\errors_impl.py in __exit__(self, type_arg, value_arg, traceback_arg)
    517             None, None,
    518             compat.as_text(c_api.TF_Message(self.status.status)),
--> 519             c_api.TF_GetCode(self.status.status))
    520     # Delete the underlying status object from memory otherwise it stays alive
    521     # as there is a reference to status from this from the traceback due to

ResourceExhaustedError: OOM when allocating tensor with shape[131072,1024] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
	 [[Node: training/RMSprop/mul_56 = Mul[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:GPU:0"](RMSprop/lr/read, training/RMSprop/clip_by_value_18)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

One thing I noted is that this happens at exactly the 97th sample (i.e., when the generator sends the 97th sample for processing). Even if I change the generator to send the 97th sample first, it crashes with the same error, so I'm guessing there is something wrong with that sample. Moreover, if we skip this sample, there are a few others that cause the issue too.

Edit: If I use bootstrap.ipynb to train with the same sample, it works just fine. The only reason I am using bootstrap_generator is that I cannot load everything into memory at once and therefore have to yield chunks from a generator.
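
One hedged thing to try on a 6 GB card (not a confirmed fix for the specific sample): stop TensorFlow from pre-allocating all GPU memory and let it grow on demand, using the Keras 2 / TF 1.x API visible in the traceback above:

import tensorflow as tf
from keras import backend as K

# Allocate GPU memory incrementally instead of grabbing it all up front;
# this sometimes avoids bfc-allocator OOMs on smaller cards.
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
K.set_session(tf.Session(config=config))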

How can I use this to generate the ideal HTML code (the code shown in readme.md)?

(My English is not very good and may be a bit confusing, but I hope you can help me.)
I found my output was similar to the questioner in issue #20, but due to my poor programming skills I can't understand your answer to #20. I have a few questions:

  1. Can I simply run bootstrap.ipynb to get the ideal HTML code (the code shown in readme.md)? If not, how can I achieve this?
  2. Where can I download the model I need (to help me generate the ideal HTML code)?
  3. Can I achieve my goal without extra training? If not, how should I train?

Your kind consideration is highly appreciated. Wish you a good day!

OCR and font detection: which of these three approaches is better?

I want to train the model on font detection and OCR using the links below, but I'm not sure which of these three options is the best way to do it:

  1. Train on top of the existing model, i.e., add the new data to the existing dataset.
  2. Train the networks independently but combine the outputs, like an ensemble model.
  3. Build a brand-new neural network using the logic and algorithms of the other networks.

OCR
https://github.com/Tony607/keras-image-ocr/blob/master/image-ocr.ipynb
https://mc.ai/how-to-train-a-keras-model-to-recognize-text-with-variable-length/

Fonts:
https://tsprojectsblog.wordpress.com/2017/08/19/using-a-neuronal-network-for-font-character-detection-in-images/
https://tanmayshah2015.wordpress.com/2015/12/01/synthetic-font-dataset-generation/

Bootstrap version: Adding more flexibility to the DSL vocabulary

The DSL is simple and straightforward: one token per matching HTML snippet. However, in a real-world scenario, many classes would overlap and intermix with each other, e.g.:

<div class="col-md-3">{}</div>

could be:

<div class="col-md-3 bg-primary">{}</div>

or

<div class="col-md-3 border border-primary">{}</div>

It seems the structure of the DSL requires a very large vocabulary with thousands of tokens, and even that would not solve the flexibility problem. How would you approach solving this?

Is an emmet-style vocabulary like the one below possible?

{
"quadruple.border+border-primary": "<div class=\"col-lg-3 border border-primary \">\n{}\n</div>\n"
}
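
Something in that direction seems workable. A minimal sketch of expanding such compound tokens at compile time (the token names and mappings below are hypothetical, following the example above):

# Hypothetical compound tokens: one DSL token carries several utility classes.
VOCAB = {
    "quadruple.border+border-primary":
        '<div class="col-lg-3 border border-primary">\n{}\n</div>\n',
    "quadruple.bg-primary":
        '<div class="col-md-3 bg-primary">\n{}\n</div>\n',
}

def expand(token, children_html=""):
    # Look the token up and splice the rendered children into the {} slot.
    return VOCAB[token].format(children_html)

print(expand("quadruple.border+border-primary", "<p>content</p>"))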

Running bootstrap.ipynb on GPU on floydhub gives MemoryError

I'm using FloydHub with a GPU, but when I run bootstrap.ipynb to train on the dataset emilwallner/datasets/imagetocode/2:data, I get the out-of-memory error below.
Also, what is the difference between bootstrap.ipynb and bootstrap_generator.ipynb?

---------------------------------------------------------------------------
MemoryError Traceback (most recent call last)
in ()
30 return np.array(X), np.array(y), np.array(image_data)
31
---> 32 X, y, image_data = preprocess_data(train_sequences, train_features)

in preprocess_data(sequences, features)
28 X.append(in_seq[-48:])
29 y.append(out_seq)
---> 30 return np.array(X), np.array(y), np.array(image_data)
31
32 X, y, image_data = preprocess_data(train_sequences, train_features)

MemoryError:
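
On the second question, hedged and going by the file names: bootstrap.ipynb builds every (input-sequence, next-token, image) triple in RAM at once via preprocess_data, which is what the MemoryError above is about, while bootstrap_generator.ipynb feeds the same triples to fit_generator in batches. A minimal sketch of the generator idea (names assumed, not the repo's exact code):

import numpy as np
from keras.preprocessing.sequence import pad_sequences
from keras.utils import to_categorical

def batch_generator(sequences, features, batch_size, max_sequence, vocab_size):
    # Yield (inputs, targets) batches forever instead of building full arrays,
    # so memory use scales with batch_size, not with dataset size.
    while True:
        X, y, image_data = [], [], []
        for img_no, seq in enumerate(sequences):
            for i in range(1, len(seq)):
                in_seq = pad_sequences([seq[:i]], maxlen=max_sequence)[0]
                out_seq = to_categorical([seq[i]], num_classes=vocab_size)[0]
                X.append(in_seq[-48:])
                y.append(out_seq)
                image_data.append(features[img_no])
                if len(X) == batch_size:
                    yield [np.array(image_data), np.array(X)], np.array(y)
                    X, y, image_data = [], [], []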

Bootstrap predicting the same output for all inputs

So I'm using the datasets from the paper with bootstrap_generator, and I have a loss of 0.1212, but the model predicts the same output for every input, even on the training data. Should I train for more epochs? At what stage does it start predicting different outputs?

Shape Incompatible

I'm attempting to train the dataset for HTML.

I downloaded the files from: https://www.floydhub.com/emilwallner/datasets/html_models
It failed at the IPython part:

%%javascript
IPython.OutputArea.prototype._should_scroll = function(lines) {
    return false;
}

That was only because I don't have IPython installed.

I tried taking the last org-weights-epoch-XXXX---loss-X.XXXX.hdf5
with HTML_preloaded_weights.py and got the error below. Am I doing this correctly?

  File "/home/ubuntu/anaconda3/envs/py36/lib/python3.6/site-packages/tensorflow_core/python/framework/tensor_shape.py", line 1110, in assert_is_compatible_with
    raise ValueError("Shapes %s and %s are incompatible" % (self, other))
ValueError: Shapes (436, 200) and (441, 200) are incompatible
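
A hedged reading of the error: an (N, 200) tensor looks like an embedding matrix whose first dimension is the vocabulary size, so the checkpoint was likely trained with a 441-token vocabulary while the current tokenizer produces 436. One way to confirm is to dump the shapes stored in the checkpoint:

import h5py

# Print every dataset (layer weight) in the checkpoint with its shape,
# to compare against the shapes of the freshly built model.
with h5py.File('org-weights-epoch-XXXX---loss-X.XXXX.hdf5', 'r') as f:
    f.visititems(lambda name, obj: print(name, getattr(obj, 'shape', '')))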

How can I use my own image?

In /local/Bootstrap/test_model_accuracy.ipynb, fourth block: train_features, texts = load_data(dir_name)

The image data has already been preprocessed; it's not a raw image. How can I preprocess my raw image into the features the model accepts as input?
Can you show me the code to preprocess a raw image file? Thanks.
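
A minimal sketch of what the preprocessing likely amounts to (assumption: the Bootstrap notebooks resize screenshots to 256x256 and scale pixels to [0, 1]; check load_data in the repo for the exact steps):

import numpy as np
from keras.preprocessing.image import load_img, img_to_array

# Resize a raw screenshot and scale it the way the training features were,
# producing a batch of one image for model.predict.
img = img_to_array(load_img('my_screenshot.png', target_size=(256, 256)))
features = np.array([img]) / 255.0
print(features.shape)  # (1, 256, 256, 3)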

Trying to load weight data for model of 7 layers

I've been looking everywhere for weight data that your 7-layer model would support. With each of the files I tried, I get the error: "You are trying to load a weight file containing 8 layers into a model with 7 layers". I also tried adjusting the model to work with 8 layers, but had no luck.

It would be of great help if you could provide a direct link to weights that this model supports, or a direct link to the code for the 8-layer model.

[screenshot]

get error

jupyter notebook -> ModuleNotFoundError: No module named 'pysqlite2'. My Python version is 3.6, and the pysqlite2 module only works on Python 2.x.

Help with OCR, please

Hey Emil,

thanks a lot for sharing the code. This is extremely helpful!

I know this question has been asked before, but I thought I'd ask one more time, as you might have spent more time on it by now.

Were you able to make a model that does both OCR and markup? My main challenge lies in the fact that your approach relies on a vocabulary, so it seems to require a priori knowledge of the vocabulary.

On my quest to make it generalizable, I tried the following:

  • only use the printable characters and the HTML tags
  • changed fit to the fit_generator method of Model (otherwise it wouldn't fit in RAM); I use an NVIDIA P100 and a training batch size of 2 or 3 for RAM reasons
  • grayscale=True for load_img (also for RAM reasons; my text is just black on a white background)
  • increased the max length of each sequence (as it's now characters, not words)
  • didn't change the rest of the code
  • for now, I'm training with about 350 images, all generated automatically from HTML snippets. The snippets only contain a few tags (<p>, <h1>, ...). I also don't pass the <body> tag to the training, as it is the same for all images.

With this, I'm not able to bring the model loss below ~1.8, even after 100 epochs.

Do you have any leads, ideas on how to go forward?

Thanks!

feat: Add support for JavaScript

Hi all,

Would there be a way to add support for a JavaScript runtime, in order to give a static website more dynamism?

Can I use a new picture to generate code?

I uploaded a picture of mine to the path ./resources/images:
[image]

Then I restarted the kernel and ran all cells, and the result was far from what I expected, like below:
[image]
I wonder what kinds of pictures the pre-trained model covers. Does it have limits? Or is something wrong with my understanding of how to use a new picture?

ValueError: Invalid bias shape: (512,)

I trained the model in /local/Bootstrap/bootstrap.ipynb and saved it as an hdf5 file.
Then I downloaded model.json from https://www.floydhub.com/emilwallner/datasets/imagetocode and changed the paths to model.json and the hdf5 file in local/Bootstrap/test_model_accuracy.ipynb.
Finally, I ran the code in local/Bootstrap/test_model_accuracy.ipynb to test the accuracy.
However, it raises ValueError: Invalid bias shape: (512,), as shown below.
I haven't read model.json line by line, but I guess the error is caused by a mismatch between the JSON config and the hdf5 file, which are expected to match.
Anyway, how should I correct this?
[screenshot]
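
That diagnosis is most likely right: an architecture JSON from one training run generally won't match weight files from another. A hedged sketch of keeping the pair consistent (the Dense stand-in below is hypothetical; substitute the model actually trained in bootstrap.ipynb):

from keras.models import Sequential, model_from_json
from keras.layers import Dense

# Stand-in for the model trained in bootstrap.ipynb.
model = Sequential([Dense(512, input_shape=(8,))])

# Save architecture and weights from the same run...
with open('model.json', 'w') as f:
    f.write(model.to_json())
model.save_weights('weights.hdf5')

# ...and reload them as a pair, so every layer's shapes line up.
with open('model.json') as f:
    loaded = model_from_json(f.read())
loaded.load_weights('weights.hdf5')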

ModuleNotFoundError

When I run /floydhub/html HTML.ipynb, the first cell raises an error:


ModuleNotFoundError Traceback (most recent call last)
in ()
7 from keras.layers import Embedding, TimeDistributed, RepeatVector, LSTM, concatenate , Input, Reshape, Dense, Flatten
8 from keras.preprocessing.image import array_to_img, img_to_array, load_img
----> 9 from keras.applications.inception_resnet_v2 import InceptionResNetV2, preprocess_input
10 import numpy as np

ModuleNotFoundError: No module named 'keras.applications.inception_resnet_v2'
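
For what it's worth, keras.applications.inception_resnet_v2 only ships with later Keras 2.x releases, so this import error usually means the installed Keras predates it. After pip install --upgrade keras, a quick check:

import keras
print(keras.__version__)  # needs a release that includes inception_resnet_v2

# This import should now succeed.
from keras.applications.inception_resnet_v2 import InceptionResNetV2, preprocess_input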

commit messages

So far, most commit messages are not very meaningful.

Can you describe what the abbreviations mean? Longer, more descriptive commit messages are very helpful for other developers and contributors.

Where is example.png in the Floydhub directory

In the FloydHub folder structure below, where is the example.png that the network uses for testing? In the local directory it's under /resources as example.png, but I can't find it in the FloydHub directory. Please advise.

|-floydhub #Folder to run the project on Floydhub
| |-Bootstrap #The Bootstrap version
| | |-compiler #A compiler to turn the tokens to HTML/CSS (by pix2code)

warning: Memory error

Hi,
I had some problems running the demo (pix2code): a memory error when running locally.
Thanks! Hopefully this can be worked out.

val_loss does not decrease

I trained the bootstrap version of the model for about 50 epochs on 10 pictures.
Specifically, I use 90 percent of the samples for training and 10 percent for testing.
So far, the training loss has decreased from around 2.7 to around 0.05, whereas the val_loss stays almost the same, ranging from 1 to 3.
I am quite confused.
Should I use more samples (right now I only use 10 pictures)?
Or should I train longer?
I appreciate your help.

Bootstrap generator not working on Google Colab (Keras 2.4, Tensorflow 2.3.0)

This error only happens in the data generator in the bootstrap_generator file. The bootstrap version trains, but it can't take a dataset larger than 25 files due to a memory error, making it practically useless. Gist here: https://colab.research.google.com/gist/amahendrakar/39db0b14bce096a12d6f4c9961f687de/42038.ipynb
The error I get:

ValueError                                
Traceback (most recent call last)
<ipython-input-4-6927891f43ca> in <module>()
  1 # test the data generator
  2 generator = data_generator(texts, train_features, 1, max_sequence)
----> 3 loaded_model.fit_generator(generator, steps_per_epoch=steps, epochs=5, callbacks=callbacks_list, verbose=1)
  4 loaded_model.save(mydrive + '/output/weights.hdf5')

12 frames
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/func_graph.py in wrapper(*args, **kwargs)
971           except Exception as e:  # pylint:disable=broad-except
972             if hasattr(e, "ag_error_metadata"):
--> 973               raise e.ag_error_metadata.to_exception(e)
974             else:
975               raise

ValueError: in user code:

/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:806 train_function  *
    return step_function(self, iterator)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:796 step_function  **
    outputs = model.distribute_strategy.run(run_step, args=(data,))
/usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:1211 run
    return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:2585 call_for_each_replica
    return self._call_for_each_replica(fn, args, kwargs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:2945 _call_for_each_replica
    return fn(*args, **kwargs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:789 run_step  **
    outputs = model.train_step(data)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:757 train_step
    self.trainable_variables)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:2737 _minimize
    trainable_variables))
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py:562 _aggregate_gradients
    filtered_grads_and_vars = _filter_grads(grads_and_vars)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py:1271 _filter_grads
    ([v.name for _, v in grads_and_vars],))

ValueError: No gradients provided for any variable: ['embedding_1/embeddings:0', 'lstm_1/lstm_cell/kernel:0', 'lstm_1/lstm_cell/recurrent_kernel:0', 'lstm_1/lstm_cell/bias:0', 'conv2d_1/kernel:0', 'conv2d_1/bias:0', 'conv2d_2/kernel:0', 'conv2d_2/bias:0', 'conv2d_3/kernel:0', 'conv2d_3/bias:0', 'conv2d_4/kernel:0', 'conv2d_4/bias:0', 'conv2d_5/kernel:0', 'conv2d_5/bias:0', 'conv2d_6/kernel:0', 'conv2d_6/bias:0', 'conv2d_7/kernel:0', 'conv2d_7/bias:0', 'dense_1/kernel:0', 'dense_1/bias:0', 'dense_2/kernel:0', 'dense_2/bias:0', 'lstm_2/lstm_cell_1/kernel:0', 'lstm_2/lstm_cell_1/recurrent_kernel:0', 'lstm_2/lstm_cell_1/bias:0', 'lstm_3/lstm_cell_2/kernel:0', 'lstm_3/lstm_cell_2/recurrent_kernel:0', 'lstm_3/lstm_cell_2/bias:0', 'lstm_4/lstm_cell_3/kernel:0', 'lstm_4/lstm_cell_3/recurrent_kernel:0', 'lstm_4/lstm_cell_3/bias:0', 'dense_3/kernel:0', 'dense_3/bias:0'].
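
A hedged guess at the cause (TF 2.x behavior, not confirmed against this exact notebook): Model.fit expects a generator to yield a tuple (inputs, targets); if the generator yields a plain list, TF 2.x treats the whole thing as inputs, the loss never receives any targets, and training dies with precisely this "No gradients provided" error. A toy reproduction with a stand-in two-input model:

import numpy as np
import tensorflow as tf

# Tiny two-input model standing in for the image+sequence network.
img_in = tf.keras.Input(shape=(4,))
seq_in = tf.keras.Input(shape=(4,))
out = tf.keras.layers.Dense(1)(tf.keras.layers.concatenate([img_in, seq_in]))
model = tf.keras.Model([img_in, seq_in], out)
model.compile(loss='mse', optimizer='adam')

def gen():
    while True:
        a, b, y = np.ones((2, 4)), np.ones((2, 4)), np.ones((2, 1))
        yield ([a, b], y)       # tuple (inputs, targets): trains fine
        # yield [[a, b], y]     # plain list: "No gradients provided" on TF 2.x

model.fit(gen(), steps_per_epoch=2, epochs=1)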

Why do we need to limit the tokens in the input sequence?

Sorry for my ignorance, but why do we need to limit the number of tokens in the input sequence?
I don't understand its purpose; since we're training the model, why not just put in as many tokens as we can?

# Initialize the function to create the vocabulary 
tokenizer = Tokenizer(filters='', split=" ", lower=False)
# Create the vocabulary 
tokenizer.fit_on_texts([load_doc('resources/bootstrap.vocab')])

# Add one spot for the empty word in the vocabulary 
vocab_size = len(tokenizer.word_index) + 1
# Map the input sentences into the vocabulary indexes
train_sequences = tokenizer.texts_to_sequences(texts)
# The longest set of bootstrap tokens
max_sequence = max(len(s) for s in train_sequences)
# Specify how many tokens to have in each input sentence
max_length = 48

def preprocess_data(sequences, features):
    X, y, image_data = list(), list(), list()
    for img_no, seq in enumerate(sequences):
        for i in range(1, len(seq)):
            # Add the sentence until the current count(i) and add the current count to the output
            in_seq, out_seq = seq[:i], seq[i]
            # Pad all the input token sentences to max_sequence
            in_seq = pad_sequences([in_seq], maxlen=max_sequence)[0]
            # Turn the output into one-hot encoding
            out_seq = to_categorical([out_seq], num_classes=vocab_size)[0]
            # Add the corresponding image to the bootstrap token file
            image_data.append(features[img_no])
            # Cap the input sentence to 48 tokens and add it
            X.append(in_seq[-48:])
            y.append(out_seq)
    return np.array(X), np.array(y), np.array(image_data)

X, y, image_data = preprocess_data(train_sequences, train_features)
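
One likely reason, hedged and read straight off the code above: each row of X costs memory proportional to its width, and the LSTM has to step through every token in it, so capping the input at the 48 most recent tokens bounds both while keeping the local context that matters for the next token. Rough arithmetic with hypothetical sizes:

# Hypothetical sizes, just to show the scaling.
n_rows = 100_000          # one row per (prefix, next-token) pair
max_sequence = 300        # longest token file in the dataset
max_length = 48           # the cap used above

bytes_per_int = 4
uncapped_gb = n_rows * max_sequence * bytes_per_int / 1e9
capped_gb = n_rows * max_length * bytes_per_int / 1e9
print(f"X uncapped: {uncapped_gb:.2f} GB, capped: {capped_gb:.2f} GB")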

catch_config_error() missing 1 required positional argument: 'app'

E:\software\python3\demo\Screenshot-to-code-in-Keras-master\local>jupyter notebook
Traceback (most recent call last):
  File "e:\software\python3\lib\runpy.py", line 184, in _run_module_as_main
    "__main__", mod_spec)
  File "e:\software\python3\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "E:\software\python3\Scripts\jupyter-notebook.EXE\__main__.py", line 5, in <module>
  File "e:\software\python3\lib\site-packages\notebook\__init__.py", line 25, in <module>
    from .nbextensions import install_nbextension
  File "e:\software\python3\lib\site-packages\notebook\nbextensions.py", line 32, in <module>
    from traitlets.config.manager import BaseJSONConfigManager
  File "e:\software\python3\lib\site-packages\traitlets\config\__init__.py", line 6, in <module>
    from .application import *
  File "e:\software\python3\lib\site-packages\traitlets\config\application.py", line 120, in <module>
    class Application(SingletonConfigurable):
  File "e:\software\python3\lib\site-packages\traitlets\config\application.py", line 291, in Application
    def initialize(self, argv=None):
TypeError: catch_config_error() missing 1 required positional argument: 'app'

Why do I get this?
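
A hedged guess at the cause: the catch_config_error decorator changed signature between traitlets releases, so a notebook install paired with an incompatible traitlets raises exactly this TypeError. Upgrading the two together is the usual remedy:

import subprocess, sys

# Upgrade notebook and traitlets in lockstep so their APIs match.
subprocess.check_call([sys.executable, "-m", "pip", "install",
                       "--upgrade", "notebook", "traitlets"])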

Error loading weights

I've tried to load the weights I downloaded from https://www.floydhub.com/emilwallner/datasets/html_models.

In HTML.ipynb I've replaced:

# Compile the model
model = Model(inputs=[image_features, language_input], outputs=decoder_output)
model.compile(loss='categorical_crossentropy', optimizer='rmsprop')

by

# Compile the model
model = Model(inputs=[image_features, language_input], outputs=decoder_output)
model.load_weights("org-weights-epoch-0900---loss-0.0000.hdf5")
model.compile(loss='categorical_crossentropy', optimizer='rmsprop')

I got the error:

ValueError: You are trying to load a weight file containing 8 layers into a model with 7 layers.
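
A hedged workaround to try (not guaranteed, since the eighth layer still has to correspond to something in the model): let Keras match weights by layer name and skip unmatched ones, which bypasses the positional layer-count check. Continuing the snippet above:

# Match weights to layers by name instead of by position; unmatched
# layers are skipped instead of raising the layer-count error.
model.load_weights("org-weights-epoch-0900---loss-0.0000.hdf5", by_name=True)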

How to compile and get HTML for a new image

Hi,
I just came across this GitHub repository. I don't have much knowledge in the area. I just wanted to test the model with a couple of images I have, but the readme doesn't provide proper documentation. Can someone please tell me how to generate code for an image using the model provided in this link?

Error on /local/HTML/HTML.ipynb

I tried running the local HTML notebook and I keep getting this error:

ValueError                                Traceback (most recent call last)
<ipython-input-5-c9cfc217c4dd> in <module>()
      1 # Train the neural network
----> 2 model.fit([image_data, X], y, batch_size=16, shuffle=False, epochs=2)

~/.local/lib/python3.6/site-packages/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, **kwargs)
   1579             class_weight=class_weight,
   1580             check_batch_axis=False,
-> 1581             batch_size=batch_size)
   1582         # Prepare validation data.
   1583         do_validation = False

~/.local/lib/python3.6/site-packages/keras/engine/training.py in _standardize_user_data(self, x, y, sample_weight, class_weight, check_batch_axis, batch_size)
   1412                                     self._feed_input_shapes,
   1413                                     check_batch_axis=False,
-> 1414                                     exception_prefix='input')
   1415         y = _standardize_input_data(y, self._feed_output_names,
   1416                                     output_shapes,

~/.local/lib/python3.6/site-packages/keras/engine/training.py in _standardize_input_data(data, names, shapes, check_batch_axis, exception_prefix)
    151                             ' to have shape ' + str(shapes[i]) +
    152                             ' but got array with shape ' +
--> 153                             str(array.shape))
    154     return arrays
    155 

ValueError: Error when checking input: expected input_2 to have shape (None, 8, 8, 1536) but got array with shape (2306, 1536, 8, 8)
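
A hedged reading of those shapes: the model expects channels-last input (None, 8, 8, 1536) while the array is channels-first (2306, 1536, 8, 8), which suggests an image_data_format mismatch in ~/.keras/keras.json. Transposing the feature axes would reconcile them:

import numpy as np

# Stand-in array with the shape from the error message.
image_data = np.zeros((2306, 1536, 8, 8), dtype=np.float32)
# Move the 1536 channels to the last axis: (N, 1536, 8, 8) -> (N, 8, 8, 1536).
image_data = np.transpose(image_data, (0, 2, 3, 1))
print(image_data.shape)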

Mirroring

Splashtop Streamer, a mirroring app on PC, keeps asking for a code from the network administrator. What should I do?

creating dataset

Hi, many thanks for sharing the data and code. How can we take this forward, and how can we generate more data beyond the synthesized data? Can we create the same kind of dataset for real-world HTML pages? If so, how can we generate the .gui files for them? If you have any resources or thoughts, please share them with us.

module 'ast' has no attribute 'AnnAssign'

I'm running Jupyter on the local version on Windows 10 with Python 3.6 and get this error:

PS C:\Git\Screenshot-to-code-in-Keras\local> jupyter notebook
Traceback (most recent call last):
  File "c:\users\mount\appdata\local\programs\python\python36\lib\runpy.py", line 184, in _run_module_as_main
    "__main__", mod_spec)
  File "c:\users\mount\appdata\local\programs\python\python36\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "C:\Users\Mount\AppData\Local\Programs\Python\Python36\Scripts\jupyter-notebook.EXE\__main__.py", line 9, in <module>
  File "c:\users\mount\appdata\local\programs\python\python36\lib\site-packages\jupyter_core\application.py", line 266, in launch_instance
    return super(JupyterApp, cls).launch_instance(argv=argv, **kwargs)
  File "c:\users\mount\appdata\local\programs\python\python36\lib\site-packages\traitlets\config\application.py", line 657, in launch_instance
    app.initialize(argv)
  File "<decorator-gen-7>", line 2, in initialize
  File "c:\users\mount\appdata\local\programs\python\python36\lib\site-packages\traitlets\config\application.py", line 87, in catch_config_error
    return method(app, *args, **kwargs)
  File "c:\users\mount\appdata\local\programs\python\python36\lib\site-packages\notebook\notebookapp.py", line 1628, in initialize
    self.init_webapp()
  File "c:\users\mount\appdata\local\programs\python\python36\lib\site-packages\notebook\notebookapp.py", line 1378, in init_webapp
    self.jinja_environment_options,
  File "c:\users\mount\appdata\local\programs\python\python36\lib\site-packages\notebook\notebookapp.py", line 159, in __init__
    default_url, settings_overrides, jinja_env_options)
  File "c:\users\mount\appdata\local\programs\python\python36\lib\site-packages\notebook\notebookapp.py", line 271, in init_settings
    nbextensions_path=jupyter_app.nbextensions_path,
  File "c:\users\mount\appdata\local\programs\python\python36\lib\site-packages\notebook\notebookapp.py", line 1061, in nbextensions_path
    from IPython.paths import get_ipython_dir
  File "c:\users\mount\appdata\local\programs\python\python36\lib\site-packages\IPython\__init__.py", line 55, in <module>
    from .terminal.embed import embed
  File "c:\users\mount\appdata\local\programs\python\python36\lib\site-packages\IPython\terminal\embed.py", line 15, in <module>
    from IPython.core.interactiveshell import DummyMod, InteractiveShell
  File "c:\users\mount\appdata\local\programs\python\python36\lib\site-packages\IPython\core\interactiveshell.py", line 111, in <module>
    _assign_nodes         = (ast.AugAssign, ast.AnnAssign, ast.Assign)
AttributeError: module 'ast' has no attribute 'AnnAssign'
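
A hedged check: ast.AnnAssign only exists on Python 3.6+, so if the interpreter really is 3.6, the likely culprit is a stray ast.py shadowing the standard library or Jupyter launching a different Python. This prints enough to tell:

import ast, sys

print(sys.version)                 # the interpreter Jupyter actually launched
print(ast.__file__)                # should point into the stdlib, not a local ast.py
print(hasattr(ast, 'AnnAssign'))   # True on a healthy Python 3.6+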

Does the model do OCR?

This project is fantastic, and a great learning source as well. 😄

Does the model do OCR on the text contained in the various boxes?
Thanks

Pretrained weights

Have you uploaded a set of pretrained weights for this project anywhere?

When I run floydhub/HTML.ipynb, I encounter a problem.

ModuleNotFoundError Traceback (most recent call last)
in ()
7 from keras.layers import Embedding, TimeDistributed, RepeatVector, LSTM, concatenate , Input, Reshape, Dense, Flatten
8 from keras.preprocessing.image import array_to_img, img_to_array, load_img
----> 9 from keras.applications.inception_resnet_v2 import InceptionResNetV2, preprocess_input
10 import numpy as np

ModuleNotFoundError: No module named 'keras.applications.inception_resnet_v2'
