nmt-keras's Introduction

NMT-Keras


Neural Machine Translation with Keras.

Library documentation: nmt-keras.readthedocs.io

Attentional recurrent neural network NMT model


Transformer NMT model


Features (in addition to the full Keras cosmos) include, among others:

  • Attentional RNN and Transformer NMT models
  • Online learning and interactive NMT protocols (see the interactiveNMT branch)
  • Beam search and ensemble decoding (sample_ensemble.py)
  • Scoring of parallel corpora (score.py)

Installation

Assuming you have pip installed and up to date (version > 18), run:

git clone https://github.com/lvapeab/nmt-keras
cd nmt-keras
pip install -e .

to install the library.

Requirements

NMT-Keras requires the following libraries:

  • Our version of Keras (the MarcBS/keras fork)
  • Multimodal Keras Wrapper (keras_wrapper)

For accelerating the training and decoding on CUDA GPUs, you can optionally install CuDNN.

For evaluating with additional metrics (METEOR, TER, etc.), you can use the Coco-caption evaluation package and set METRICS='coco' in the config.py file. This package requires Java (version 1.8.0 or newer).
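For instance, the relevant line of config.py would look like this (a minimal sketch, showing only the option named above):

METRICS = 'coco'  # evaluate with the Coco-caption package (BLEU, METEOR, TER, CIDEr, ROUGE_L)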

Usage

Training

  1. Set a training configuration in the config.py script. Each parameter is commented. See the documentation for further info about each specific hyperparameter. You can also specify parameters when calling the main.py script, following the syntax Key=Value (see the example after this list).

  2. Train!:

python main.py
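For example, to override a couple of hyperparameters from the command line (a sketch using N_LAYERS_ENCODER and N_LAYERS_DECODER, two options that appear in config.py):

python main.py N_LAYERS_ENCODER=2 N_LAYERS_DECODER=2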

Decoding

Once the model is trained, we can translate new text using the sample_ensemble.py script. Please refer to the ensembling_tutorial for more details about this script. In short, if we want to use the models from the first two epochs to translate the examples/EuTrans/test.en file, just run:

python sample_ensemble.py \
    --models trained_models/tutorial_model/epoch_1 \
             trained_models/tutorial_model/epoch_2 \
    --dataset datasets/Dataset_tutorial_dataset.pkl \
    --text examples/EuTrans/test.en

Scoring

The score.py script can be used to obtain the (-log) probabilities of a parallel corpus. Its syntax is the following:

python score.py --help
usage: Use several translation models for scoring source--target pairs
       [-h] -ds DATASET [-src SOURCE] [-trg TARGET] [-s SPLITS [SPLITS ...]]
       [-d DEST] [-v] [-c CONFIG] --models MODELS [MODELS ...]
optional arguments:
    -h, --help            show this help message and exit
    -ds DATASET, --dataset DATASET
                            Dataset instance with data
    -src SOURCE, --source SOURCE
                            Text file with source sentences
    -trg TARGET, --target TARGET
                            Text file with target sentences
    -s SPLITS [SPLITS ...], --splits SPLITS [SPLITS ...]
                            Splits to sample. Should be already included into the
                            dataset object.
    -d DEST, --dest DEST  File to save scores in
    -v, --verbose         Be verbose
    -c CONFIG, --config CONFIG
                            Config pkl for loading the model configuration. If not
                            specified, hyperparameters are read from config.py
    --models MODELS [MODELS ...]
                            path to the models
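For example, to score a source-target file pair with the first-epoch model (a sketch: the paths are illustrative, reusing those from the decoding example above, and test.es is a hypothetical target file):

python score.py --models trained_models/tutorial_model/epoch_1 \
                --dataset datasets/Dataset_tutorial_dataset.pkl \
                --source examples/EuTrans/test.en \
                --target examples/EuTrans/test.es \
                --dest scores.txt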

Advanced features

Other features such as online learning or interactive NMT protocols are implemented in the interactiveNMT branch.
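To try them, check out that branch after cloning (standard git usage):

git clone https://github.com/lvapeab/nmt-keras
cd nmt-keras
git checkout interactiveNMT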

Resources

Citation

If you use this toolkit in your research, please cite:

@article{nmt-keras:2018,
 journal = {The Prague Bulletin of Mathematical Linguistics},
 title = {{NMT-Keras: a Very Flexible Toolkit with a Focus on Interactive NMT and Online Learning}},
 author = {\'{A}lvaro Peris and Francisco Casacuberta},
 year = {2018},
 volume = {111},
 pages = {113--124},
 doi = {10.2478/pralin-2018-0010},
 issn = {0032-6585},
 url = {https://ufal.mff.cuni.cz/pbml/111/art-peris-casacuberta.pdf}
}

NMT-Keras was used in a number of papers:

Acknowledgement

Much of this library has been developed together with Marc Bolaños (web page) for other sequence-to-sequence problems.

To see other projects following the same philosophy and style of NMT-Keras, take a look at:

TMA: Egocentric captioning based on temporally-linked sequences.

VIBIKNet: Visual question answering.

ABiViRNet: Video description.

Sentence SelectioNN: Sentence classification and selection.

DeepQuest: State-of-the-art models for multi-level Quality Estimation.

Warning!

The Theano backend is no longer tested, although it should still work. There is a known issue with the Theano backend: when running NMT-Keras, it will show the following message:

[...]
raise theano.gof.InconsistencyError("Trying to reintroduce a removed node")
InconsistencyError: Trying to reintroduce a removed node

This is not a critical error: the model keeps working and it is safe to ignore the message. However, if you want to suppress it, use the Theano flag optimizer_excluding=scanOp_pushout_output.
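For instance, assuming a standard Theano setup where flags are passed through the THEANO_FLAGS environment variable:

THEANO_FLAGS=optimizer_excluding=scanOp_pushout_output python main.py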

Contact

Álvaro Peris (web page): [email protected]

nmt-keras's People

Contributors

davidwilby, drj11, kasparpeterson, lvapeab, midobal, vp007-py


nmt-keras's Issues

How to resume training

Hey,

Is there a way to resume training from where it stopped? I was training the model and it suddenly stopped for some reason. Now I want to resume the training; what would be the right way to do it?

Thanks

Error when running nmt_model.trainNet(dataset, training_params)

>>> nmt_model.trainNet(dataset, training_params)

ERROR (theano.gof.opt): SeqOptimizer apply <theano.scan_module.scan_opt.PushOutScanOutput object at 0x7f90fa72e6d0>
[13/06/2017 15:41:30] SeqOptimizer apply <theano.scan_module.scan_opt.PushOutScanOutput object at 0x7f90fa72e6d0>
ERROR (theano.gof.opt): Traceback:
[13/06/2017 15:41:30] Traceback:

ERROR (theano.gof.opt): Traceback (most recent call last):
  File "/home/hanyaqian/anaconda2/lib/python2.7/site-packages/theano/gof/opt.py", line 235, in apply
    sub_prof = optimizer.optimize(fgraph)
  File "/home/hanyaqian/anaconda2/lib/python2.7/site-packages/theano/gof/opt.py", line 87, in optimize
    ret = self.apply(fgraph, *args, **kwargs)
  File "/home/hanyaqian/anaconda2/lib/python2.7/site-packages/theano/scan_module/scan_opt.py", line 685, in apply
    node = self.process_node(fgraph, node)
  File "/home/hanyaqian/anaconda2/lib/python2.7/site-packages/theano/scan_module/scan_opt.py", line 745, in process_node
    node, args)
  File "/home/hanyaqian/anaconda2/lib/python2.7/site-packages/theano/scan_module/scan_opt.py", line 854, in push_out_inner_vars
    add_as_nitsots)
  File "/home/hanyaqian/anaconda2/lib/python2.7/site-packages/theano/scan_module/scan_opt.py", line 906, in add_nitsot_outputs
    reason='scanOp_pushout_output')
  File "/home/hanyaqian/anaconda2/lib/python2.7/site-packages/theano/gof/toolbox.py", line 391, in replace_all_validate_remove
    chk = fgraph.replace_all_validate(replacements, reason)
  File "/home/hanyaqian/anaconda2/lib/python2.7/site-packages/theano/gof/toolbox.py", line 365, in replace_all_validate
    fgraph.validate()
  File "/home/hanyaqian/anaconda2/lib/python2.7/site-packages/theano/gof/toolbox.py", line 256, in validate_
    ret = fgraph.execute_callbacks('validate')
  File "/home/hanyaqian/anaconda2/lib/python2.7/site-packages/theano/gof/fg.py", line 589, in execute_callbacks
    fn(self, *args, **kwargs)
  File "/home/hanyaqian/anaconda2/lib/python2.7/site-packages/theano/gof/toolbox.py", line 422, in validate
    raise theano.gof.InconsistencyError("Trying to reintroduce a removed node")
InconsistencyError: Trying to reintroduce a removed node

[13/06/2017 15:44:00] Evaluating only every 2 epochs

[13/06/2017 15:44:00] <<< Saving model to trained_models/tutorial_model//epoch_1 ... >>>
[13/06/2017 15:44:00] <<< Saving model_init to trained_models/tutorial_model//epoch_1_structure_init.json... >>>
[13/06/2017 15:44:00] <<< Saving model_next to trained_models/tutorial_model//epoch_1_structure_next.json... >>>
[13/06/2017 15:44:00] <<< Model saved >>>
[13/06/2017 15:45:55] 
<<< Predicting outputs of val set >>>
Sampling 100/100  -  ETA: 0s 
 Total cost of the translations: 96.768829 	 Average cost of the translations: 0.967688
The sampling took: 22.351837 secs (Speed: 0.223518 sec/sample)
[13/06/2017 15:46:17] Decoding beam search prediction ...
[13/06/2017 15:46:17] Evaluating on metric coco
[13/06/2017 15:46:25] Computing coco scores on the val split...
[13/06/2017 15:46:25] Bleu_1: 0.943993237198
[13/06/2017 15:46:25] Bleu_2: 0.927467567158
[13/06/2017 15:46:25] Bleu_3: 0.91578598517
[13/06/2017 15:46:25] Bleu_4: 0.904647321576
[13/06/2017 15:46:25] CIDEr: 8.86298033527
[13/06/2017 15:46:25] METEOR: 0.620198973276
[13/06/2017 15:46:25] ROUGE_L: 0.959634757281
[13/06/2017 15:46:25] TER: 0.0556680161943
[13/06/2017 15:46:25] Done evaluating on metric coco
[13/06/2017 15:46:25] <<< Progress plot saved in trained_models/tutorial_model//epoch_2.jpg >>>
[13/06/2017 15:46:25] <<< Saving model to trained_models/tutorial_model//epoch_2 ... >>>
[13/06/2017 15:46:25] <<< Saving model_init to trained_models/tutorial_model//epoch_2_structure_init.json... >>>
[13/06/2017 15:46:25] <<< Saving model_next to trained_models/tutorial_model//epoch_2_structure_next.json... >>>
[13/06/2017 15:46:25] <<< Model saved >>>


Hi @lvapeab

The above is some log output. What is wrong here? It looks like the error is not fatal.

TypeError: only integer scalar arrays can be converted to a scalar index

I am running the default dataset with nmt-keras, but when I run the command python main.py, after 1 epoch something goes wrong, with the following error:

Epoch 2/500
102/198 [==============>...............] - ETA: 1:17 - loss: 2.4363
[22/05/2018 07:57:59] <<< Predicting outputs of train set >>>
Starting dataLoad_process_0...
Process Process-6:
Traceback (most recent call last):
File "/usr/lib/python2.7/multiprocessing/process.py", line 258, in _bootstrap
self.run()
File "/usr/lib/python2.7/multiprocessing/process.py", line 114, in run
self._target(*self._args, **self.kwargs)
File "build/bdist.linux-x86_64/egg/keras_wrapper/dataset.py", line 105, in dataLoad
dataAugmentation=data_augmentation)
File "build/bdist.linux-x86_64/egg/keras_wrapper/dataset.py", line 3241, in getXY
x = eval('self.X
' + set_name + '[id_in][last:new_last]')
File "", line 1, in
TypeError: only integer scalar arrays can be converted to a scalar index
Can you tell me why this issue happened and how to fix it?

One more thing I want to tell you: I have tried to use nmt-keras to train a Chinese-English model. Its usability is not great yet, but I think I will make it better in the future. Thanks for your help.

AttributeError: 'TranslationModel' object has no attribute 'tokenize_none'

Hello, I have used nmt-keras to train a Chinese-English model.
Now I'd like to use your demo-web components to show it on a web page.
I am using the interactive_NMT branch of nmt-keras, but when I tried to start it following the readme.md, a problem appeared at this step.
I ran this command:
python ./sample_server.py --dataset datasets/Dataset.pkl --port=8888 --config trained_models/config.pkl --models trained_models/update_15000

This is the error output:
from ._conv import register_converters as _register_converters
Using Theano backend.
[17/04/2018 10:46:46] Using NumPy C-API based implementation for BLAS functions.
[17/04/2018 10:46:47] CACHEDIR=/home/yanmengqi/.cache/matplotlib
[17/04/2018 10:46:47] Using fontManager instance from /home/yanmengqi/.cache/matplotlib/fontList.json
[17/04/2018 10:46:47] backend agg version v2.2
[17/04/2018 10:46:47] Loading parameters from trained_models/config.pkl
[17/04/2018 10:46:47] <<< Loading Dataset instance from datasets/Dataset.pkl ... >>>
[17/04/2018 10:46:47] <<< Dataset instance loaded >>>
Traceback (most recent call last):
  File "./sample_server.py", line 469, in <module>
    main()
  File "./sample_server.py", line 349, in main
    tokenize_f = eval('dataset.' + params.get('TOKENIZATION_METHOD', 'tokenize_none'))
  File "<string>", line 1, in <module>
AttributeError: 'TranslationModel' object has no attribute 'tokenize_none'

How to remove keras_wrapper from my Linux environment?

Hi @lvapeab

I want to remove the previous keras_wrapper from my Linux environment and reimplement keras_wrapper from scratch. I have deleted the keras_wrapper path from ~/.bashrc, but it still does not work.

File "build/bdist.linux-x86_64/egg/keras_wrapper/dataset.py", line 859, in setOutput

Confused on several concepts

Hi @lvapeab

Can you help me understand the following concepts? I am a bit confused when reading the Keras source code.

  • inbound_layers
  • outbound_layer here
  • inbound_nodes
  • outbound_nodes here

Regards

Get predictions hidden states

How can I get the last vectors before the softmax at decoding time?
More precisely, the Ui vectors in this figure: https://raw.githubusercontent.com/lvapeab/nmt-keras/master/examples/documentation/attention_nmt_model.png?token=AEf6E5RhGVqGRSmYi87EbtiGZK7lPxrFks5ZAx-KwA%3D%3D

I want to predict on some test set and get each example's corresponding Ui's.
I would prefer to get the ones corresponding to the best beam search hypothesis, but I would be glad to get them even for beamsearch = 0, meaning no beam search.

Some weird pre-trained word vector warning

I took the necessary pre-processing steps using the utils directory (running the script "preprocess_binary_word_vectors.py" on GoogleNews-vectors-negative300.bin).
But I am still getting this warning:
'It seems that the pretrained word vectors provided for the target text are not in npy format.'
It also suggests that I should run "preprocess_binary_word_vectors.py", which I already did. Should I just ignore the warning?

Thanks in advance :)

Problems with coverage penalty

Hi :)
When I change the coverage penalty to true:
COVERAGE_PENALTY = True # Apply source coverage penalty
COVERAGE_NORM_FACTOR = 0.2 # Coverage penalty factor

I get the following error on evaluation time :
nmt-keras/src/keras-wrapper/keras_wrapper/cnn_model.py", line 2140, in predictBeamSearchNet
length_penalties = [1.0 for _ in len(samples)]
TypeError: 'int' object is not iterable

I don't know if it matters, but the length penalty is still set to False.
I think the same problem will occur on line 2272 of the same file:
coverage_penalties = [0.0 for _ in len(samples)]
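A plausible fix, just my guess and not a confirmed patch, would be to iterate over range(len(samples)) instead of the bare integer:

# Hypothetical patch: len(samples) is an int, so "for _ in len(samples)" fails;
# range() yields one penalty entry per candidate sample.
length_penalties = [1.0 for _ in range(len(samples))]    # cnn_model.py, line 2140
coverage_penalties = [0.0 for _ in range(len(samples))]  # cnn_model.py, line 2272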

Thanks again for all of your help; I really appreciate it.

Error while training with the command python main.py

Training with the command python main.py fails: it eventually hangs with the following error. How do we resolve it?

[21/05/2018 14:41:05] Evaluating on metric coco
Traceback (most recent call last):
File "main.py", line 444, in
train_model(parameters, args.dataset)
File "main.py", line 167, in train_model
nmt_model.trainNet(dataset, training_params)
File "c:\users\f.fernandes\appdata\local\programs\python\python36\scripts\src\keras-wrapper\keras_wrapper\cnn_model.py", line 829, in trainNet
self.__train(ds, params)
File "c:\users\f.fernandes\appdata\local\programs\python\python36\scripts\src\keras-wrapper\keras_wrapper\cnn_model.py", line 1088, in __train
initial_epoch=params['epoch_offset'])
File "c:\users\f.fernandes\appdata\local\programs\python\python36\scripts\src\keras\keras\legacy\interfaces.py", line 91, in wrapper
return func(*args, **kwargs)
File "c:\users\f.fernandes\appdata\local\programs\python\python36\scripts\src\keras\keras\engine\training.py", line 1260, in fit_generator
initial_epoch=initial_epoch)
File "c:\users\f.fernandes\appdata\local\programs\python\python36\scripts\src\keras\keras\engine\training_generator.py", line 229, in fit_generator
callbacks.on_epoch_end(epoch, epoch_logs)
File "c:\users\f.fernandes\appdata\local\programs\python\python36\scripts\src\keras\keras\callbacks.py", line 76, in on_epoch_end
callback.on_epoch_end(epoch, logs)
File "c:\users\f.fernandes\appdata\local\programs\python\python36\scripts\src\keras-wrapper\keras_wrapper\extra\callbacks.py", line 277, in on_epoch_end
self.evaluate(epoch, counter_name='epoch')
File "c:\users\f.fernandes\appdata\local\programs\python\python36\scripts\src\keras-wrapper\keras_wrapper\extra\callbacks.py", line 486, in evaluate
split=s)
File "c:\users\f.fernandes\appdata\local\programs\python\python36\scripts\src\keras-wrapper\keras_wrapper\extra\evaluation.py", line 60, in get_coco_score
score, _ = scorer.compute_score(refs, hypo)
File "c:\users\f.fernandes\appdata\local\programs\python\python36\scripts\src\coco-caption\pycocoevalcap\ter\ter.py", line 50, in compute_score
with open(self.ref_filename, 'w', encoding='utf-8') as f:
FileNotFoundError: [Errno 2] No such file or directory: '/tmp/1867076.ref'

ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape[50,68,394054]

I am trying to use this code for English-Hindi translation. Following are the details of my corpus:
training file = 1761368 lines
testing file = 3537 lines
validation file = 3537 lines
I have a server with 4 GTX 1080 GPUs of 8 GB each. The htop command shows the following result when this program is not running.
[htop screenshot]

I tried reducing the batch size to 16, but nothing worked. Also, I have edited the following parameters in the config file: MAX_INPUT_TEXT_LEN = 11351 and MAX_OUTPUT_TEXT_LEN = 7898. I also tried running the program with the default parameters, but the same error persists.

Please help.
Thanks

can't train with multiple encoder/decoder layers

When I change the config file in the following way:
N_LAYERS_ENCODER = 2
N_LAYERS_DECODER = 2
(all the rest is kept the same)

I get the following :

Traceback (most recent call last):
File "main.py", line 434, in
train_model(parameters, args.dataset)
File "main.py", line 81, in train_model
store_path=params['STORE_PATH'])
File "/home/[email protected]/nmt-keras/model_zoo.py", line 113, in init
eval('self.' + model_type + '(params)')
File "", line 1, in
File "/home/[email protected]/nmt-keras/model_zoo.py", line 468, in AttentionRNNEncoderDecoder
current_rnn_output = shared_proj_h_list[-1](...)
File "/home/[email protected]/nmt-keras/src/keras/keras/legacy/layers.py", line 969, in call
return super(Recurrent, self).call(inputs, **kwargs)
File "/home/[email protected]/nmt-keras/src/keras/keras/engine/topology.py", line 561, in call
self.assert_input_compatibility(inputs)
File "/home/[email protected]/nmt-keras/src/keras/keras/engine/topology.py", line 460, in assert_input_compatibility
str(K.ndim(x)))

This didn't happen to me with the same code I downloaded two months ago; maybe some recent update introduced the problem?

I'm sorry for bothering you with all my questions, but I promise to mention you in my acknowledgments section if my article gets published :)

another little problem in 3_decoding_tutorial

Hi @lvapeab

there is an error when I run:

>>>filepath = nmt_model.model_path+'/' + 'test' + '_sampling.pred' # results file
>>>import utils
>>> utils.read_write.list2file(filepath, predictions)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'module' object has no attribute 'read_write'

Should I use the module from keras_wrapper instead?

>>> import keras_wrapper
>>> keras_wrapper.extra.read_write.list2file(filepath, predictions)
>>>

How are dev/test set words treated at evaluation time

I'm trying to use your code for set-to-sequence translation: I have a set of words (with no order between them) and I want to generate a sentence. I realized that this could be done easily if I switched the regular attention in your code to pointer attention. Do you have any advice?

ImportError: No module named keras_wrapper.dataset

Hi~
Cloning https://github.com/MarcBS/keras failed with an error, so I downloaded keras-master.zip in Chrome and copied keras under nmt-keras/src after unzipping. When I execute python main.py, there is still an error:

Traceback (most recent call last):
  File "main.py", line 6, in <module>
    from data_engine.prepare_data import build_dataset
  File "/home/hanyaqian/coders/keras-tutorial/nmt-keras/data_engine/prepare_data.py", in <module>
    from keras_wrapper.dataset import Dataset, saveDataset, loadDataset
ImportError: No module named keras_wrapper.dataset

Can you give me some help? Thanks!

Error in running main.py without modification

Hi,
I wanted to run main.py, but I got an error like the one below.
The error happens at
ctx_mean = MaskedMean()(annotations)
where it might be initialized by means of an MLP.

Does anyone know a solution to this?
Thanks.

Using TensorFlow backend.
[02/08/2017 16:19:57] Log file (/home/a/.picloud/cloud.log) opened
[02/08/2017 16:19:57] Running training.
[02/08/2017 16:19:57] Building EuTrans_esen dataset
[02/08/2017 16:19:57] Applying tokenization function: "tokenize_none".
[02/08/2017 16:19:57] Creating vocabulary for data with id 'target_text'.
[02/08/2017 16:19:57] Total: 513 unique words in 9900 sentences with a total of 98304 words.
[02/08/2017 16:19:57] Creating dictionary of all words
[02/08/2017 16:19:57] Loaded "train" set outputs of type "text" with id "target_text" and length 9900.
[02/08/2017 16:19:57] Loaded "train" set inputs of type "file-name" with id "raw_target_text".
[02/08/2017 16:19:57] Applying tokenization function: "tokenize_none".
[02/08/2017 16:19:57] Loaded "val" set outputs of type "text" with id "target_text" and length 100.
[02/08/2017 16:19:57] Loaded "val" set inputs of type "file-name" with id "raw_target_text".
[02/08/2017 16:19:57] Applying tokenization function: "tokenize_none".
[02/08/2017 16:19:57] Loaded "test" set outputs of type "text" with id "target_text" and length 2996.
[02/08/2017 16:19:57] Loaded "test" set inputs of type "file-name" with id "raw_target_text".
[02/08/2017 16:19:57] Applying tokenization function: "tokenize_none".
[02/08/2017 16:19:57] Creating vocabulary for data with id 'source_text'.
[02/08/2017 16:19:57] Total: 686 unique words in 9900 sentences with a total of 96172 words.
[02/08/2017 16:19:57] Creating dictionary of all words
[02/08/2017 16:19:57] Loaded "train" set inputs of type "text" with id "source_text" and length 9900.
[02/08/2017 16:19:57] Applying tokenization function: "tokenize_none".
[02/08/2017 16:19:57] Reusing vocabulary named "target_text" for data with id "state_below".
[02/08/2017 16:19:57] Loaded "train" set inputs of type "text" with id "state_below" and length 9900.
[02/08/2017 16:19:57] Loaded "train" set inputs of type "file-name" with id "raw_source_text".
[02/08/2017 16:19:57] Applying tokenization function: "tokenize_none".
[02/08/2017 16:19:57] Loaded "val" set inputs of type "text" with id "source_text" and length 100.
[02/08/2017 16:19:57] Loaded "val" set inputs of type "ghost" with id "state_below" and length 100.
[02/08/2017 16:19:57] Loaded "val" set inputs of type "file-name" with id "raw_source_text".
[02/08/2017 16:19:57] Applying tokenization function: "tokenize_none".
[02/08/2017 16:19:57] Loaded "test" set inputs of type "text" with id "source_text" and length 2996.
[02/08/2017 16:19:57] Loaded "test" set inputs of type "ghost" with id "state_below" and length 2996.
[02/08/2017 16:19:57] Loaded "test" set inputs of type "file-name" with id "raw_source_text".
[02/08/2017 16:19:57] Keeping 1 captions per input on the val set.
[02/08/2017 16:19:57] Samples reduced to 100 in val set.
[02/08/2017 16:19:57] <<< Saving Dataset instance to datasets//Dataset_EuTrans_esen.pkl ... >>>
[02/08/2017 16:19:57] <<< Dataset instance saved >>>
[02/08/2017 16:19:57] <<< Building GroundHogModel Translation_Model >>>
Traceback (most recent call last):
File "/home/a/workspace/test/nmt/nmt-keras-master/main.py", line 409, in
train_model(parameters, args.dataset)
File "/home/a/workspace/test/nmt/nmt-keras-master/main.py", line 79, in train_model
store_path=params['STORE_PATH'])
File "/home/a/workspace/test/nmt/nmt-keras-master/model_zoo.py", line 113, in init
eval('self.' + model_type + '(params)')
File "", line 1, in
File "/home/a/workspace/test/nmt/nmt-keras-master/model_zoo.py", line 344, in GroundHogModel
ctx_mean = MaskedMean()(annotations)

File "/home/a/다운로드/src/keras/keras/engine/topology.py", line 596, in call
output = self.call(inputs, **kwargs)
File "/home/a/다운로드/src/keras/keras/layers/core.py", line 1085, in call
return K.mean(mask[:, :, None] * x, axis=1)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/math_ops.py", line 829, in binary_op_wrapper
y = ops.convert_to_tensor(y, dtype=x.dtype.base_dtype, name="y")
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 676, in convert_to_tensor
as_ref=False)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 741, in internal_convert_to_tensor
ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 614, in _TensorTensorConversionFunction
% (dtype.name, t.dtype.name, str(t)))
ValueError: Tensor conversion requested dtype bool for Tensor with dtype float32: 'Tensor("annotations_batch_normalization/add_4:0", shape=(?, ?, 128), dtype=float32)'

ImportError: No module named beam_search_interactive

I want to run the code in demo-web, but I always get this error when I use this command to start the PHP server:
python ./sample_server.py --dataset datasets/Dataset.pkl --port=8888 --config trained_models/config.pkl --models trained_models/update_15000

NoneType object cannot be interpreted as an integer

I am following https://nmt-keras.readthedocs.io/en/latest/tutorial.html#dataset-tutorial. Everything runs just fine until I reach keep_n_captions(ds, repeat=1, n=1, set_names=['val']). The error is:

[26/01/2018 11:37:44] Keeping 1 captions per input on the val set.
Traceback (most recent call last):
File "/Users/name/install/nmt-keras/datasetspaneng.py", line 81, in
keep_n_captions(ds, repeat=1, n=1, set_names=['val'])
File "/Users/name/install/nmt-keras/data_engine/prepare_data.py", line 272, in keep_n_captions
for i in range(0, n_samples, repeat):
TypeError: 'NoneType' object cannot be interpreted as an integer

ImportError: cannot import name AlphaRegularizer

Hi!
Thanks for your kind work; I am evaluating NMT-Keras.
I have tried several versions of Keras to deal with the "ImportError: cannot import name AlphaRegularizer" problem.
Could you help me?

----> 6 from model_zoo import TranslationModel
7 import utils
8 from keras_wrapper.cnn_model import loadModel

D:\python-2.7.14\src\voice\nmt-py27-win-keras\nmt-keras\model_zoo.py in <module>()
5 from keras.models import model_from_json, Model
6 from keras.optimizers import Adam, RMSprop, Nadam, Adadelta, SGD, Adagrad, Adamax
----> 7 from keras.regularizers import l2, AlphaRegularizer
8 from keras_wrapper.cnn_model import Model_Wrapper
9 from keras_wrapper.extra.regularize import Regularize

ImportError: cannot import name AlphaRegularizer

About Multi-Source Neural Translation

Our last contact was ten months ago, when I was looking for an internship. Now I am a full-time employee at a company I am happy with; thank you for helping me back then. Excuse me, have you ever run an experiment on multi-source neural translation? Compared with encoder-decoder NMT, how does multi-source neural translation perform, for example in training time, hardware requirements and so on?

I'd like to achieve a NMT

Hello, I'd like to build a Chinese-English neural machine translation system. Can I just change the training set (to Chinese and English)?
Do I need any other operations?
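For reference, this is roughly what I imagine changing (a hypothetical config.py fragment; apart from TOKENIZATION_METHOD, the option names are my assumptions):

# config.py (hypothetical fragment; option names are assumptions)
DATASET_NAME = 'zh-en'                  # identifier for the new dataset
SRC_LAN, TRG_LAN = 'zh', 'en'           # extensions of the parallel corpus files
DATA_ROOT_PATH = 'examples/zh-en/'      # folder containing the train/dev/test files
TOKENIZATION_METHOD = 'tokenize_none'   # Chinese usually needs pre-segmentation first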

OSError: [Errno 2] No such file or directory

Epoch 1/500
198/198 [==============================] - 156s 789ms/step - loss: 6.4545

[12/03/2018 19:22:10] <<< Predicting outputs of val set >>>
Sampling 100/100 - ETA: 0s
Total cost of the translations: 851.135783 Average cost of the translations: 8.511358
The sampling took: 12.092935 secs (Speed: 0.120929 sec/sample)

[12/03/2018 19:22:22] Prediction output 0: target_text (text)
[12/03/2018 19:22:22] Decoding beam search prediction ...
[12/03/2018 19:22:22] Using heuristic 0
[12/03/2018 19:22:22] Evaluating on metric coco
Traceback (most recent call last):
File "main.py", line 419, in
train_model(parameters, args.dataset)
File "main.py", line 164, in train_model
nmt_model.trainNet(dataset, training_params)
File "/home/yanmengqi/nmt-keras/src/keras-wrapper/keras_wrapper/cnn_model.py", line 813, in trainNet
self.__train(ds, params)
File "/home/yanmengqi/nmt-keras/src/keras-wrapper/keras_wrapper/cnn_model.py", line 1049, in __train
initial_epoch=params['epoch_offset'])
File "/home/yanmengqi/nmt-keras/src/keras/keras/legacy/interfaces.py", line 91, in wrapper
return func(*args, **kwargs)
File "/home/yanmengqi/nmt-keras/src/keras/keras/engine/training.py", line 2363, in fit_generator
callbacks.on_epoch_end(epoch, epoch_logs)
File "/home/yanmengqi/nmt-keras/src/keras/keras/callbacks.py", line 77, in on_epoch_end
callback.on_epoch_end(epoch, logs)
File "/home/yanmengqi/nmt-keras/src/keras-wrapper/keras_wrapper/extra/callbacks.py", line 264, in on_epoch_end
self.evaluate(epoch, counter_name='epoch')
File "/home/yanmengqi/nmt-keras/src/keras-wrapper/keras_wrapper/extra/callbacks.py", line 470, in evaluate
split=s)
File "/home/yanmengqi/nmt-keras/src/keras-wrapper/keras_wrapper/extra/evaluation.py", line 55, in get_coco_score
scorers.append((Meteor(language=extra_vars['language']), "METEOR"))
File "/home/yanmengqi/nmt-keras/src/coco-caption/pycocoevalcap/meteor/meteor.py", line 20, in init
stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE, env=d)
File "/usr/lib/python2.7/subprocess.py", line 711, in init
errread, errwrite)
File "/usr/lib/python2.7/subprocess.py", line 1343, in _execute_child
raise child_exception
OSError: [Errno 2] No such file or directory

I have faced this problem and searched for it, but found nothing. Can you tell me how to solve this problem?

error in the master version

  1. python main.py (no modifications to config.py)
...
[28/09/2017 08:00:38] Evaluating on metric coco
Traceback (most recent call last):
  File "main.py", line 410, in <module>
    train_model(parameters, args.dataset)
  File "main.py", line 161, in train_model
    nmt_model.trainNet(dataset, training_params)
  File "/home/user/nmt-keras/src/keras-wrapper/keras_wrapper/cnn_model.py", line 737, in trainNet
    self.__train(ds, params)
  File "/home/user/nmt-keras/src/keras-wrapper/keras_wrapper/cnn_model.py", line 928, in __train
    initial_epoch=params['epoch_offset'])
  File "/home/user/nmt-keras/keras/keras/legacy/interfaces.py", line 87, in wrapper
    return func(*args, **kwargs)
  File "/home/user/nmt-keras/keras/keras/engine/training.py", line 1947, in fit_generator
    callbacks.on_epoch_end(epoch, epoch_logs)
  File "/home/user/nmt-keras/keras/keras/callbacks.py", line 77, in on_epoch_end
    callback.on_epoch_end(epoch, logs)
  File "/home/user/nmt-keras/src/keras-wrapper/keras_wrapper/extra/callbacks.py", line 197, in on_epoch_end
    self.evaluate(epoch, counter_name='epoch')
  File "/home/user/nmt-keras/src/keras-wrapper/keras_wrapper/extra/callbacks.py", line 346, in evaluate
    split=s)
  File "/home/user/nmt-keras/src/keras-wrapper/keras_wrapper/extra/evaluation.py", line 55, in get_coco_score
    scorers.append((Meteor(language=extra_vars['language']), "METEOR"))
  File "/home/user/nmt-keras/coco-caption/pycocoevalcap/meteor/meteor.py", line 21, in __init__
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE, env=d)
  File "/home/user/miniconda2/lib/python2.7/subprocess.py", line 390, in __init__
    errread, errwrite)
  File "/home/user/miniconda2/lib/python2.7/subprocess.py", line 1024, in _execute_child
    raise child_exception
OSError: [Errno 2] No such file or directory

Solution: set JAVA_HOME manually; check the Java JRE installation.

  1. errors in h5py

Relevant to keras-team/keras#3426

Solution: uninstall h5py, then pip install h5py again.

Is this Python 2 only?

Tried installing requirements.txt, got:

  Downloading https://files.pythonhosted.org/packages/a9/f7/bcc7fec63ca0ddedf4e2a7f04c3261ff668ea2e580962a743de3a4569771/cloud-2.8.5.tar.gz (247kB)
    100% |████████████████████████████████| 256kB 19.4MB/s 
    Complete output from command python setup.py egg_info:
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "/private/var/folders/ch/v8wwjmsd66z8_x20jstpv8080000gn/T/pip-install-orgleqsd/cloud/setup.py", line 23
        print 'using regular setup tools'
                                        ^
    SyntaxError: Missing parentheses in call to 'print'. Did you mean print('using regular setup tools')?

some confusion about 4_nmt_model_tutorial

Hi @lvapeab

ctx_mean = MaskedMean()(annotations)
annotations = MaskLayer()(annotations) # We may want the padded annotations

I did not find an explanation of MaskedMean and MaskLayer in the Keras documentation. Can you give me a brief explanation? And what do you mean by "the padded annotations"?
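For context, this is how I picture a masked mean over timesteps (my own sketch in Keras 2 style, not the library's actual implementation; the class name is hypothetical):

from keras import backend as K
from keras.engine.topology import Layer

class MaskedMeanSketch(Layer):
    """Hypothetical re-implementation: average a (batch, time, features)
    tensor over time, ignoring the masked (padded) positions."""
    def __init__(self, **kwargs):
        self.supports_masking = True
        super(MaskedMeanSketch, self).__init__(**kwargs)

    def compute_mask(self, inputs, mask=None):
        return None  # the time axis is collapsed, so no mask is propagated

    def call(self, x, mask=None):
        if mask is None:
            return K.mean(x, axis=1)
        m = K.cast(mask, K.floatx())                 # (batch, time), 1 = real token
        summed = K.sum(x * m[:, :, None], axis=1)    # zero out padded timesteps
        counts = K.sum(m, axis=1, keepdims=True)     # number of real timesteps
        return summed / K.maximum(counts, 1.0)       # mean over unmasked steps only

    def compute_output_shape(self, input_shape):
        return (input_shape[0], input_shape[2])

MaskLayer would then presumably return the annotations with the mask applied (padded positions zeroed) instead of collapsing the time axis.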

Thanks!

models not saved after each epoch

I tried running your code as-is, but the models are not getting saved after each epoch. There is just one model saved, with the name EuTrans_esen_AttentionRNNEncoderDecoder_src_emb_32_bidir_True_enc_LSTM_32_dec_ConditionalLSTM_32_deepout_linear_trg_emb_32_Adam_0.001.

Please suggest a possible solution.
Thanks

a little problem about Dropout

Hi @lvapeab

Dropout consists of randomly setting a fraction p of input units to 0 at each update during training time.

I have understood the above. But why divide the remaining result by 1 - p, as described here?
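The division keeps the expected activation unchanged: a unit that survives with probability 1 - p and is scaled by 1/(1 - p) has expectation (1 - p) * x / (1 - p) = x, so nothing has to be rescaled at test time. A quick standalone numeric check (plain NumPy, independent of Keras):

import numpy as np

p = 0.5                      # fraction of units dropped
x = np.ones(1000000)         # pretend activations, all equal to 1.0

mask = np.random.rand(x.size) >= p   # each unit survives with probability 1 - p
y = x * mask / (1.0 - p)             # "inverted" dropout: rescale the survivors

print(y.mean())  # ~1.0: the expectation matches the undropped activation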

Regards

getting actual translations in test time

I have a trained model and now I want to use it to translate.
In the config file I changed MODE from training to sampling and RELOAD to my best epoch model.
The testing seems to be going fine, but all I get as output is a val.coco file.
Is there some way to also get the actual translations, like during training, where I get the actual translations on the validation set?

Thanks in advance; this GitHub project is wonderful, by the way :)

a small bug in 3_decoding_tutorial

Hi @lvapeab

params_prediction = {'batch_size': 50, 'n_parallel_loaders': 8, 'predict_on_sets': ['test'], 'beam_size': 12, 'maxlen': 50, 'model_inputs': ['source_text', 'state_below'], 'model_outputs': ['target_text'], 'dataset_inputs': ['source_text', 'state_below'], 'dataset_outputs': ['target_text'], 'normalize': True, 'alpha_factor': 0.6 }

may be modified to:

params_prediction = {'max_batch_size': 50, 'n_parallel_loaders': 8, 'predict_on_sets': ['test'], 'beam_size': 12, 'maxlen': 50, 'model_inputs': ['source_text', 'state_below'], 'model_outputs': ['target_text'], 'dataset_inputs': ['source_text', 'state_below'], 'dataset_outputs': ['target_text'], 'normalize': True, 'alpha_factor': 0.6 }

batch_size -> max_batch_size

Invalid requirement: --allow-external keras

I'd like to install the Interactive_NMT on my new Mac. I have downloaded the Multimodal_keras_wrapper following the steps, but now something goes wrong.
I am following this guide from your GitHub: https://github.com/lvapeab/multimodal_keras_wrapper/blob/master/README.md

I executed:
git clone https://github.com/MarcBS/multimodal_keras_wrapper.git
export PYTHONPATH=$PYTHONPATH:/path/to/multimodal_keras_wrapper

and when I run pip install -r requirements.txt, the following error happens:

Usage: pip [options]

Invalid requirement: --allow-external keras
pip: error: no such option: --allow-external

Thank you very much!

Any tips and tricks to up the performance?

Hi

I had managed to set up and run the NMT. However, the results do not look good. I am using ~20K sentences of training data for each language and 3K for evaluation, with the default out-of-the-box config, running for 20 epochs. Any tips to improve translation accuracy? The translated sentences are quite often too different from the original text in terms of context.

I also noticed that there are a few measurement scores such as BLEU-1 to BLEU-4, TER, ROUGE, etc. May I know what each represents?

run main.py with error

[25/05/2017 11:57:37] Training parameters: {'reload_epoch': 0, 'shuffle': True, 'verbose': 1, 'data_augmentation': False, 'class_weights': None, 'epochs_for_save': 1, 'lr_gamma': 0.8, 'start_eval_on_epoch': 1, 'n_epochs': 500, 'patience': 20, 'mean_substraction': True, 'normalize': False, 'n_parallel_loaders': 1, 'each_n_epochs': 1, 'patience_check_split': 'val', 'lr_decay': None, 'homogeneous_batches': False, 'batch_size': 50, 'metric_check': 'Bleu_4', 'num_iterations_val': None, 'epoch_offset': 0, 'joint_batches': 4, 'maxlen': 50, 'eval_on_epochs': True, 'eval_on_sets': [], 'extra_callbacks': [<keras_wrapper.extra.callbacks.EvalPerformance object at 0x7fc1dfd6fad0>, <keras_wrapper.extra.callbacks.Sample object at 0x7fc1e5723ed0>]}
ERROR (theano.gof.opt): SeqOptimizer apply <theano.scan_module.scan_opt.PushOutScanOutput object at 0x7fc216a1a110>
[25/05/2017 11:57:41] SeqOptimizer apply <theano.scan_module.scan_opt.PushOutScanOutput object at 0x7fc216a1a110>
ERROR (theano.gof.opt): Traceback:
[25/05/2017 11:57:41] Traceback:
ERROR (theano.gof.opt): Traceback (most recent call last):
File "/home/hanyaqian/anaconda2/lib/python2.7/site-packages/theano/gof/opt.py", line 235, in apply
sub_prof = optimizer.optimize(fgraph)
File "/home/hanyaqian/anaconda2/lib/python2.7/site-packages/theano/gof/opt.py", line 87, in optimize
ret = self.apply(fgraph, *args, **kwargs)
File "/home/hanyaqian/anaconda2/lib/python2.7/site-packages/theano/scan_module/scan_opt.py", line 685, in apply
node = self.process_node(fgraph, node)
File "/home/hanyaqian/anaconda2/lib/python2.7/site-packages/theano/scan_module/scan_opt.py", line 745, in process_node
node, args)
File "/home/hanyaqian/anaconda2/lib/python2.7/site-packages/theano/scan_module/scan_opt.py", line 854, in push_out_inner_vars
add_as_nitsots)
File "/home/hanyaqian/anaconda2/lib/python2.7/site-packages/theano/scan_module/scan_opt.py", line 906, in add_nitsot_outputs
reason='scanOp_pushout_output')
File "/home/hanyaqian/anaconda2/lib/python2.7/site-packages/theano/gof/toolbox.py", line 391, in replace_all_validate_remove
chk = fgraph.replace_all_validate(replacements, reason)
File "/home/hanyaqian/anaconda2/lib/python2.7/site-packages/theano/gof/toolbox.py", line 365, in replace_all_validate
fgraph.validate()
File "/home/hanyaqian/anaconda2/lib/python2.7/site-packages/theano/gof/toolbox.py", line 256, in validate_
ret = fgraph.execute_callbacks('validate')
File "/home/hanyaqian/anaconda2/lib/python2.7/site-packages/theano/gof/fg.py", line 589, in execute_callbacks
fn(self, *args, **kwargs)
File "/home/hanyaqian/anaconda2/lib/python2.7/site-packages/theano/gof/toolbox.py", line 422, in validate
raise theano.gof.InconsistencyError("Trying to reintroduce a removed node")
InconsistencyError: Trying to reintroduce a removed node

Epoch 1/500
9850/9900 [============================>.] - ETA: 0s - loss: 2.4758
[25/05/2017 12:00:49]
<<< Predicting outputs of val set >>>
Sampling 100/100 - ETA: 0s
Total cost of the translations: 203.730323 Average cost of the translations: 2.037303
The sampling took: 28.331956 secs (Speed: 0.283320 sec/sample)
[25/05/2017 12:01:17] Decoding beam search prediction ...
[25/05/2017 12:01:17] Evaluating on metric coco
bash: tercom.7.25: No such file or directory
Traceback (most recent call last):
File "main.py", line 370, in
train_model(parameters, args.dataset)
File "main.py", line 131, in train_model
nmt_model.trainNet(dataset, training_params)
File "build/bdist.linux-x86_64/egg/keras_wrapper/cnn_model.py", line 702, in trainNet
File "build/bdist.linux-x86_64/egg/keras_wrapper/cnn_model.py", line 864, in __train
File "/home/hanyaqian/anaconda2/lib/python2.7/site-packages/Keras-1.2.0-py2.7.egg/keras/engine/training.py", line 1623, in fit_generator
callbacks.on_epoch_end(epoch, epoch_logs)
File "/home/hanyaqian/anaconda2/lib/python2.7/site-packages/Keras-1.2.0-py2.7.egg/keras/callbacks.py", line 43, in on_epoch_end
callback.on_epoch_end(epoch, logs)
File "build/bdist.linux-x86_64/egg/keras_wrapper/extra/callbacks.py", line 185, in on_epoch_end
File "build/bdist.linux-x86_64/egg/keras_wrapper/extra/callbacks.py", line 320, in evaluate
File "build/bdist.linux-x86_64/egg/keras_wrapper/extra/evaluation.py", line 69, in get_coco_score
File "/home/hanyaqian/anaconda2/lib/python2.7/site-packages/Coco_Caption-0.0-py2.7.egg/pycocoevalcap/ter/ter.py", line 55, in compute_score
return float(score), None
ValueError: could not convert string to float:

Can you tell me some details about the error? Thanks very much!

attention mechanism

Can you please explain which attention mechanism you have used: global or local?
