pythainlp / attacut
A Fast and Accurate Neural Thai Word Segmenter
Home Page: https://pythainlp.github.io/attacut/
License: MIT License
Torch provides a way to save models in TorchScript. In this format, several computations can be further optimised, resulting in faster models.
More information: https://pytorch.org/docs/stable/jit.html#frequently-asked-questions
We might also do this for datasets (collated_func).
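As an illustration, a hedged sketch of a TorchScript export round trip. The Segmenter class below is a toy stand-in, not attacut's actual network; only the torch.jit save/load API is the point.

```python
import os
import tempfile

import torch

class Segmenter(torch.nn.Module):
    """Toy stand-in for a segmentation model (hypothetical architecture)."""
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(8, 2)

    def forward(self, x):
        return self.linear(x)

scripted = torch.jit.script(Segmenter().eval())   # compile to TorchScript

path = os.path.join(tempfile.mkdtemp(), "segmenter.pt")
torch.jit.save(scripted, path)    # self-contained archive, no Python class needed

loaded = torch.jit.load(path)     # loads without the Segmenter definition
out = loaded(torch.zeros(1, 8))
```

A scripted model can also be loaded from C++ via libtorch, which is part of why the format enables further optimisation.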
The datasets on https://pythainlp.github.io/attacut/training.html could not be accessed.
In addition, without taking a look at the datasets, the example in https://pythainlp.github.io/attacut/training.html#how-to-retrain-on-custom-dataset is not clear enough to follow (in my opinion).
Tokenizer(model = "attacut-sc").tokenize("วัดพระแก้วกรุงเทพ") and word_tokenize("วัดพระแก้วกรุงเทพ", engine="attacut") both return ["วัดพระแก้วกรุงเทพ"],
while Tokenizer(model = "attacut-c").tokenize("วัดพระแก้วกรุงเทพ") returns ["วัดพระแก้ว", "กรุงเทพ"].
It would be great if the difference between attacut-sc and attacut-c were explained on the main page at https://thainlp.org/pythainlp/docs/2.0/api/tokenize.html.
Can you provide the corpus? We cannot retrain this model on floydhub.com as described in the readme. Thanks a lot.
It would be faster if we could do multiple lines per inference, i.e. batching lines.
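Batching lines requires padding them to a common length before stacking them into one tensor. A minimal sketch of the padding step, using plain Python lists of token ids (pad_id and the function name are assumptions, not attacut's API):

```python
def pad_batch(seqs, pad_id=0):
    """Right-pad variable-length token-id sequences into a rectangular batch."""
    max_len = max(len(s) for s in seqs)
    return [s + [pad_id] * (max_len - len(s)) for s in seqs]

# three lines of different lengths become one rectangular batch
batch = pad_batch([[5, 3, 9], [7], [1, 2]])
```

The model would then run one forward pass over the whole batch, and the padded positions would be masked out of the predictions.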
I'm wondering if there is any reason that the nptyping dependency is pinned at <=0.3.1 while the latest version of nptyping is 1.4.0?
@cnlinxi sorry again for my late response. You can find the data at https://codeforthailand.s3-ap-southeast-1.amazonaws.com/attacut-related/data.zip
Please unzip it and make sure the root directory is at ./data. The archive contains:
Only the first two are relevant for training; sampling-0 means the whole dataset, while sampling-10 means only 10 files are used. You can use sampling-10 for quick training.
Before running the training command below, make sure that you have the ./artifacts directory.
python ./scripts/train.py --model-name seq_sy_ch_conv_concat \
--model-params "embc:8|embs:8|conv:8|l1:6|do:0.1" \
--data-dir ./data/best-syllable-crf-and-character-seq-feature-sampling-0 \
--output-dir ./artifacts/model-xx \
--epoch 2 \
--batch-size 1024 \
--lr 0.001 \
--lr-schedule "step:5|gamma:0.5"
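The --model-params and --lr-schedule values appear to be pipe-delimited key:value strings; the format below is inferred from the examples in the command, not confirmed against the training script, so treat this parser only as a sketch of the convention:

```python
def parse_params(spec: str) -> dict:
    """Parse a pipe-delimited "key:value|key:value" spec into a dict.

    Values containing a dot are read as floats, others as ints
    (an assumption based on the example specs above).
    """
    out = {}
    for pair in spec.split("|"):
        key, value = pair.split(":")
        out[key] = float(value) if "." in value else int(value)
    return out

params = parse_params("embc:8|embs:8|conv:8|l1:6|do:0.1")
schedule = parse_params("step:5|gamma:0.5")
```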
Originally posted by @heytitle in #20 (comment)
Env:
Version:
As the docs for pythainlp.tokenize.word_tokenize say, the Attacut tokenizer supports a custom_dict param. However, it does not seem to work properly.
from pythainlp.corpus.common import thai_words
from pythainlp.util import dict_trie
from pythainlp.tokenize import Tokenizer
custom_words_list = set(thai_words())
custom_words_list |= set(['ต้องการระบาย'])
trie = dict_trie(dict_source=custom_words_list)
_tokenizer = Tokenizer(custom_dict=trie, engine='attacut')
_tokenizer.word_tokenize('ต้องการระบาย')
output
['ต้องการ', 'ระบาย']
expected output
['ต้องการระบาย']
PS1. I also tested with pythainlp.tokenize.word_tokenize; it behaved the same as above.
PS2. The newmm and longest engines still work with the custom_dict param.
I've been using attacut to process a Thai chat corpus. The problem is that many texts contain hyperlinks, emoji, email addresses, etc. I detect these entities and replace them with placeholders before sending the text to attacut, hoping that attacut will leave them untouched. The placeholders I use are long English strings like "EMOJIPLACEHOLDER". This works for other tokenizers like deepcut, but attacut occasionally cuts the placeholder into pieces: "EMO JIPLACEHOLDER".
So is there any way that I can tell attacut not to cut a string in a line of text?
Or any way to work around this problem?
Thanks.
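One possible workaround is to shield the placeholders from the tokenizer entirely: split the text on the placeholders, tokenize only the surrounding segments, and pass the placeholders through unchanged. A sketch with a hypothetical helper (the tokenizer is injected as a callable, so this does not depend on attacut's API):

```python
import re

# the placeholder strings are examples from the report above
PLACEHOLDER = re.compile(r"(EMOJIPLACEHOLDER|URLPLACEHOLDER)")

def tokenize_protected(text, tokenize):
    """Tokenize text while passing placeholder strings through untouched."""
    tokens = []
    for part in PLACEHOLDER.split(text):
        if not part:
            continue                       # skip empty splits at the edges
        if PLACEHOLDER.fullmatch(part):
            tokens.append(part)            # placeholder survives as one token
        else:
            tokens.extend(tokenize(part))  # real text goes to the tokenizer
    return tokens

# usage with a dummy whitespace tokenizer standing in for attacut
tokens = tokenize_protected("hi EMOJIPLACEHOLDER there", lambda s: s.split())
```

With attacut you would pass something like Tokenizer().tokenize as the second argument.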
Using the tokenize function causes a UnicodeDecodeError from the load_dict function in utils.py.
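A likely cause is opening the dictionary file without an explicit encoding, so Windows falls back to a legacy codec. A hedged sketch of the fix, assuming load_dict reads one entry per line (the real function's signature may differ):

```python
import os
import tempfile

def load_dict(path):
    # An explicit encoding avoids UnicodeDecodeError on platforms whose
    # default codec is not UTF-8 (e.g. cp1252 on Windows).
    with open(path, "r", encoding="utf-8") as f:
        return [line.strip() for line in f if line.strip()]

# usage: write a small UTF-8 dictionary and read it back
path = os.path.join(tempfile.mkdtemp(), "words.txt")
with open(path, "w", encoding="utf-8") as f:
    f.write("วัด\nพระแก้ว\n")
words = load_dict(path)
```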
This is similar to https://github.com/rkcosmos/deepcut/blob/master/deepcut/deepcut.py#L23
This function will be a proxy function for other modules, such as PyThaiNLP, to consume.
It should receive text and instantiate an AttaCut tokenizer when needed.
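A minimal sketch of such a proxy as a lazy singleton (names are hypothetical; the real module would construct attacut's Tokenizer in the factory instead of the dummy used here):

```python
_TOKENIZER = None

def _make_tokenizer():
    # Stand-in factory; the real code would return attacut.Tokenizer().
    return lambda text: text.split("|")

def tokenize(text):
    """Module-level entry point for consumers such as PyThaiNLP."""
    global _TOKENIZER
    if _TOKENIZER is None:            # instantiate once, on first call
        _TOKENIZER = _make_tokenizer()
    return _TOKENIZER(text)
```

Deferring construction this way keeps the module import cheap: the model weights are only loaded the first time tokenize is actually called.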
Example from https://colab.research.google.com/drive/11nMfWmPGR_82voL37okn4XlxMPVbsu9r#scrollTo=v5sGX_dlQ2_B
It seems that spaces aren't tokenised properly. Please see the issue below:
|Blognone |Tomorrow |2019 |ประกาศ|ชื่อ |speaker |เพิ่มเติม |1 |ท่าน|คือ |คุณธนาธร |จึงรุ่งเรืองกิจ |หัวหน้า|พรรคอนาคต|ใหม่ |จะ|มา|พูด|ใน|หัวข้อ |Hyperloop |and |Path |Skipping |Development
|Strategy |หรือ|แปล|เป็น|ภาษา|ไทย|คือ |"|Hyperloop |กับ|การ|พัฒนา|แบบ|เสือ|กระโดด|"
Is there a way to add a custom dictionary, like deepcut?
I used pip install https://github.com/PyThaiNLP/attacut/archive/master.zip on Windows, but installation fails: torch can't be installed on Windows.
>pip install https://github.com/PyThaiNLP/attacut/archive/master.zip
Collecting https://github.com/PyThaiNLP/attacut/archive/master.zip
Downloading https://github.com/PyThaiNLP/attacut/archive/master.zip
\ 2.4MB 1.1MB/s
Requirement already satisfied: docopt==0.6.2 in c:\users\tc\anaconda3\lib\site-packages (from attacut==0.0.3.dev0) (0.6.2)
Collecting fire==0.1.3 (from attacut==0.0.3.dev0)
Downloading https://files.pythonhosted.org/packages/5a/b7/205702f348aab198baecd1d8344a90748cb68f53bdcd1cc30cbc08e47d3e/fire-0.1.3.tar.gz
Collecting nptyping==0.2.0 (from attacut==0.0.3.dev0)
Downloading https://files.pythonhosted.org/packages/a5/0f/9b44a1866c7911d03329669d82d2ebb1b8e6dac15803fdb6588549a44193/nptyping-0.2.0-py3-none-any.whl
Collecting numpy==1.17.0 (from attacut==0.0.3.dev0)
Downloading https://files.pythonhosted.org/packages/26/26/73ba03b2206371cdef62afebb877e9ba90a1f0dc3d9de22680a3970f5a50/numpy-1.17.0-cp37-cp37m-win_amd64.whl (12.8MB)
|████████████████████████████████| 12.8MB 3.3MB/s
Requirement already satisfied: python-crfsuite==0.9.6 in c:\users\tc\anaconda3\lib\site-packages (from attacut==0.0.3.dev0) (0.9.6)
Collecting pyyaml==5.1.2 (from attacut==0.0.3.dev0)
Downloading https://files.pythonhosted.org/packages/bc/3f/4f733cd0b1b675f34beb290d465a65e0f06b492c00b111d1b75125062de1/PyYAML-5.1.2-cp37-cp37m-win_amd64.whl (215kB)
|████████████████████████████████| 225kB 3.2MB/s
Requirement already satisfied: six==1.12.0 in c:\users\tc\anaconda3\lib\site-packages (from attacut==0.0.3.dev0) (1.12.0)
Collecting ssg==0.0.4 (from attacut==0.0.3.dev0)
Downloading https://files.pythonhosted.org/packages/05/e0/226b4fb9144d80a3efc474e581097d77abc4e8c3ce8e751469cb1c25e671/ssg-0.0.4-py3-none-any.whl (473kB)
|████████████████████████████████| 481kB 2.2MB/s
Collecting torch==1.2.0 (from attacut==0.0.3.dev0)
ERROR: Could not find a version that satisfies the requirement torch==1.2.0 (from attacut==0.0.3.dev0) (from versions: 0.1.2, 0.1.2.post1, 0.1.2.post2)
ERROR: No matching distribution found for torch==1.2.0 (from attacut==0.0.3.dev0)
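At the time, torch wheels for Windows were not published on PyPI, so pip cannot resolve torch==1.2.0. A common workaround is to install torch from the official PyTorch wheel index first, then install attacut (commands assume the PyTorch wheel index URL is reachable):

```shell
pip install torch==1.2.0 -f https://download.pytorch.org/whl/torch_stable.html
pip install https://github.com/PyThaiNLP/attacut/archive/master.zip
```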