Comments (6)
I have checked the license and README and confirmed that everything should be fine.
I appreciate your swift response.
Thank you!
from japanese-pretrained-models.
Hi @singletongue! Many thanks to your group for sharing the code for bert-japanese.
Indeed, we have adapted part of your Wikipedia dataset construction code, and I am sincerely sorry for not complying with its license. We will take action soon and update our codebase with a clearly visible update note.
Best
Thank you for your swift response, @ZHAOTING.
I appreciate your action regarding this issue.
I'm glad that our code is being used in open-source projects like yours.
Best regards
@singletongue-san, in order to comply with the Apache 2.0 redistribution requirements, we have to 1) retain the original license notice in the adapted file/code, and 2) explicitly state our modifications.
Therefore, would you please add a short license notice comment (like this) at the beginning of your make_corpus_wiki.py file, so that we can use the exact same notice in our modified file?
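For illustration, a header along the following lines would cover both points. This is only a sketch of the standard Apache 2.0 boilerplate: the copyright line and the modification summary are placeholders, and the exact wording is for the original authors and downstream maintainers to decide.

```python
# This file is based on make_corpus_wiki.py from
# https://github.com/cl-tohoku/bert-japanese
# Copyright <year> <original copyright holder>  # placeholder
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Modifications by <downstream project>: <summary of changes>  # placeholder
```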
Thank you for your response, @ZHAOTING.
I have prepended the license notices to the code.
https://github.com/cl-tohoku/bert-japanese/blob/main/make_corpus_wiki.py
@singletongue Our license has been updated, and an update log has been added to the README.
Could you please check if you are okay with the current license situation and close this issue if things look fine?
Thank you!
Related Issues (10)
- Tensor size does not match HOT 3
- japanese-roberta-base/README.md typo HOT 1
- rinna RoBERTa's max_length is 510 not 512? HOT 4
- Can I use `rinna/japanese-roberta-base` through `AutoTokenizer` ? HOT 1
- Japanese Wikipedia dump link has changed HOT 1
- The load_docs_from_filepath method in src/task/pretrain_roberta/train.py just return empty list. HOT 2
- Please add "tokenizer_class" in "config.json" HOT 1
- Please update data's URL. HOT 2
- Train japanese-gpt2-xsmall from scratch HOT 1