Name: Sehun Heo
Type: User
Company: Hanwha Life
Bio: NLP developer who loves Python, PyTorch, and data.
I am very interested in developing artificial intelligence assistant systems.
Location: Seoul, South Korea
Blog: https://www.linkedin.com/in/sehun-heo-b6b255193
Sehun Heo's Projects
Things I studied in the second semester
AILAW presents a dataset and methodology for automatically extracting key information about criminal facts from one of the representative document types used by legal experts.
Instruct-tune LLaMA on consumer hardware
A collection of localized (Korean) AWS AI/ML workshop materials for hands-on labs.
Multilabel classification for the Toxic Comments challenge using BERT
An awesome README template to jumpstart your projects!
Build Python LLM apps in minutes ⚡️
The official repository of "ChatDB: Augmenting LLMs with Databases as Their Symbolic Memory".
A version of CoreUI-React-Template with the unnecessary parts removed so it can be used right away.
The Evaluation Framework for LLMs
Deep-learning study notes and hands-on exercises using Google Colab
Detectron2 is FAIR's next-generation platform for object detection, segmentation and other visual recognition tasks.
Parse all contents of a docx file with python-docx
Large language models (LLMs) made easy: EasyLM is a one-stop solution for pre-training, fine-tuning, evaluating, and serving LLMs in JAX/Flax.
A faster PyTorch implementation of Faster R-CNN
Studied GANs and built and trained a model myself.
Replacing the shit💩 new version of the feed with the old one
Materials to reproduce our findings in our stories, "Amazon Puts Its Own 'Brands' First Above Better-Rated Products" and "When Amazon Takes the Buy Box, it Doesn't Give it up"
Finetuning Pipeline
KakaoBrain KoGPT (Korean Generative Pre-trained Transformer)
KorSQuAD-pl provides transfer-learning code for extractive question answering on the Korean dataset KorQuAD and the English dataset SQuAD. KorSQuAD-pl is implemented with PyTorch Lightning.
☁️ KULLM (구름): a Korean-specialized LLM developed by Korea University
Large-scale language modeling tutorials with PyTorch
Easy framework for pre-training language models.
A framework for few-shot evaluation of autoregressive language models.
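The docx-parsing project above ("Parse all contents of a docx file with python-docx") can be sketched in a few lines. A minimal stdlib-only illustration, assuming only that a .docx is a zip archive whose main body is word/document.xml with visible text in <w:t> elements (python-docx wraps this same structure in a higher-level API; the function name here is hypothetical):

```python
# Illustrative sketch: read the raw text runs out of a .docx file.
# A .docx is a zip archive; the document body lives in word/document.xml,
# and each visible run of text sits inside a <w:t> element.
import zipfile
import xml.etree.ElementTree as ET

# WordprocessingML namespace used by <w:t> elements
WORD_NS = "{http://schemas.openxmlformats.org/wordprocessingml/2006/main}"

def extract_docx_text(path_or_file):
    """Return every visible text run from a .docx, in document order."""
    with zipfile.ZipFile(path_or_file) as zf:
        xml_bytes = zf.read("word/document.xml")
    root = ET.fromstring(xml_bytes)
    # every <w:t> element holds one run of visible text
    return [t.text or "" for t in root.iter(WORD_NS + "t")]
```

In practice python-docx (`Document(path).paragraphs`, `.tables`) is the more convenient route, since it also exposes styles and table structure rather than bare text runs.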