We are interested in developing non-English LLMs and transferring knowledge from large language models to smaller ones.
- Development of Solar-based Self-Introduction Correction LLM using SFT & DPO (DPO sketch below this list)
2024.02 ~ 2024.04
- Development of Gemma-2B Korean Pre-trained Model
2024.04 ~ 2024.06
- Application of Quantization to an LLM using llama.cpp (GGUF conversion sketch below this list)
2024.06 ~ 2024.06
- Development of Korean LLaVA Model using Chat Vector (weight-arithmetic sketch below this list)
2024.06 ~ 2024.07
- Development of Korean Financial LLM Leaderboard
2024.07 ~ 2024.08
- Application of Quantization to an LLM using TensorRT-LLM & Triton (engine-build sketch below this list)
2024.08 ~ 2024.09
- Development of Gemma2-2B Korean Pre-trained Model (Unsloth continual pretraining; sketch below this list)
2024.09 ~ 2024.09
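
For the Solar self-introduction project, the pipeline is supervised fine-tuning followed by DPO. Below is a minimal sketch of the DPO stage with TRL; the checkpoint and dataset names are placeholders, and TRL's argument names (e.g. `processing_class` vs. `tokenizer`) vary across releases.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

# Hypothetical SFT checkpoint of a Solar-family model.
model_id = "my-org/solar-self-intro-sft"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Preference pairs with "prompt", "chosen", "rejected" columns (hypothetical file).
dataset = load_dataset("json", data_files="self_intro_prefs.jsonl", split="train")

trainer = DPOTrainer(
    model=model,  # ref_model omitted: TRL uses a frozen copy as the reference
    args=DPOConfig(output_dir="solar-self-intro-dpo", beta=0.1),
    train_dataset=dataset,
    processing_class=tokenizer,  # called `tokenizer` in older TRL releases
)
trainer.train()
```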
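
The llama.cpp quantization project follows the usual two-step flow: convert the Hugging Face checkpoint to GGUF, then quantize. A sketch assuming a local llama.cpp build; the model directory is a placeholder, and the script/binary names have shifted between llama.cpp versions.

```python
import subprocess

# 1) Convert the HF checkpoint to a 16-bit GGUF file (placeholder paths).
subprocess.run(
    ["python", "llama.cpp/convert_hf_to_gguf.py", "models/gemma-2b-ko",
     "--outfile", "gemma-2b-ko-f16.gguf", "--outtype", "f16"],
    check=True,
)

# 2) Quantize the GGUF file; Q4_K_M is a common 4-bit mixed-precision scheme.
subprocess.run(
    ["llama.cpp/llama-quantize", "gemma-2b-ko-f16.gguf",
     "gemma-2b-ko-Q4_K_M.gguf", "Q4_K_M"],
    check=True,
)
```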
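
Chat Vector equips a model in a new language with chat ability through weight arithmetic: subtract the base model from its chat-tuned variant and add that difference to a continually pretrained model. A sketch of the core step, with a hypothetical Korean CPT checkpoint id; the LLaVA project applies the same idea to the language model inside LLaVA.

```python
import torch
from transformers import AutoModelForCausalLM

# Base model, its chat-tuned variant, and a Korean continual-pretrained (CPT)
# model derived from the same base (the CPT checkpoint id is a placeholder).
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf", torch_dtype=torch.bfloat16)
chat = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-chat-hf", torch_dtype=torch.bfloat16)
ko = AutoModelForCausalLM.from_pretrained("my-org/llama-2-7b-ko-cpt", torch_dtype=torch.bfloat16)

base_sd, chat_sd, ko_sd = base.state_dict(), chat.state_dict(), ko.state_dict()
for name, ko_param in ko_sd.items():
    # Skip tensors whose shapes diverged, e.g. embeddings after vocab extension.
    if name in base_sd and ko_param.shape == base_sd[name].shape:
        ko_param += chat_sd[name] - base_sd[name]  # add the chat vector in place

ko.save_pretrained("llama-2-7b-ko-chat")
```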
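
For TensorRT-LLM & Triton, quantization happens at checkpoint conversion, and the result is compiled into an engine that Triton's TensorRT-LLM backend serves. A rough sketch using the repo's example scripts; the paths and flags track the examples in the TensorRT-LLM repository and change between releases, so treat them as illustrative.

```python
import subprocess

# 1) Convert HF weights to a TensorRT-LLM checkpoint with int8 weight-only
#    quantization (example script path and model dir are placeholders).
subprocess.run(
    ["python", "TensorRT-LLM/examples/llama/convert_checkpoint.py",
     "--model_dir", "models/my-llm", "--output_dir", "ckpt-int8",
     "--dtype", "float16",
     "--use_weight_only", "--weight_only_precision", "int8"],
    check=True,
)

# 2) Compile the checkpoint into a TensorRT engine; the engine directory is
#    then mounted into Triton's tensorrtllm backend model repository.
subprocess.run(
    ["trtllm-build", "--checkpoint_dir", "ckpt-int8",
     "--output_dir", "engine-int8", "--gemm_plugin", "float16"],
    check=True,
)
```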
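
The Gemma2-2B project uses Unsloth's continual-pretraining recipe: load the model in 4-bit, attach LoRA adapters (including `embed_tokens`/`lm_head`, which Unsloth recommends for new-language pretraining), and train on raw Korean text. A sketch with a hypothetical corpus file; exact arguments depend on the unsloth/trl versions installed.

```python
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    "unsloth/gemma-2-2b",   # Unsloth-hosted Gemma-2 checkpoint
    max_seq_length=2048,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj",
                    "embed_tokens", "lm_head"],  # embeddings too, for continual pretraining
)

dataset = load_dataset("text", data_files="korean_corpus.txt", split="train")  # hypothetical corpus

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=8,
        learning_rate=2e-5,
        max_steps=1000,
        output_dir="gemma2-2b-ko-cpt",
    ),
)
trainer.train()
```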
- Machine Learning / Deep Learning
- LLM Pretraining / Fine-Tuning
- Quantization / Knowledge Distillation
- 🥇 2023 AI Content Convergence Lab AI Convergence Content Contest (AI+ Content Results category) - 1st place [overview]
- 🥈 2024 National Institute of Korean Language AI Korean Proficiency Evaluation Competition - Excellence Award, Dialogue Context Inference (나) track [overview]
- 🥉 2024 National Institute of Korean Language AI Korean Proficiency Evaluation Competition - Special Award, Daily Conversation Summarization (나) track [overview]
- 2024 Dacon Fiscal Information AI Search Algorithm Competition - Top 9.5% (34/359) [overview]