This is the list of papers presented in the NLP bootcamp, one of the flipped school courses at Modulabs (모두의연구소), together with the presentation materials.
- Participants : 강재욱, 권성은, 김경환, 김동화, 김민섭, 김수정, 김승일, 모경현, 박정배, 박희경, 염혜원, 원종국, 윤훈상, 이명재, 이승재, 이일구, 이현준, 정미연, 최우정, 조주현
- Facilitator : 김보섭
If you read the papers in the order below, "A Neural Conversational Model" is optional, and "Convolutional Sequence to Sequence Learning" is best read after finishing the week06 papers. In other words, simply read week05 and week06 in swapped order.
Orientation
- Convolutional Neural Networks for Sentence Classification
- Presenter : 최우정
- Paper : https://arxiv.org/abs/1408.5882
- Material : Convolutional Neural Networks for Sentence Classification_최우정.pdf
- Character-level Convolutional Networks for Text Classification
- Presenter : 박정배
- Paper : https://arxiv.org/abs/1509.01626
- Material : Character-level convolutional networks for text classification_박정배.pdf
- Character-Aware Neural Language Models
- Presenter : 이일구
- Paper : https://arxiv.org/abs/1508.06615
- Material : Character-Aware Neural Language Models_이일구.pdf
- A Convolutional Neural Network for Modelling Sentences
- Presenter : 윤훈상
- Paper : https://arxiv.org/abs/1404.2188
- Material : A Convolutional Neural Network for Modelling Sentences_윤훈상.pdf
- Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation
- Presenter : 염혜원
- Paper : https://arxiv.org/abs/1406.1078
- Material : Learning Phrase Representation using RNN Encoder-Decoder for Statistical Machine Translation_염혜원.pdf
- Sequence to Sequence Learning with Neural Networks
- Presenter : 권성은
- Paper : https://arxiv.org/abs/1409.3215
- Material : Sequence to Sequence Learning with Neural Networks_권성은.pdf
- A Neural Conversational Model
- Presenter : 원종국
- Paper : https://arxiv.org/abs/1506.05869
- Material : A Neural Conversational Model_원종국.pdf
- Convolutional Sequence to Sequence Learning
- Presenter : 모경현
- Paper : https://arxiv.org/abs/1705.03122
- Material : Convolutional Sequence to Sequence Learning_모경현.pdf
- Neural Machine Translation by Jointly Learning to Align and Translate
- Presenter : 박희경
- Paper : https://arxiv.org/abs/1409.0473
- Material : Neural Machine Translation by Jointly Learning to Align and Translate_박희경.pdf
- Effective Approaches to Attention-based Neural Machine Translation
- Presenter : 김보섭
- Paper : https://arxiv.org/abs/1508.04025
- Material : Effective Approaches to Attention-based Neural Machine Translation_김보섭.pdf
- A Structured Self-attentive Sentence Embedding
- Presenter : 정미연
- Paper : https://arxiv.org/abs/1703.03130
- Material : A Structured Self-attentive Sentence Embedding_정미연.pdf
- Attention Is All You Need
- Presenter : 이승재
- Paper : https://arxiv.org/abs/1706.03762
- Material : Attention is All You Need_이승재.pdf
- Show and Tell: A Neural Image Caption Generator
- Presenter : 김경환
- Paper : https://arxiv.org/abs/1411.4555
- Material : Show and Tell_A Neural Image Caption Generator_김경환.pdf
- Show, Attend and Tell: Neural Image Caption Generation with Visual Attention
- Presenter : 이현준
- Paper : https://arxiv.org/abs/1502.03044
- Material : Show, Attend and Tell_Neural Image Caption Generation with Visual Attention_이현준.pdf
- Memory Networks
- Presenter : 이명재
- Paper : https://arxiv.org/abs/1410.3916
- Material : Memory Networks_이명재.pdf
- End-To-End Memory Networks
- Presenter : 조주현
- Paper : https://arxiv.org/abs/1503.08895
- Material : End-To-End Memory Networks_조주현.pdf
- Ask Me Anything: Dynamic Memory Networks for Natural Language Processing
- Presenter : 김승일
- Paper : https://arxiv.org/abs/1506.07285
- Material : Ask Me Anything_Dynamic Memory Networks for Natural Language Processing_김승일.pdf
- Enriching Word Vectors with Subword Information
- Presenter : 김보섭
- Paper : https://arxiv.org/abs/1607.04606
- Material : Enriching Word Vectors with Subword Information_김보섭.pdf
Graduation
- Deep contextualized word representations
- Presenter : 김보섭
- Paper : https://arxiv.org/abs/1802.05365
- Material : Deep contextualized word representations_김보섭.pdf
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
- Presenter : 김동화
- Paper : https://arxiv.org/abs/1810.04805
- Material : BERT_Pre-training of Deep Bidirectional Transformers for Language Understanding_김동화.pdf