
[Advanced ML & DL Week 4] BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding

needmorecaffeine 2022. 12. 31. 12:19

Paper link: BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding


Team blog post link: https://kubig-2022-2.tistory.com/86


Author: Taeyoung Kim (14th cohort)