CS-Pedia
CS Conference Hub

Deadlines, Rankings & Best Papers — all in one.

Acceptance rates · Best papers · Conference deadlines for CS researchers


ACL

NLP

Annual Meeting of the Association for Computational Linguistics

📍 San Diego, CA, USA · 2026-07-02 ~ 2026-07-07

Official Website · DBLP
Acceptance rate: 22% (2024, 1,060/4,822)
Keywords: LLM · Multimodal · Benchmark · Language Model · Efficient

Institutional Recognition

BK21: 4 points
KIISE: Highest grade
KAIST: Highest grade
SNU: ✓
POST: Highest grade

Deadlines

2026 (main)
📍 San Diego, CA, USA
📝 Paper: 2026-01-05
📬 Notification: 2026-04-04
📅 Conference: 2026-07-02 ~ 2026-07-07

Deadlines are subject to change. Confirm the final schedule on the official website before submitting.

Acceptance Rate

Year    Submitted    Accepted    Rate
2020    3,429        779         22.7%
2021    3,350        571         17.0%
2022    3,378        604         17.9%
2023    4,566        910         19.9%
2024    4,822        1,060       22.0%
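The published rates follow directly from the submitted/accepted counts above. As a quick sanity check, a minimal sketch that recomputes each year's rate (counts taken from the table; no other data assumed):

```python
# Recompute ACL acceptance rates (accepted / submitted) from the table above.
counts = {
    2020: (3429, 779),
    2021: (3350, 571),
    2022: (3378, 604),
    2023: (4566, 910),
    2024: (4822, 1060),
}

for year, (submitted, accepted) in sorted(counts.items()):
    rate = 100 * accepted / submitted
    print(f"{year}: {accepted}/{submitted} accepted -> {rate:.1f}%")
```

Rounded to one decimal place, the computed values match the published figures (e.g. 1,060/4,822 ≈ 22.0% for 2024).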


Best Papers

🏆 2025 Best Paper Award
A Theory of Response Sampling in LLMs: Part Descriptive and Part Prescriptive
Sivaprasad, Kaushik, Abdelnabi, Fritz
LLM · Theory
🏆 2025 Best Paper Award
Fairness through Difference Awareness: Measuring Desired Group Discrimination in LLMs
Wang, Phan, Ho, Koyejo
Fairness · LLM
🏆 2025 Best Paper Award
Language Models Resist Alignment: Evidence From Data Compression
Ji, Wang, Qiu, Chen, Zhou, Li, Lou, Dai, Liu, Yang
LLM · Alignment
🏆 2025 Best Paper Award
Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention
Yuan, Gao, Dai, Luo, Zhao, Zhang, Xie, Wei, Wang, Xiao, Wang, Ruan, Zhang, Liang, Zeng
Attention · Efficiency
🏆 2024 Best Paper Award
Aya Model: An Instruction Finetuned Open-Access Multilingual Language Model
Üstün et al.
Multilingual · Instruction Tuning
🏆 2024 Best Paper Award
Mission: Impossible Language Models
Kallini et al.
Language Models · Theory
🏆 2023 Best Paper Award
Do Androids Laugh at Electric Sheep? Humor Understanding Benchmarks from The New Yorker Caption Contest
Hessel et al.
Humor · Benchmark
🏆 2023 Best Paper Award
What the DAAM: Interpreting Stable Diffusion Using Cross Attention
Raphael Tang, Linqing Liu, Akshat Pandey, Zhiying Jiang, Gefei Yang, Karun Kumar, Pontus Stenetorp, Jimmy Lin
🏆 2023 Best Paper Award
From Pretraining Data to Language Models to Downstream Tasks: Tracking the Trails of Political Biases Leading to Unfair NLP Models
Shangbin Feng, Chan Young Park, Yuhan Liu, Yulia Tsvetkov
Bias · Language Models
🏆 2022 Best Paper Award
Learned Incremental Representations for Parsing
Kitaev et al.
Parsing · Representations
🏆 2022 Best Paper Award
Requirements and Motivations of Low-Resource Speech Synthesis for Language Revitalization
Aidan Pine, Dan Wells, Nathan Brinklow, Patrick William Littell, Korin Richmond
Speech Synthesis
🏆 2022 Best Paper Award
DiBiMT: A Novel Benchmark for Measuring Word Sense Disambiguation Biases in Machine Translation
Niccolò Campolungo, Federico Martelli, Francesco Saina, Roberto Navigli
Machine Translation · Benchmarking · Word Sense Disambiguation
🏆 2022 Best Paper Award
KinyaBERT: a Morphology-aware Kinyarwanda Language Model
Antoine Nzeyimana, Andre Niyongabo Rubungo
Language Models
🏆 2021 Best Paper Award
Vocabulary Learning via Optimal Transport for Neural Machine Translation
Jingjing Xu, Hao Zhou, Chun Gan, Zaixiang Zheng, Lei Li
NLP · Machine Translation · Optimal Transport
🏆 2020 Best Paper Award
Beyond Accuracy: Behavioral Testing of NLP Models with CheckList
Marco Tulio Ribeiro, Tongshuang Wu, Carlos Guestrin, Sameer Singh

CS-Pedia — an integrated conference platform for Korean CS researchers

Data sources: DBLP (CC0), OpenAlex (CC0), Semantic Scholar, aideadlin.es (MIT), National Research Foundation of Korea, Korean Institute of Information Scientists and Engineers (KIISE), jeffhuang.com

Always verify deadlines and conference schedules on the official website before submitting.
