Tatsuki Kuribayashi (栗林樹生)

My research interests lie in leveraging NLP techniques to understand humans and language. I especially focus on identifying cognitive biases in human language processing behavior and in language design, analyzing NLP models from linguistic and/or neuro-symbolic perspectives, and developing systems that support our language activities (especially from the standpoint of a non-native English speaker).

News

  • 2024/08: The Workshop on Cognitive Modeling and Computational Linguistics (CMCL 2024) will be held at ACL 2024, Thailand. Organizers: Tatsuki Kuribayashi, Giulia Rambelli, Ece Takmaz, Philipp Wicke, Yohei Oseki. [cfp][web]
  • 2024/05: I will visit and give a talk at ETH Zürich.
  • 2024/03: I gave a talk about our NAACL paper at The Hong Kong Polytechnic University. [slides]
  • 2024/03: I serve as an action editor for ACL Rolling Review (Interpretability and Analysis of Models for NLP / Linguistic Theories, Cognitive Modeling and Psycholinguistics).
  • 2024/02: Selected as a great reviewer for ACL Rolling Review (Linguistic Theories, Cognitive Modeling and Psycholinguistics), October 2023.
  • 2023/04: I joined MBZUAI as a postdoctoral researcher.

Publications

Preprints

  • Tatsuki Kuribayashi, Ryo Ueda, Ryo Yoshida, Yohei Oseki, Ted Briscoe, Timothy Baldwin.
    "Emergent Word Order Universals from Cognitively-Motivated Language Models"
    [arXiv]
  • Tatsuki Kuribayashi.
    "Does Vision Accelerate Hierarchical Generalization of Neural Language Learners?"
    [arXiv] (on hold due to an affiliation change)

Refereed papers

  • Tatsuki Kuribayashi, Yohei Oseki, Timothy Baldwin.
    "Psychometric Predictive Power of Large Language Models"
    Findings of the 2024 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL 2024, Findings long), 2024/06.
    [arXiv]
  • Yukiko Ishizuki, Tatsuki Kuribayashi, Yuichiroh Matsubayashi, Ryohei Sasano and Kentaro Inui.
    "To Drop or Not to Drop? Predicting Argument Ellipsis Judgments: A Case Study in Japanese."
    In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024, long), 2024/05.
    [paper (to appear)]
  • Goro Kobayashi, Tatsuki Kuribayashi, Sho Yokoi, Kentaro Inui.
    "Analyzing Feed-Forward Blocks in Transformers through the Lens of Attention Map."
    In Proceedings of the 12th International Conference on Learning Representations (ICLR 2024, spotlight), 2024/05.
    [arXiv]
  • Mengyu Ye, Tatsuki Kuribayashi, Jun Suzuki, Hiroaki Funayama and Goro Kobayashi.
    "Assessing Step-by-Step Reasoning against Lexical Negation: A Case Study on Syllogism."
    In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP-2023, main short), 2023/12.
    [paper | arXiv]
    • Mengyu Ye, Tatsuki Kuribayashi, Jun Suzuki, Hiroaki Funayama and Goro Kobayashi.
      "Assessing Chain-of-Thought Reasoning against Lexical Negation: A Case Study on Syllogism."
      In Proceedings of the Student Research Workshop (SRW) at the 61st Annual Meeting of the Association for Computational Linguistics (ACL 2023 SRW, non-archival, best paper award), 2023/07.
  • Takumi Ito, Naomi Yamashita, Tatsuki Kuribayashi, Masatoshi Hidaka, Jun Suzuki, Ge Gao, Jack Jamieson and Kentaro Inui.
    "Use of an AI-powered Rewriting Support Software in Context with Other Tools: A Study of Non-Native English Speakers."
    In Proceedings of the ACM Symposium on User Interface Software and Technology (UIST 2023), 2023/10.
    [paper]
  • Miyu Oba, Tatsuki Kuribayashi, Hiroki Ouchi, Taro Watanabe.
    "Second Language Acquisition of Neural Language Models."
    Findings of the 61st Annual Meeting of the Association for Computational Linguistics (ACL-2023, Findings long), 2023/07. (acceptance rate: top 39.1%)
    [paper | arXiv]
  • Goro Kobayashi, Tatsuki Kuribayashi, Sho Yokoi and Kentaro Inui.
    "Transformer Language Models Handle Word Frequency in Prediction Head."
    Findings of the 61st Annual Meeting of the Association for Computational Linguistics (ACL-2023, Findings short), 2023/07. (acceptance rate: top 39.1%)
    [paper | arXiv]
  • Keito Kudo, Yoichi Aoki, Tatsuki Kuribayashi, Ana Brassard, Masashi Yoshikawa, Keisuke Sakaguchi and Kentaro Inui.
    "Do Deep Neural Networks Capture Compositionality in Arithmetic Reasoning?"
    In Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics (EACL-2023, main short), 2023/05. (acceptance rate: 281/1166=24.1%)
    [paper | arXiv]
  • Yoichi Aoki, Keito Kudo, Tatsuki Kuribayashi, Ana Brassard, Masashi Yoshikawa, Keisuke Sakaguchi and Kentaro Inui.
    "Empirical Investigation of Neural Symbolic Reasoning Strategies."
    Findings of the 17th Conference of the European Chapter of the Association for Computational Linguistics (EACL-2023, Findings short), 2023/05. (acceptance rate: top 482/1166=41.3%)
    [paper | arXiv]
    • Yoichi Aoki, Keito Kudo, Tatsuki Kuribayashi, Ana Brassard, Masashi Yoshikawa, Keisuke Sakaguchi and Kentaro Inui.
      "Empirical Investigation of Neural Symbolic Reasoning Strategies."
      Non-archival submission for the 2022 AACL-IJCNLP Student Research Workshop (AACL-IJCNLP SRW Non-archival, best paper award), 2022/11.
  • Tatsuki Kuribayashi, Yohei Oseki, Ana Brassard, Kentaro Inui.
    "Context Limitations Make Neural Language Models More Human-Like."
    In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing (EMNLP-2022, main long), pp.10421-10436, 2022/12. (acceptance rate: 829/4190=20%)
    [paper | arXiv]
  • Riki Fujihara, Tatsuki Kuribayashi, Kaori Abe, Ryoko Tokuhisa, Kentaro Inui.
    "Topicalization in Language Models: A Case Study on Japanese."
    In Proceedings of the 29th International Conference on Computational Linguistics (COLING-2022, long), pp.851-862, 2022/10 (acceptance rate: 522/1563=33.4%).
    [paper]
    • Riki Fujihara, Tatsuki Kuribayashi, Kaori Abe, Kentaro Inui.
      "Topicalization in Language Models: A Case Study on Japanese."
      In Proceedings of the Student Research Workshop at the Joint Conference of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (ACL-IJCNLP-2021 SRW, non-archival), 2021/08 (acceptance rate: 39%).
  • Goro Kobayashi, Tatsuki Kuribayashi, Sho Yokoi, Kentaro Inui.
    "Incorporating Residual and Normalization Layers into Analysis of Masked Language Models."
    In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP 2021, main long), pp. 4547-4568, 2021/11 (acceptance rate: 23.3%).
    [paper | arXiv | code]
  • Hiroki Ouchi, Jun Suzuki, Sosuke Kobayashi, Sho Yokoi, Tatsuki Kuribayashi, Masashi Yoshikawa, Kentaro Inui.
    "Instance-Based Neural Dependency Parsing."
    Transactions of the Association for Computational Linguistics 2021 (TACL 2021), 2021/09.
    [paper | arXiv]
  • Tatsuki Kuribayashi, Yohei Oseki, Takumi Ito, Ryo Yoshida, Masayuki Asahara, Kentaro Inui.
    "Lower Perplexity is Not Always Human-Like."
    In Proceedings of the Joint Conference of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (ACL-IJCNLP 2021, main long), pp. 5203-5217, 2021/08 (acceptance rate: 21.3%).
    [paper | code]
  • Tatsuki Kuribayashi, Hiroki Ouchi, Naoya Inoue, Jun Suzuki, Paul Reisert, Toshinori Miyoshi, Kentaro Inui.
    "Span Representations in Argumentation Structure Parsing" (in Japanese).
    Journal of Natural Language Processing (domestic journal), Volume 27, Number 4, pp.753-780, December 2020.
    [paper | slides]
  • Takaki Otake, Sho Yokoi, Naoya Inoue, Ryo Takahashi, Tatsuki Kuribayashi, Kentaro Inui.
    "Modeling Event Salience in a Narrative Based on Barthes' Cardinal Function."
    In Proceedings of the 28th International Conference on Computational Linguistics (COLING-2020, short), pp. 1784-1794, 2020/12 (acceptance rate: 26.2%).
    [paper | arXiv]
  • Goro Kobayashi, Tatsuki Kuribayashi, Sho Yokoi, Kentaro Inui.
    "Attention is Not Only a Weight: Analyzing Transformers with Vector Norms."
    In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP-2020, main long), pp. 7057-7075, 2020/11 (acceptance rate: 754/3359=22.4%).
    [paper | arXiv | code]
    • Goro Kobayashi, Tatsuki Kuribayashi, Sho Yokoi, Kentaro Inui.
      "Self-Attention is Not Only a Weight: Analyzing BERT with Vector Norms."
      In Proceedings of the Student Research Workshop at the 58th Annual Meeting of the Association for Computational Linguistics (ACL-SRW-2020, non-archival), 2020/07 (acceptance rate: 72/202=35.6%).
  • *Takumi Ito, *Tatsuki Kuribayashi, *Masatoshi Hidaka, Jun Suzuki, Kentaro Inui. (* equal contribution)
    "Langsmith: An Interactive Academic Text Revision System."
    In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP-2020, demo), pp. 216-226, 2020/11.
    [paper | arXiv | demo]
  • Takumi Ito, Tatsuki Kuribayashi, Hayato Kobayashi, Ana Brassard, Masato Hagiwara, Jun Suzuki, Kentaro Inui.
    "Assisting Authors to Convert Raw Products into Polished Prose."
    Journal of Cognitive Science, Vol.21, No.1, pp.99-135, 2020.
    [paper]
  • Tatsuki Kuribayashi, Takumi Ito, Jun Suzuki, Kentaro Inui.
    "Language Models as an Alternative Evaluator of Word Order Hypotheses: A Case Study in Japanese."
    In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL-2020, main long), pp. 6452-6459, 2020/07 (acceptance rate: 779/3429=22.7%).
    [paper | updated version | data | presentation source]
  • Hiroki Ouchi, Jun Suzuki, Sosuke Kobayashi, Sho Yokoi, Tatsuki Kuribayashi, Ryuto Konno, Kentaro Inui.
    "Instance-Based Learning of Span Representations: A Case Study through Named Entity Recognition."
    In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL-2020, short), pp. 6452-6459, 2020/07 (acceptance rate: 208/1185=17.6%).
    [paper | arXiv]
  • *Takumi Ito, *Tatsuki Kuribayashi, Hayato Kobayashi, Ana Brassard, Masato Hagiwara, Jun Suzuki, Kentaro Inui. (* equal contribution)
    "Diamonds in the Rough: Generating Fluent Sentences from Early-stage Drafts for Academic Writing Assistance."
    In Proceedings of the 12th International Conference on Natural Language Generation (INLG-2019), pp. 40-53, 2019/10 (acceptance rate: 73/143=51.0%).
    [paper | arXiv | data]
  • Masato Hagiwara, Takumi Ito, Tatsuki Kuribayashi, Jun Suzuki, Kentaro Inui.
    "TEASPN: Framework and Protocol for Integrated Writing Assistance Environments."
    In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing (EMNLP-IJCNLP 2019, demo), pp. 229-234, 2019/11.
    [paper | arXiv | code]
  • Tatsuki Kuribayashi, Hiroki Ouchi, Naoya Inoue, Paul Reisert, Toshinori Miyoshi, Jun Suzuki, Kentaro Inui.
    "An Empirical Study of Span Representations in Argumentation Structure Parsing."
    In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL-2019, short), pp. 4691-4698, 2019/07 (acceptance rate: 213/1163=18.2%).
    [paper | code]
  • Paul Reisert, Naoya Inoue, Tatsuki Kuribayashi, Kentaro Inui.
    "Feasible Annotation Scheme for Capturing Policy Argument Reasoning using Argument Templates."
    In Proceedings of the 5th Workshop on Argument Mining, pp.79-89, 2018/11 (acceptance rate: 18/32=56.3%).
    [paper | data]

Awards

Grant

  • 2023/04-2026/03 (suspended due to overseas affiliation): Grant-in-Aid for Early-Career Scientists, 科学研究費助成事業若手研究. 「人間らしい言語獲得効率に対するマルチモーダル言語処理を通した構成論的探求」
  • 2020/04-2022/03: Doctoral Course (DC) Research Fellowships, 日本学術振興会特別研究員(DC1) (面接免除内定, 54/277=19.5%). 「テクストの数理的モデリングと、数理モデルを通したテクストらしさの解明への挑戦」

Education/affiliation

  • 2023/04-: Postdoctoral research fellow at MBZUAI (Advisor: Prof. Timothy Baldwin).
  • 2022/04-2023/03: Specially-appointed research fellow at the Tohoku NLP Group.
  • 2020/04-2022/03: PhD student in Information Science, Graduate School of Information Sciences, Tohoku University, Miyagi, Japan. (Supervisor: Prof. Kentaro Inui; thesis title: Exploring Cognitive Plausibility of Neural NLP Models: Cross-Linguistic and Discourse-Level Studies.)
    [thesis]
  • 2018/04-2020/03: Master's student in Information Science, Graduate School of Information Sciences, Tohoku University, Miyagi, Japan. (Supervisor: Prof. Kentaro Inui)
    [thesis]
  • 2018/04-2022/03: Graduate Program in Data Science, Tohoku University.
  • 2014/04-2018/03: Bachelor of Engineering, Department of Information and Intelligent Systems, Tohoku University, Miyagi, Japan. (Supervisor: Prof. Kentaro Inui)

Activities/organizer

Talks/writings

  • Advanced NLP Seminar (最先端NLP勉強会), 2023/08. [slides] [slides]
  • "Paper Writing Support with Natural Language Processing" (invited talk, in Japanese). Tohoku University (hosted by the Graduate School Reform Promotion Center), 2023/02. [slides]
  • Talk at MBZUAI: "Cognitive Plausibility of Neural Language Models," 2022/12. [slides]
  • Advanced NLP Seminar (最先端NLP勉強会), 2022/09. [slides]
  • Advanced NLP Seminar (最先端NLP勉強会), 2021/09. [slides]
  • Tatsuki Kuribayashi. "Lower Perplexity is Not Always Human-Like" (talk). NLP Colloquium (NLPコロキウム), 2021/06. [video]
  • Tatsuki Kuribayashi. Society column on "Span Representations in Argumentation Structure Parsing" (in Japanese). Journal of Natural Language Processing, Volume 28, Number 2, pp.677-681, 2021/06.
  • Tatsuki Kuribayashi. Society column on "Language Models as an Alternative Evaluator of Word Order Hypotheses: A Case Study in Japanese" (in Japanese). Journal of Natural Language Processing, Volume 27, Number 3, pp.671-676, 2020/09.
  • Advanced NLP Seminar (最先端NLP勉強会), 2020/09. [slides]

Tools

  • Langsmith Editor
    [system | usage]
  • TEASPN (Text Editing Assistance Smartness Protocol for Natural Language)
    [code | document]

Reviewer

  • ACL Rolling Review: reviewer and action editor (Interpretability and Analysis of Models for NLP / Linguistic Theories, Cognitive Modeling and Psycholinguistics).
  • ACL: 2019 (secondary), 2020 (secondary), 2021 (Sentiment Analysis, Stylistic Analysis, and Argument Mining), 2023 (Linguistic Theories, Cognitive Modeling and Psycholinguistics).
  • EMNLP: 2021 (Linguistic Theories, Cognitive Modeling and Psycholinguistics; Sentiment Analysis, Stylistic Analysis, and Argument Mining), 2022 (Linguistic Theories, Cognitive Modeling and Psycholinguistics).
  • COLING: 2020, 2022, 2024 (Language Modeling; Integrated Systems and Applications).
  • LREC: 2022, 2024 (Language Resources and Evaluation for Psycholinguistics, Cognitive Linguistics and Linguistic Theories; Natural Language Generation (including Summarization)).
  • INLG: 2020, 2021.
  • Annual Meeting of the Association for Natural Language Processing (domestic conference); Journal of Natural Language Processing (domestic journal).

Domestic conferences

  • Tatsuki Kuribayashi, Yohei Oseki, Timothy Baldwin. "Is the Sentence Processing of Large Language Models Human-Like?" The 30th Annual Meeting of the Association for Natural Language Processing (NLP2024), 4 pages, 2024/03.
  • Tatsuki Kuribayashi, Ryo Ueda, Ryo Yoshida, Yohei Oseki, Ted Briscoe, Timothy Baldwin. "What Kinds of Language Models End Up Learning Impossible Languages? A Case Study on Word Order Universals." The 30th Annual Meeting of the Association for Natural Language Processing (NLP2024), 4 pages, 2024/03.
  • Yoichi Aoki, Keito Kudo, Shusaku Sone, Tatsuki Kuribayashi, Masaya Taniguchi, Keisuke Sakaguchi, Kentaro Inui. "Dynamic Changes in Search Strategies during Chain-of-Thought Reasoning in Language Models." The 30th Annual Meeting of the Association for Natural Language Processing (NLP2024), 4 pages, 2024/03.
  • Keito Kudo, Yoichi Aoki, Tatsuki Kuribayashi, Masaya Taniguchi, Shusaku Sone, Keisuke Sakaguchi, Kentaro Inui. "The Internal Mechanisms of Autoregressive Language Models on Arithmetic Reasoning Problems." The 30th Annual Meeting of the Association for Natural Language Processing (NLP2024), 4 pages, 2024/03.
  • Mengyu Ye, Tatsuki Kuribayashi, Goro Kobayashi, Jun Suzuki. "Estimating the Contribution of In-Context Examples in In-Context Learning." The 30th Annual Meeting of the Association for Natural Language Processing (NLP2024), 4 pages, 2024/03.
  • Yuma Toji, Jun Takahashi, Sho Yokoi, Tatsuki Kuribayashi, Ryo Ueda, Hideyuki Miyahara. "Phase Transitions in Context-Sensitive Languages with Long-Range Interactions: Understanding Emergence in Language Models from a Statistical Mechanics Perspective." The 30th Annual Meeting of the Association for Natural Language Processing (NLP2024), 4 pages, 2024/03.
  • Tatsuki Kuribayashi. "A Picture Is Worth a Thousand Words? Does Visual Information Teach Language Models the Hierarchical Structure of Sentences?" The 29th Annual Meeting of the Association for Natural Language Processing (NLP2023), 4 pages, 2023/03.
  • Goro Kobayashi, Tatsuki Kuribayashi, Sho Yokoi, Kentaro Inui. "Frequency Correction by the Prediction-Head Bias in Transformer Language Models." The 29th Annual Meeting of the Association for Natural Language Processing (NLP2023), 4 pages, 2023/03.
  • Mengyu Ye, Tatsuki Kuribayashi, Hiroaki Funayama, Jun Suzuki. "Large Language Models' Understanding of Negation under Chain-of-Thought Instructions." The 29th Annual Meeting of the Association for Natural Language Processing (NLP2023), 4 pages, 2023/03.
  • Riki Fujihara, Tatsuki Kuribayashi, Ryoko Tokuhisa, Kentaro Inui. "Contrasting Humans and Language Models on Topicalization." The 29th Annual Meeting of the Association for Natural Language Processing (NLP2023), 4 pages, 2023/03.
  • Yukiko Ishizuki, Tatsuki Kuribayashi, Yuichiroh Matsubayashi, Ryohei Sasano, Kentaro Inui. "Annotating and Modeling Japanese Speakers' Argument Ellipsis Judgments." The 29th Annual Meeting of the Association for Natural Language Processing (NLP2023), 4 pages, 2023/03.
  • Miyu Oba, Tatsuki Kuribayashi, Hiroki Ouchi, Taro Watanabe. "Second Language Acquisition of Language Models." The 29th Annual Meeting of the Association for Natural Language Processing (NLP2023), 4 pages, 2023/03.
  • Keito Kudo, Yoichi Aoki, Tatsuki Kuribayashi, Ana Brassard, Masashi Yoshikawa, Keisuke Sakaguchi, Kentaro Inui. "Compositional Reasoning Abilities of Neural Models on Arithmetic Problems." The 29th Annual Meeting of the Association for Natural Language Processing (NLP2023), 4 pages, 2023/03.
  • Yoichi Aoki, Keito Kudo, Ana Brassard, Tatsuki Kuribayashi, Masashi Yoshikawa, Keisuke Sakaguchi, Kentaro Inui. "How to Teach Reasoning Processes in Neural Symbolic Reasoning." The 29th Annual Meeting of the Association for Natural Language Processing (NLP2023), 4 pages, 2023/03.
  • Miyu Oba, Tatsuki Kuribayashi, Hiroki Ouchi, Taro Watanabe. "Second Language Acquisition Efficiency of Language Models." The 254th Meeting of the IPSJ SIG on Natural Language Processing (SIG-NL), 6 pages, 2022/11.
  • Yoichi Aoki, Keito Kudo, Tatsuki Kuribayashi, Ana Brassard, Masashi Yoshikawa, Kentaro Inui. "The Effect of Properties of the Reasoning Process on Neural Networks' Multi-Step Reasoning Ability." The 17th Symposium of the Young Researcher Association for NLP Studies (YANS), 2022/08.
  • Tatsuki Kuribayashi. "Does Visual Information Encourage Human-Like Syntactic Generalization in Language Models?" The 17th YANS Symposium, 2022/08.
  • Miyu Oba, Tatsuki Kuribayashi, Hiroki Ouchi, Taro Watanabe. "Second Language Acquisition Efficiency of Language Models." The 17th YANS Symposium, 2022/08.
  • Goro Kobayashi, Tatsuki Kuribayashi, Sho Yokoi, Kentaro Inui. "The Mixing Effect of Feed-Forward Networks in Transformers." The 17th YANS Symposium, 2022/08.
  • Tatsuki Kuribayashi, Yohei Oseki, Ana Brassard, Kentaro Inui. "The Excessive Working Memory of Neural Language Models." The 28th Annual Meeting of the Association for Natural Language Processing (NLP2022), pp.1530-1535, 2022/03.
  • Goro Kobayashi, Tatsuki Kuribayashi, Sho Yokoi, Kentaro Inui. "The Effect of Feed-Forward Networks in Transformers." The 28th Annual Meeting of the Association for Natural Language Processing (NLP2022), pp.1072-1077, 2022/03.
  • Yukiko Ishizuki, Tatsuki Kuribayashi, Yuichiroh Matsubayashi, Yohei Oseki. "An Information-Theoretic Analysis of Argument Ellipsis in Japanese." The 28th Annual Meeting of the Association for Natural Language Processing (NLP2022), pp.442-447, 2022/03.
  • Yoichi Aoki, Keito Kudo, Ana Brassard, Tatsuki Kuribayashi, Masashi Yoshikawa, Kentaro Inui. "Examining Adaptive Model Behavior on Multi-Step Numerical Reasoning Tasks." The 28th Annual Meeting of the Association for Natural Language Processing (NLP2022), pp.168-172, 2022/03.
  • Yukiko Ishizuki, Tatsuki Kuribayashi, Yuichiroh Matsubayashi, Yohei Oseki. "An Information-Theoretic Analysis of Argument Ellipsis in Japanese." The 16th YANS Symposium, 2021/08.
  • Goro Kobayashi, Tatsuki Kuribayashi, Sho Yokoi, Kentaro Inui. "Analyzing Transformer Layers through Additive Decomposition of Nonlinear Modules." The 16th YANS Symposium, 2021/08.
  • Tatsuki Kuribayashi, Yohei Oseki, Takumi Ito, Ryo Yoshida, Masayuki Asahara, Kentaro Inui. "Language Models with Accurate Predictions Are Not Necessarily Human-Like." The 27th Annual Meeting of the Association for Natural Language Processing (NLP2021), pp.267-272, 2021/03.
  • Tatsuki Kuribayashi, Yohei Oseki, Takumi Ito, Ryo Yoshida, Masayuki Asahara, Kentaro Inui. "A Unified Information-Theoretic Account of Japanese Readability." The 27th Annual Meeting of the Association for Natural Language Processing (NLP2021), pp.723-728, 2021/03.
  • Takumi Ito, Tatsuki Kuribayashi, Masatoshi Hidaka, Jun Suzuki, Kentaro Inui. "Langsmith: Paper Writing through Human-System Collaboration." The 27th Annual Meeting of the Association for Natural Language Processing (NLP2021), pp.1834-1839, 2021/03.
  • Riki Fujihara, Tatsuki Kuribayashi, Kentaro Inui. "Sentence Topics as Perceived by Humans and Language Models." The 27th Annual Meeting of the Association for Natural Language Processing (NLP2021), pp.1307-1312, 2021/03.
  • Goro Kobayashi, Tatsuki Kuribayashi, Sho Yokoi, Kentaro Inui. "The Context-Mixing and Context-Preserving Effects of Transformers." The 27th Annual Meeting of the Association for Natural Language Processing (NLP2021), pp.1224-1229, 2021/03.
  • Hiroki Ouchi, Jun Suzuki, Sosuke Kobayashi, Sho Yokoi, Tatsuki Kuribayashi, Masashi Yoshikawa, Kentaro Inui. "Learning Dependency Representations for Instance-Based Dependency Parsing." The 27th Annual Meeting of the Association for Natural Language Processing (NLP2021), pp.497-502, 2021/03.
  • Takaki Otake, Sho Yokoi, Naoya Inoue, Ryo Takahashi, Tatsuki Kuribayashi, Kentaro Inui. "Estimating Event Salience in Narratives and Its Application to Narrative Similarity." The 27th Annual Meeting of the Association for Natural Language Processing (NLP2021), pp.1324-1329, 2021/03.
  • Riki Fujihara, Tatsuki Kuribayashi, Kentaro Inui. "Sentence-Form Preferences of Japanese Language Models and the Effect of Context: Topicalization and Marked Word Order." The 15th YANS Symposium, 2020/09.
  • Goro Kobayashi, Tatsuki Kuribayashi, Sho Yokoi, Kentaro Inui. "A Comprehensive Norm-Based Analysis of Attention Mechanisms and Residual Connections." The 15th YANS Symposium, 2020/09.
  • Tatsuki Kuribayashi, Takumi Ito, Jun Suzuki, Kentaro Inui. "On the Validity of Using Language Models for Japanese Word Order Analysis." The 26th Annual Meeting of the Association for Natural Language Processing (NLP2020), pp.493-496, 2020/03.
    [slides]
  • Goro Kobayashi, Tatsuki Kuribayashi, Sho Yokoi, Jun Suzuki, Kentaro Inui. "Norm-Based Analysis of Self-Attention Mechanisms." The 26th Annual Meeting of the Association for Natural Language Processing (NLP2020), pp.965-968, 2020/03.
  • Hiroki Ouchi, Jun Suzuki, Sosuke Kobayashi, Sho Yokoi, Tatsuki Kuribayashi, Kentaro Inui. "Instance-Based Structured Prediction Based on Span Similarity." The 26th Annual Meeting of the Association for Natural Language Processing (NLP2020), pp.331-334, 2020/03.
  • Takaki Otake, Sho Yokoi, Naoya Inoue, Ryo Takahashi, Tatsuki Kuribayashi, Kentaro Inui. "Estimating Event Salience in Narratives with Language Models." The 26th Annual Meeting of the Association for Natural Language Processing (NLP2020), pp.1089-1092, 2020/03.
  • Takumi Ito, Tatsuki Kuribayashi, Masato Hagiwara, Jun Suzuki, Kentaro Inui. "An Integrated Writing Assistance Environment for English Academic Writing." The 14th YANS Symposium, 2019/08.
  • Goro Kobayashi, Tatsuki Kuribayashi, Sho Yokoi, Jun Suzuki, Kentaro Inui. "Part-of-Speech Information Captured by Context-Aware Language Models and Its Trajectories." The 14th YANS Symposium, 2019/08.
  • Tatsuki Kuribayashi, Hiroki Ouchi, Naoya Inoue, Paul Reisert, Toshinori Miyoshi, Jun Suzuki, Kentaro Inui. "Argumentation Structure Parsing with Span Representations over Multiple Linguistic Units." The 25th Annual Meeting of the Association for Natural Language Processing (NLP2019), pp.990-993, 2019/03.
  • *Tatsuki Kuribayashi, *Takumi Ito, Kaori Uchiyama, Jun Suzuki, Kentaro Inui. (* equal contribution) "Word Order Evaluation and Canonical Word Order Analysis of Japanese Using Language Models." The 25th Annual Meeting of the Association for Natural Language Processing (NLP2019), pp.1053-1056, 2019/03.
  • *Takumi Ito, *Tatsuki Kuribayashi, Hayato Kobayashi, Jun Suzuki, Kentaro Inui. (* equal contribution) "Information-Complementing Generation for Writing Assistance." The 25th Annual Meeting of the Association for Natural Language Processing (NLP2019), pp.970-973, 2019/03.
  • Tatsuki Kuribayashi, Paul Reisert, Naoya Inoue, Kentaro Inui. "Towards Exploiting Argumentative Context for Argumentative Relation Identification." The 24th Annual Meeting of the Association for Natural Language Processing (NLP2018), pp.284-287, 2018/03.
  • Tatsuki Kuribayashi, Paul Reisert, Naoya Inoue, Kentaro Inui. "Examining Macro-level Argumentative Structure Features for Argumentative Relation Identification." The 4th Natural Language Processing Symposium / the 234th IPSJ SIG-NL Meeting, 6 pages, 2017/12.

Teaching Assistant

  • Oct. 2018-Mar. 2019: Advanced Creative Engineering Training (Step-Qi school)

Skills

  • Python (6 years)
  • R (a little)
  • TypeScript, React (a little...)
  • Tools in NLP/ML research: Git, Docker, GCP, PyTorch, Weights & Biases, TeX...
  • Designing API specifications for NLP models (as a subcontractor)
  • Developing evaluation systems for NLP competitions (as a subcontractor)

Hobbies

  • Playing the saxophone (around 10 years)
  • Skiing