2025

Don't Overthink it. Preferring Shorter Thinking Chains for Improved LLM Reasoning
Michael Hassid, Gabriel Synnaeve, Yossi Adi, Roy Schwartz
Preprint
arXiv

Scaling Analysis of Interleaved Speech-Text Language Models
Gallil Maimon, Michael Hassid, Amit Roth, Yossi Adi
Preprint
CODE Project Page arXiv

More Documents, Same Length: Isolating the Challenge of Multiple Documents in RAG
Shahar Levy, Nir Mazor, Lihi Shalmon, Michael Hassid, Gabriel Stanovsky
Preprint
CODE arXiv

On Pruning State-Space LLMs
Tamer Ghattas, Michael Hassid, Roy Schwartz
Preprint
CODE arXiv

2024

The Larger the Better? Improved LLM Code-Generation via Budget Reallocation
Michael Hassid*, Tal Remez*, Jonas Gehring, Roy Schwartz, Yossi Adi
COLM 2024
Project Page arXiv

Transformers are Multi-State RNNs
Matanel Oren*, Michael Hassid*, Nir Yarden, Yossi Adi, Roy Schwartz
EMNLP 2024
CODE arXiv

2023

EXPRESSO: A Benchmark and Analysis of Discrete Expressive Speech Resynthesis
Tu Anh Nguyen, ..., Michael Hassid, Felix Kreuk, Yossi Adi, Emmanuel Dupoux
Interspeech 2023
Project Page arXiv

Finding the SWEET Spot: Analysis and Improvement of Adaptive Inference in Low Resource Settings
Daniel Rotem, Michael Hassid, Jonathan Mamou, Roy Schwartz
ACL 2023
CODE arXiv

Textually Pretrained Speech Language Models
Michael Hassid, Tal Remez, Tu Anh Nguyen, Itai Gat, Alexis Conneau, Felix Kreuk, Jade Copet, Alexandre Defossez, Gabriel Synnaeve, Emmanuel Dupoux, Roy Schwartz, Yossi Adi
NeurIPS 2023
CODE Project Page arXiv

2022

Efficient Methods for Natural Language Processing: A Survey
Marcos Treviso, Ji-Ung Lee, Tianchu Ji, ..., Michael Hassid, ..., Roy Schwartz
TACL 2023
arXiv

How Much Does Attention Actually Attend? Questioning the Importance of Attention in Pretrained Transformers
Michael Hassid, Hao Peng, Daniel Rotem, Jungo Kasai, Ivan Montero, Noah A Smith, Roy Schwartz
Findings of EMNLP 2022
CODE arXiv

2021

More than Words: In-the-Wild Visually-Driven Prosody for Text-to-Speech
Michael Hassid, Michelle Tadmor Ramanovich, Brendan Shillingford, Miaosen Wang, Ye Jia, Tal Remez
CVPR 2022
Project Page arXiv