Hello! I am a third-year PhD student in the Natural Language Processing Lab at Bar-Ilan University, supervised by Prof. Yoav Goldberg, and an intern at Google Research.
I’m broadly interested in interpretability and learned representations, with a focus on multimodality. Currently, I’m interested in understanding the diffusion process and the effect of training data on the representations and mechanisms models learn, and in how interpreting them can increase the control we have over the final outputs.
Previously, I obtained my Master’s in Computer Science, jointly supervised by Yoav Goldberg and Reut Tsarfaty. At the time, I was fascinated by underspecified language and sought to understand: Why do models often misinterpret ambiguous language when we don’t? How can we make implicit information explicit? My thesis aimed to answer these questions in the context of verbal omissions in coordination structures. During that time, I stumbled upon an intriguing behavior in DALL-E (which holds for text-to-image models more broadly): it doesn’t follow the principle that each word has a single role in a sentence. We detailed this behavior in a short paper, which was featured in The Guardian!
My CV is available here.
PhD in Computer Science (in progress)
Bar-Ilan University
MSc in Computer Science
Bar-Ilan University