Computer Science PhD Student

Bar Ilan University

Hello! I am a third-year PhD student in the Natural Language Processing Lab at Bar-Ilan University, supervised by Prof. Yoav Goldberg, and an intern at Google Research.

I’m broadly interested in interpretability and learned representations, with a focus on multimodality. Currently, I’m interested in understanding the diffusion process and the effect of the training data on the representations and mechanisms models learn, and how interpreting them can increase the control we have over the final outputs.

Previously, I obtained my Master’s in Computer Science, jointly supervised by Yoav Goldberg and Reut Tsarfaty. At the time, I was fascinated by underspecified language and sought to understand: Why do models often misinterpret ambiguous language when we don’t? How can we make implicit information explicit? My thesis addressed these questions in the context of verbal omissions in coordination structures. During that time, I stumbled upon an intriguing behavior in DALL-E (which holds for all text-to-image models): it does not follow the principle that each word has a single role in a sentence. We detailed this behavior in a short paper, which was featured in The Guardian!

My CV is available here.

Education

  • PhD in Computer Science (in progress)

    Bar Ilan University

  • MSc in Computer Science

    Bar Ilan University

Recent Publications

GRADE: Quantifying Sample Diversity in Text-to-Image Models

We introduce GRADE, a method for assessing the output diversity of images generated by text-to-image models. Using LLMs and Visual-QA systems, GRADE quantifies diversity across concept-specific attributes by estimating attribute distributions and calculating normalized entropy.
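The core quantity GRADE relies on, normalized entropy over an attribute's value distribution, can be illustrated in a few lines. This is not the paper's implementation, just a minimal sketch: it assumes attribute values (e.g., the color of a generated cup, as answered by a VQA system) have already been collected per image.

```python
from collections import Counter
import math

def normalized_entropy(attribute_values):
    """Shannon entropy of the observed attribute values, normalized to
    [0, 1] by the maximum entropy (log of the number of distinct values).
    0 means every image shares one value; 1 means a uniform spread."""
    counts = Counter(attribute_values)
    total = sum(counts.values())
    probs = [c / total for c in counts.values()]
    entropy = -sum(p * math.log2(p) for p in probs)
    max_entropy = math.log2(len(counts)) if len(counts) > 1 else 1.0
    return entropy / max_entropy

# Hypothetical "color" attribute of 8 generated images of a cup:
# heavy concentration on "red" yields entropy well below 1.
colors = ["red", "red", "red", "red", "red", "red", "blue", "green"]
diversity = normalized_entropy(colors)
```

A diversity score near 0 would indicate the model collapses the concept to a single attribute value; a score near 1 indicates it samples attribute values evenly.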

Evaluating D-MERIT of Partial-annotation on Information Retrieval

This work curates D-MERIT, a passage-retrieval evaluation set from Wikipedia that aspires to contain all relevant passages for each query. We propose it both as an evaluation resource and as a recommendation for balancing resource efficiency against reliable evaluation when annotating evaluation sets for text retrieval.

Linguistic Binding in Diffusion Models: Enhancing Attribute Correspondence through Attention Map Alignment

We propose SynGen, an inference-time method that first syntactically analyzes the prompt to identify entities and their modifiers, and then uses a novel loss function encouraging the cross-attention maps to agree with the linguistic binding reflected by the syntax.

Conjunct Resolution in The Face of Verbal Omissions

This work establishes a pragmatic framework for understanding verbal omissions in VP-coordination structures, devises a scalable data-collection method, and curates a large dataset of over 10,000 natural examples with crowd-sourced resolutions. We show that current neural baselines achieve only moderate success in resolving these omissions, leaving ample room for improvement.

DALLE-2 is Seeing Double: Flaws in Word-to-Concept Mapping in Text2Image Models

We point out two surprising flaws in the way text2image models map words to visual concepts: semantic leakage between different words in the prompt, and cases where a word with multiple meanings is depicted with all of its meanings at once.