Abstract. Vision-Language Models (VLMs) learn a shared feature space for text and images, enabling the comparison of inputs of different modalities. While prior works demonstrated that VLMs organize ...
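A minimal sketch of the shared-space comparison described in the abstract, assuming a CLIP-style VLM loaded via Hugging Face `transformers` (the model checkpoint and input file are illustrative, not taken from the paper):

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("example.jpg")  # hypothetical input image
texts = ["a photo of a dog", "a photo of a cat"]

inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    # Both modalities are embedded into the same feature space...
    image_emb = model.get_image_features(pixel_values=inputs["pixel_values"])
    text_emb = model.get_text_features(input_ids=inputs["input_ids"],
                                       attention_mask=inputs["attention_mask"])

# ...so comparing inputs of different modalities reduces to cosine similarity.
image_emb = image_emb / image_emb.norm(dim=-1, keepdim=True)
text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)
similarity = image_emb @ text_emb.T
print(similarity)  # higher score = closer match in the shared space
```

Because text and images land in one embedding space, a single dot product serves as the cross-modal comparison the abstract refers to.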