Vision language models (VLMs) are expected to perform effective multimodal reasoning and make logically coherent decisions, which is critical for tasks such as diagram understanding and spatial problem solving. However, current VLM reasoning lacks large-scale, well-structured training datasets. To bridge this gap, we propose VisualSphinx, a first-of-its-kind large-scale synthetic visual logical reasoning training dataset. To tackle the challenge of synthesizing images with grounded answers, we propose a rule-to-image synthesis pipeline, which extracts and expands puzzle rules from seed questions and generates the code for grounded image synthesis to assemble puzzle samples. Experiments demonstrate that VLMs trained with GRPO on VisualSphinx benefit from the logical coherence and readability of our dataset and exhibit improved performance on logical reasoning tasks. The enhanced reasoning capabilities developed from VisualSphinx also transfer to other reasoning tasks such as algebraic reasoning, arithmetic reasoning, and geometry reasoning.
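Below is a minimal sketch of the kind of rule-based reward that GRPO training on multiple-choice puzzles like these typically relies on; the function names and answer-extraction heuristic are illustrative assumptions, not the paper's released code.

import re

def extract_choice(completion: str) -> str | None:
    # Pull the last standalone letter A-D from a model completion (hypothetical heuristic).
    match = re.search(r"\b([A-D])\b(?!.*\b[A-D]\b)", completion, re.DOTALL)
    return match.group(1) if match else None

def puzzle_reward(completion: str, answer: str) -> float:
    # Rule-based reward for GRPO: 1.0 if the extracted choice matches the gold label, else 0.0.
    choice = extract_choice(completion)
    return 1.0 if choice == answer.strip().upper() else 0.0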
This figure shows VisualSphinx-V1 instances from each reasoning category. Each visual logic puzzle comprises a text prompt, a graphical question stem with four images and a question mark, and four graphical answer choices.
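As a concrete illustration, one way such an instance could be represented in code is sketched below; the class and field names are assumptions for exposition, not the dataset's actual schema.

from dataclasses import dataclass

@dataclass
class VisualLogicPuzzle:
    prompt: str               # text prompt posing the question
    stem_images: list[str]    # paths to the four question-stem images (the "?" slot is implicit)
    choice_images: list[str]  # paths to the four candidate answer images
    answer: str               # gold choice label, e.g. "B"
    category: str             # reasoning category of the underlying rule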
This figure illustrates the four-stage pipeline for generating VisualSphinx-V1. In Step 1, we collect 4K seed puzzles with explanations and abstract them into structured rule descriptions using LLMs. In Step 2, we apply a rule-level genetic algorithm to cross over, mutate and diversify the seed rules, scaling them to 40K high-quality rules. In Step 3, each rule is paired with a rendering style and used to generate five correct and three incorrect images via LLM-generated Python scripts. The fifth correct image is designated as the answer option, while the three rule-breaking images serve as distractors. After deduplication, we obtain 110K image groups. In Step 4, we assemble puzzles from each group using three complementary strategies: default assembly, shuffled answer variants, and expanded distractor sets. This results in over 660K visual logic puzzles, enabling robust and diverse training for multimodal reasoning models.
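To make Step 4 concrete, the sketch below shows how the default and shuffled-answer assembly strategies could be implemented for one image group, assuming five correct and three incorrect images as described above; names are illustrative, and the expanded-distractor strategy is only noted in a comment.

import random

def assemble_puzzles(correct: list[str], incorrect: list[str], n_shuffles: int = 3):
    # correct:   five images that follow the rule; the first four form the stem,
    #            the fifth is the gold answer option.
    # incorrect: three rule-breaking images used as distractors.
    stem, gold = correct[:4], correct[4]
    puzzles = []

    # Default assembly: gold answer followed by the three distractors.
    choices = [gold] + incorrect
    puzzles.append({"stem": stem, "choices": choices, "answer": choices.index(gold)})

    # Shuffled answer variants: same images, permuted choice order.
    for _ in range(n_shuffles):
        shuffled = random.sample(choices, k=len(choices))
        puzzles.append({"stem": stem, "choices": shuffled, "answer": shuffled.index(gold)})

    # Expanded distractor sets (drawing additional rule-breaking images, e.g. from
    # other image groups) are omitted from this sketch.
    return puzzles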
This figure shows statistics for the readability, logical coherence, and pass rates of VisualSphinx-V1.
This figure illustrates the performance of the model trained on VisualSphinx-V1, evaluated on VisualSphinx-TEST across varying difficulty levels, and compares it with other models.
@misc{feng2025visualsphinx,
      title={VisualSphinx: Large-Scale Synthetic Vision Logic Puzzles for RL},
      author={Yichen Feng and Zhangchen Xu and Fengqing Jiang and Yuetai Li and Bhaskar Ramasubramanian and Luyao Niu and Bill Yuchen Lin and Radha Poovendran},
      year={2025},
      eprint={2505.23977},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2505.23977},
}