ryokamoi committed (verified)
Commit dbcdeb0 · 1 Parent(s): 39c6c69

Updated README.md to announce v1.1

Files changed (1)
  1. README.md +27 -36
README.md CHANGED
@@ -111,19 +111,23 @@ configs:
  - split: charts__intersection
  path: data/charts__intersection-*
  ---
+ <p align="center" style="color:violet;">A newer version of this dataset is available.<br>
+ <a href="https://huggingface.co/datasets/ryokamoi/VisOnlyQA_Eval_Real_v1.1">https://huggingface.co/datasets/ryokamoi/VisOnlyQA_Eval_Real_v1.1</a></p>
+
  # VisOnlyQA

- This repository contains the code and data for the paper "[VisOnlyQA: Large Vision Language Models Still Struggle with Visual Perception of Geometric Information](https://arxiv.org/abs/2412.00947)".
+ <p align="center">
+ 🌐 <a href="https://visonlyqa.github.io/">Project Website</a> | 📄 <a href="https://arxiv.org/abs/2412.00947">Paper</a> | 🤗 <a href="https://huggingface.co/collections/ryokamoi/visonlyqa-674e86c7ec384b629bb97bc3">Dataset</a> | 🔥 <a href="https://github.com/open-compass/VLMEvalKit">VLMEvalKit</a>
+ </p>
+
+ This repository contains the code and data for the paper "VisOnlyQA: Large Vision Language Models Still Struggle with Visual Perception of Geometric Information".

  VisOnlyQA is designed to evaluate the visual perception capability of large vision language models (LVLMs) on geometric information of scientific figures. The evaluation set includes 1,200 multiple-choice questions in 12 visual perception tasks on 4 categories of scientific figures. We also provide a training dataset consisting of 70k instances.

  * Datasets:
- * VisOnlyQA is available at [VLMEvalKit](https://github.com/open-compass/VLMEvalKit) 🔥🔥🔥
- * VisOnlyQA in VLMEvalKit is different from the original one. Refer to [this section](#vlmevalkit) for details.
- * Hugging Face
- * Eval-Real: [https://huggingface.co/datasets/ryokamoi/VisOnlyQA_Eval_Real](https://huggingface.co/datasets/ryokamoi/VisOnlyQA_Eval_Real)
- * Eval-Synthetic: [https://huggingface.co/datasets/ryokamoi/VisOnlyQA_Eval_Synthetic](https://huggingface.co/datasets/ryokamoi/VisOnlyQA_Eval_Synthetic)
- * Train: [https://huggingface.co/datasets/ryokamoi/VisOnlyQA_Train](https://huggingface.co/datasets/ryokamoi/VisOnlyQA_Train)
+ * Eval-Real: [https://huggingface.co/datasets/ryokamoi/VisOnlyQA_Eval_Real_v1.1](https://huggingface.co/datasets/ryokamoi/VisOnlyQA_Eval_Real_v1.1)
+ * Eval-Synthetic: [https://huggingface.co/datasets/ryokamoi/VisOnlyQA_Eval_Synthetic](https://huggingface.co/datasets/ryokamoi/VisOnlyQA_Eval_Synthetic)
+ * Train: [https://huggingface.co/datasets/ryokamoi/VisOnlyQA_Train](https://huggingface.co/datasets/ryokamoi/VisOnlyQA_Train)
  * Code: [https://github.com/psunlpgroup/VisOnlyQA](https://github.com/psunlpgroup/VisOnlyQA)

  <p align="center">
@@ -135,41 +139,20 @@ VisOnlyQA is designed to evaluate the visual perception capability of large visi
  title={VisOnlyQA: Large Vision Language Models Still Struggle with Visual Perception of Geometric Information},
  author={Ryo Kamoi and Yusen Zhang and Sarkar Snigdha Sarathi Das and Ranran Haoran Zhang and Rui Zhang},
  year={2024},
- journal={arXiv preprint arXiv:2412.00947}
  }
  ```

- ## Dataset
-
- VisOnlyQA is provided in two formats: VLMEvalKit and Hugging Face Dataset. You can use either of them to evaluate your models and report the results in your papers. However, when you report the results, please explicitly mention which version of the dataset you used because the two versions are different.
-
- ### Examples
-
- <p align="center">
-   <img src="readme_figures/examples.png" width="800">
- </p>
-
- ### VLMEvalKit
-
- [VLMEvalKit](https://github.com/open-compass/VLMEvalKit) provides one-command evaluation. However, VLMEvalKit is not designed to reproduce the results in the paper. We welcome using it to report the results on VisOnlyQA in your papers, but please explicitly mention that you used VLMEvalKit.
-
- The major differences are:
-
- * VisOnlyQA on VLMEvalKit does not include the `chemistry__shape_multi` split
- * VLMEvalKit uses different prompts and postprocessing.
-
- Refer to [this document](https://github.com/open-compass/VLMEvalKit/blob/main/docs/en/Quickstart.md) for the installation and setup of VLMEvalKit. After setting up the environment, you can evaluate any supported models on VisOnlyQA with the following command (this example is for InternVL2-26B).
-
- ```bash
- python run.py --data VisOnlyQA-VLMEvalKit --model InternVL2-26B
- ```
-
- ### Hugging Face Dataset
-
- The original VisOnlyQA dataset is provided in Hugging Face Dataset. If you want to reproduce the results in our paper, please use this version and code in the GitHub repository.
-
- * Eval-Real: [https://huggingface.co/datasets/ryokamoi/VisOnlyQA_Eval_Real](https://huggingface.co/datasets/ryokamoi/VisOnlyQA_Eval_Real)
-   * 500 instances for questions on figures in existing datasets (e.g., MathVista, MMMU, and CharXiv)
+ ## Update
+
+ * v1.1
+   * Increased the number of instances in the Real split.
+
+ ## Dataset
+
+ The dataset is provided as a Hugging Face Dataset.
+
+ * Eval-Real: [https://huggingface.co/datasets/ryokamoi/VisOnlyQA_Eval_Real_v1.1](https://huggingface.co/datasets/ryokamoi/VisOnlyQA_Eval_Real_v1.1)
+   * 900 instances for questions on figures in existing datasets (e.g., MathVista, MMMU, and CharXiv)
  * Eval-Synthetic: [https://huggingface.co/datasets/ryokamoi/VisOnlyQA_Eval_Synthetic](https://huggingface.co/datasets/ryokamoi/VisOnlyQA_Eval_Synthetic)
    * 700 instances for questions on synthetic figures
  * Train: [https://huggingface.co/datasets/ryokamoi/VisOnlyQA_Train](https://huggingface.co/datasets/ryokamoi/VisOnlyQA_Train)
@@ -177,10 +160,18 @@ The original VisOnlyQA dataset is provided in Hugging Face Dataset. If you want

  The [dataset](https://github.com/psunlpgroup/VisOnlyQA/tree/main/dataset) folder of the GitHub repository includes the same datasets, except for the training data.

+ ### Examples
+
+ <p align="center">
+   <img src="readme_figures/examples.png" width="800">
+ </p>
+
+ ### Usage
+
  ```python
  from datasets import load_dataset

- real_eval = load_dataset("ryokamoi/VisOnlyQA_Eval_Real")
+ real_eval = load_dataset("ryokamoi/VisOnlyQA_Eval_Real_v1.1")
  real_synthetic = load_dataset("ryokamoi/VisOnlyQA_Eval_Synthetic")

  # Splits
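
As a supplement to the Usage snippet in the diff above, here is a minimal sketch of loading the three Hugging Face repositories named in this commit and printing their split sizes. It assumes only that the `datasets` library is installed; the repository IDs are taken from the README, and no particular split names are assumed.

```python
from datasets import load_dataset

# Repository IDs listed in the updated README.
repos = [
    "ryokamoi/VisOnlyQA_Eval_Real_v1.1",
    "ryokamoi/VisOnlyQA_Eval_Synthetic",
    "ryokamoi/VisOnlyQA_Train",
]

for repo in repos:
    # With no split argument, load_dataset returns a DatasetDict keyed by split name.
    dataset_dict = load_dataset(repo)
    for split_name, split in dataset_dict.items():
        print(f"{repo} / {split_name}: {len(split)} instances")
```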