(a) Vision-Language-Action Models (VLAs) often suffer from counterfactual failures due to vision shortcuts, defaulting to well-learned scene-specific behaviors instead of faithfully following instructions. (b) We study this issue and introduce LIBERO-CF, the first counterfactual benchmark for evaluating language following in VLAs. (c) We propose Counterfactual Action Guidance (CAG), a dual-branch inference scheme that mitigates counterfactual failures in VLAs. (d) Extensive experiments in both simulation and the real world demonstrate the effectiveness of CAG across diverse VLAs.
Vision-Language-Action models (VLAs) promise to ground language instructions in robot control, yet in practice often fail to faithfully follow language. When presented with instructions that lack strong scene-specific supervision, VLAs suffer from counterfactual failures: they act based on vision shortcuts induced by dataset biases, repeatedly executing well-learned behaviors and selecting objects frequently seen during training regardless of language intent.
To study this failure mode systematically, we introduce LIBERO-CF, the first counterfactual benchmark for VLAs, which evaluates language-following capability by assigning alternative instructions under visually plausible LIBERO layouts. Our evaluation reveals that counterfactual failures are prevalent yet underexplored across state-of-the-art VLAs.
We propose Counterfactual Action Guidance (CAG), a simple yet effective dual-branch inference scheme that explicitly regularizes language conditioning in VLAs. CAG combines a standard VLA policy with a language-unconditioned Vision-Action (VA) module, enabling counterfactual comparison during action selection. This design reduces reliance on visual shortcuts, improves robustness on under-observed tasks, and requires neither additional demonstrations nor modifications to existing architectures or pretrained models.
Extensive experiments demonstrate its plug-and-play integration across diverse VLAs and consistent improvements. For example, on LIBERO-CF, CAG improves π0.5 by 9.7% in language-following accuracy and 3.6% in task success on under-observed tasks using a training-free strategy, with further gains of 15.5% and 8.5%, respectively, when paired with a VA model. In real-world evaluations, CAG reduces counterfactual failures by 9.4% and improves task success by 17.2% on average.
In the modality ablation, all VLAs maintain high performance when only vision is provided, whereas performance collapses to near zero when only language is given, indicating a strong reliance on visual cues.
(a) We visualize the distribution of grasp positions from 50 trials as heatmaps under different instructions. Even when given counterfactual or empty instructions, VLAs tend to execute the well-learned training task in the scene. (b) Removing the training-task object from the scene improves the success rates of VLAs on counterfactual instructions.
We propose Counterfactual Action Guidance (CAG), a simple yet effective dual-branch inference scheme that explicitly regularizes language conditioning in VLAs. CAG combines a standard VLA policy with a language-unconditioned Vision-Action module, reducing reliance on visual shortcuts. This improves robustness on under-observed tasks and requires neither additional demonstrations nor changes to existing architectures or pretrained models.
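To make the dual-branch idea concrete, the sketch below shows one way such inference could be wired up. The names (vla_policy, va_policy, guidance_scale) are illustrative, and the guidance-style extrapolation away from the language-unconditioned branch is an assumption about how the two branches might be compared; the paper's exact action-selection rule may differ.

def cag_action(vla_policy, va_policy, obs, instruction, guidance_scale=1.5):
    # Counterfactual Action Guidance (illustrative sketch, not the official implementation).
    # Language-conditioned branch: the standard VLA policy following the instruction.
    a_cond = vla_policy(obs, instruction)
    # Language-unconditioned branch: a Vision-Action prior that ignores the instruction.
    a_uncond = va_policy(obs)
    # Assumed combination rule: push the action away from what the policy
    # would do regardless of language, by an amount set by guidance_scale.
    return a_uncond + guidance_scale * (a_cond - a_uncond)

def training_free_va(vla_policy):
    # Training-free variant (assumption): reuse the VLA itself with an empty
    # instruction as the language-unconditioned branch, so no VA model is needed.
    return lambda obs: vla_policy(obs, "")

Here both branches are assumed to return action chunks as arrays of the same shape, so the arithmetic combination is well defined; when a separately trained VA prior is available, it simply replaces the training-free branch.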
Both our training-free and vision-action prior strategies improve performance across state-of-the-art VLAs.
We study multiple aspects of language grounding in real-world evaluations, including object recognition, spatial reasoning, goal execution, out-of-distribution generalization, and long-horizon reasoning. CAG consistently reduces counterfactual failures and improves task success across all scenes.
Real-world comparisons (π0.5 vs. Ours) on the instructions: "Pick up the fanta", "Pick up the mustard", "Pick up the cup on the right", "Pick up the corn can in the bowl", "Put the cup on the plate", "Put the cup in the basket", "Pick up the Rubik's Cube", "Pick up the basketball", "Move the cup to the right and pour the fanta into the cup", and "Put the banana on the tray and then put the apple on the tray".
@article{fang2026when,
title={When Vision Overrides Language: Evaluating and Mitigating Counterfactual Failures in VLAs},
author={Fang, Yu and Feng, Yuchun and Jing, Dong and Liu, Jiaqi and Yang, Yue and Wei, Zhenyu and Szafir, Daniel and Ding, Mingyu},
journal={arXiv preprint arXiv:2602.17659},
year={2026}
}