Point2RBox-v3

1Shanghai Jiao Tong University 2South China University of Technology 3Nankai University
4The Chinese University of Hong Kong 5Nanjing University of Aeronautics and Astronautics
6East China Normal University 7Ohio State University

*Indicates Equal Contribution

Abstract

Driven by the growing need for Oriented Object Detection (OOD), learning from point annotations under a weakly-supervised framework has emerged as a promising alternative to costly and laborious manual labeling. In this paper, we discuss two deficiencies of existing point-supervised methods: inefficient utilization and poor quality of pseudo labels. We therefore present Point2RBox-v3, built on two principles: 1) Progressive Label Assignment (PLA), which dynamically estimates instance sizes in a coarse yet adaptive manner at different stages of training, enabling the use of label assignment methods. 2) Prior-Guided Dynamic Mask Loss (PGDM-Loss), an enhancement of the Voronoi Watershed Loss from Point2RBox-v2 that overcomes both the watershed algorithm's weakness in sparse scenes and SAM's weakness in dense scenes. To our knowledge, Point2RBox-v3 is the first model to employ dynamic pseudo labels for label assignment, and it creatively complements the SAM model with the watershed algorithm, achieving strong performance in both sparse and dense scenes. Our solution delivers competitive performance, especially in scenarios with large variations in object size or sparse object occurrences: 66.09%/56.86%/41.28%/46.40%/19.60%/45.96% on DOTA-v1.0/DOTA-v1.5/DOTA-v2.0/DIOR/STAR/RSAR.

Figure: Visual comparisons with the state-of-the-art method Point2RBox-v2, and a radar plot comparing the performance of our method with 10 other state-of-the-art methods across 6 benchmark datasets.

@article{zhang2025point2rbox,
  title={Point2RBox-v3: Self-Bootstrapping from Point Annotations via Integrated Pseudo-Label Refinement and Utilization},
  author={Zhang, Teng and Fan, Ziqian and Liu, Mingxin and Zhang, Xin and Lu, Xudong and Li, Wentong and Zhou, Yue and Yu, Yi and Li, Xiang and Yan, Junchi and others},
  journal={arXiv preprint arXiv:2509.26281},
  year={2025}
}