Abstract:
In recent years, object detection has achieved strong performance given large quantities of labeled data, but when a domain discrepancy exists between the test data and the labeled training data, the performance of a trained detector often degrades. Compared with natural images, multi-source remote sensing images exhibit distinctive discrepancies in imaging method and resolution. Traditional approaches require re-labeling the multi-source images, which costs considerable manpower and time, so adaptive object detection for remote sensing images faces unique challenges. To address these problems, this paper proposes an adaptive object detection algorithm for multi-source remote sensing images that conducts adversarial training at both the image level and the semantic level. In addition, by incorporating a super-resolution network, we further alleviate the image-level discrepancy and realize adaptive object detection. Experiments on two multi-source remote sensing image datasets show that the proposed method effectively improves detection performance on the target domain.