METHODOLOGICAL APPROACHES TO TREE COVER DETECTION AND EVALUATION IN ARID LANDSCAPES

Keywords

desert ecosystems,
UAV imagery,
tree cover segmentation,
Random Forest,
U-Net,
semantic segmentation,
remote sensing,
ecological monitoring

How to Cite

Kuandikova, G., Kuchkorov, T., & Mamataliyev, A. (2026). METHODOLOGICAL APPROACHES TO TREE COVER DETECTION AND EVALUATION IN ARID LANDSCAPES. Journal of Research and Development, 3(2), 52–62. Retrieved from https://imfaktor.com/tjrd/article/view/2081

Abstract

This study addresses the problem of automatic tree cover segmentation in desert environments using high-resolution UAV imagery from the Suhaitu Gacha region (Inner Mongolia, China). Two modeling approaches, a statistical ensemble method (Random Forest, RF) and a deep-learning semantic segmentation model (U-Net), were comparatively evaluated. The experimental framework incorporated pixel-level classification accuracy, contour delineation quality, detection of small vegetation structures, and overall segmentation stability. Quantitative assessment was conducted using standard metrics, including mean Intersection over Union (mIoU) and the Kappa coefficient. The results demonstrate that the U-Net model consistently outperforms RF, particularly in complex desert landscapes characterized by low spectral contrast between vegetation and background. U-Net provides superior delineation of fine structures and improved segmentation coherence. However, RF exhibits advantages in computational efficiency, training speed, and robustness, confirming its suitability as a lightweight baseline model. These findings highlight the trade-offs between accuracy and efficiency and support the application of advanced computer vision models for ecological monitoring and desert vegetation analysis.
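The two evaluation metrics named in the abstract can both be derived from a per-pixel confusion matrix: mIoU averages TP / (TP + FP + FN) over classes, and Cohen's Kappa compares observed agreement against chance agreement. The sketch below (NumPy, assumed; function names are illustrative, not the authors' implementation) shows one standard way to compute them:

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    """Build an n_classes x n_classes confusion matrix from flat label arrays
    (rows = true class, columns = predicted class)."""
    cm = np.zeros((n_classes, n_classes), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

def mean_iou(cm):
    """mIoU: per-class TP / (TP + FP + FN), averaged over classes."""
    tp = np.diag(cm)
    fp = cm.sum(axis=0) - tp   # predicted as class c but actually another class
    fn = cm.sum(axis=1) - tp   # actually class c but predicted as another class
    return (tp / (tp + fp + fn)).mean()

def kappa(cm):
    """Cohen's Kappa: (p_o - p_e) / (1 - p_e), where p_o is observed accuracy
    and p_e is the agreement expected by chance from the marginals."""
    n = cm.sum()
    p_o = np.diag(cm).sum() / n
    p_e = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / (n * n)
    return (p_o - p_e) / (1 - p_e)
```

For a binary tree/background segmentation the arrays would be the flattened ground-truth and predicted masks with `n_classes=2`; the same functions extend unchanged to more land-cover classes.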


References

Kuandikova, G., & Mamataliev, A. (2025). Desert Tree Cover Evaluation [Software]. GitHub repository.

Hua, S., Yang, B., Zhang, X., Qi, J., Su, F., Sun, J., & Ruan, Y. (2025). GDPGO-SAM: An unsupervised fine segmentation of desert vegetation driven by Grounding DINO prompt generation and optimization Segment Anything Model. Remote Sensing, 17(4), 691. https://doi.org/10.3390/rs17040691

Breiman, L. (2001). Random forests. Machine Learning, 45(1), 5–32. https://doi.org/10.1023/A:1010933404324

Ronneberger, O., Fischer, P., & Brox, T. (2015). U-Net: Convolutional networks for biomedical image segmentation. In MICCAI 2015. https://doi.org/10.48550/arXiv.1505.04597

Tucker, C. J. (1979). Red and photographic infrared linear combinations for monitoring vegetation. Remote Sensing of Environment, 8(2), 127–150. https://doi.org/10.1016/0034-4257(79)90013-0

Zhang, C., & Kovács, J. M. (2012). The application of small UAVs for precision agriculture: A review. Precision Agriculture, 13, 693–712.

Ma, L., et al. (2019). Deep learning in remote sensing applications: A meta-analysis and review. ISPRS Journal of Photogrammetry and Remote Sensing, 152, 166–177. https://doi.org/10.1016/j.isprsjprs.2019.04.015

Ham, J., Chen, Y., Crawford, M. M., & Ghosh, J. (2005). Investigation of the random forest framework for classification of hyperspectral data. IEEE Transactions on Geoscience and Remote Sensing, 43(3), 492–501. https://doi.org/10.1109/TGRS.2004.842481

Long, J., Shelhamer, E., & Darrell, T. (2015). Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 3431–3440).

Chen, L.-C., Papandreou, G., Kokkinos, I., Murphy, K., & Yuille, A. L. (2017). DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(4), 834–848. https://doi.org/10.1109/TPAMI.2017.2699184

Dosovitskiy, A., et al. (2021). An image is worth 16x16 words: Transformers for image recognition at scale. In ICLR. https://doi.org/10.48550/arXiv.2010.11929

Phan, T. N., Kuch, V., & Lehnert, L. (2020). Land cover classification using Google Earth Engine and Random Forest classifier: A case study. Remote Sensing, 12(15), 2411. https://doi.org/10.3390/rs12152411

Torres-Sánchez, J., Peña, J. M., de Castro, A. I., & López-Granados, F. (2014). Multi-temporal mapping of vegetation using UAV imagery and object-based image analysis. Precision Agriculture, 15(6), 1–17.

Kamilaris, A., & Prenafeta-Boldú, F. X. (2018). Deep learning in agriculture: A survey. Computers and Electronics in Agriculture, 147, 70–90. https://doi.org/10.1016/j.compag.2018.02.016

Gorelick, N., et al. (2017). Google Earth Engine: Planetary-scale geospatial analysis for everyone. Remote Sensing of Environment, 202, 18–27. https://doi.org/10.1016/j.rse.2017.06.031


This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.