Classification of Crops through Self-Supervised Decomposition for Transfer Learning

Authors

  • J. Jayanth Department of Electronics and Communication Engineering, GSSS Institute of Engineering & Technology for Women, Mysore-570016, Karnataka, India
  • H. K. Ravikiran Department of Electronics and Communication Engineering, Navkis College of Engineering, Hassan-573217, Karnataka, India
  • K. M. Madhu Department of Civil Engineering, Rajeev Institute of Technology, Hassan-573201, Karnataka, India

DOI:

https://doi.org/10.25081/jaa.2023.v9.8566

Keywords:

Self-Supervised (2S) Transfer Learning, High-Resolution Image Classification, Spectral Features, TVSM, 3DCAE, GAN Model

Abstract

This paper introduces 2S-DT (Self-Supervised Decomposition for Transfer Learning), a novel model for crop classification from remotely sensed data. It addresses the difficulty of misclassifying crops with similar phenology patterns, a problem that frequently arises in agricultural remote sensing. The model is evaluated on two datasets from Nanajangudu taluk in the Mysore district, which has a highly varied irrigated agriculture system. Using self-supervised learning, 2S-DT tackles the misclassification that commonly occurs when working with unlabelled classes, especially in high-resolution images. It employs a class decomposition (CD) layer together with a downstream learning approach; this layer reorganizes the learned information according to the model's representations and the particulars of each geographical context. The backbone of our architecture is ResNet, a well-known deep learning framework. Each residual block consists of two 3x3 convolutional layers, each followed by batch normalization and a Rectified Linear Unit (ReLU) activation, which improves the model's learning capacity. For Conv1 in ResNet18 we used a 7x7 convolutional layer with 64 filters and a stride of 2, producing an output of 112x112x64. Conv2, comprising Res2a and Res2b, produced an output of 48x48x64, and Conv3, comprising Res3a and Res3b, produced an output of 28x28x128. These architectural choices were made to suit our experimental requirements. The additional components of the 2S-DT model simplify class identification and weight updating, improving the stability of the spatial and spectral features. Extensive experiments on the two datasets demonstrate the model's viability: overall accuracy improves significantly, with 2S-DT surpassing comparable models such as TVSM, 3DCAE, and the GAN model, achieving 95.65% accuracy on dataset 1 and 88.91% on dataset 2.
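To make the described backbone concrete, the following is a minimal PyTorch sketch of a ResNet-18-style stem plus two stages of residual blocks (Res2a/Res2b, Res3a/Res3b), each block built from two 3x3 convolutions with batch normalization and ReLU, as stated in the abstract. The paper does not publish code, so this is an illustrative sketch, not the authors' implementation; exact feature-map sizes (112x112x64, 48x48x64, 28x28x128 above) depend on the input resolution and pooling actually used.

```python
# Illustrative sketch (PyTorch assumed) of the ResNet-style backbone described in the abstract.
import torch
import torch.nn as nn


class ResidualBlock(nn.Module):
    """Two 3x3 conv layers, each followed by BatchNorm and ReLU, with a skip connection."""

    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_ch)
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, stride=1, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_ch)
        self.relu = nn.ReLU(inplace=True)
        # Projection shortcut when spatial size or channel count changes.
        self.shortcut = (
            nn.Sequential(nn.Conv2d(in_ch, out_ch, 1, stride=stride, bias=False),
                          nn.BatchNorm2d(out_ch))
            if stride != 1 or in_ch != out_ch else nn.Identity()
        )

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + self.shortcut(x))


class Backbone(nn.Module):
    """Conv1 stem + two stages (Res2a/b, Res3a/b) feeding a small classifier head."""

    def __init__(self, in_channels=3, num_classes=10):
        super().__init__()
        # Conv1: 7x7, 64 filters, stride 2 (e.g., a 224x224 input gives 112x112x64).
        self.stem = nn.Sequential(
            nn.Conv2d(in_channels, 64, 7, stride=2, padding=3, bias=False),
            nn.BatchNorm2d(64), nn.ReLU(inplace=True),
            nn.MaxPool2d(3, stride=2, padding=1),
        )
        self.conv2 = nn.Sequential(ResidualBlock(64, 64), ResidualBlock(64, 64))                # Res2a, Res2b
        self.conv3 = nn.Sequential(ResidualBlock(64, 128, stride=2), ResidualBlock(128, 128))   # Res3a, Res3b
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, num_classes))

    def forward(self, x):
        return self.head(self.conv3(self.conv2(self.stem(x))))


# Example: a batch of four 224x224 three-band patches -> class logits.
logits = Backbone(num_classes=6)(torch.randn(4, 3, 224, 224))
print(logits.shape)  # torch.Size([4, 6])
```

The abstract does not detail the class decomposition (CD) layer; in the decomposition-based transfer-learning literature, CD is commonly realized by clustering each class's feature vectors into sub-classes and training the downstream classifier on the decomposed labels. The sketch below assumes that general scheme (k-means with k = 2 is an arbitrary choice for illustration) and should not be read as the authors' exact procedure.

```python
# Assumed, generic class-decomposition step: split each original class into k sub-classes
# by clustering backbone features, then train on the decomposed (finer) label space.
import numpy as np
from sklearn.cluster import KMeans


def decompose_labels(features, labels, k=2):
    """Return sub-class labels obtained by clustering each class's feature vectors."""
    new_labels = np.empty(labels.shape, dtype=int)
    for offset, cls in enumerate(np.unique(labels)):
        idx = np.where(labels == cls)[0]
        sub = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(features[idx])
        new_labels[idx] = offset * k + sub  # sub-class id in a global decomposed label space
    return new_labels
```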

References

Arel, I., Rose, D. C., & Karnowski, T. P. (2010). Research frontier: deep machine learning - A new frontier in artificial intelligence research. IEEE Computational Intelligence Magazine, 5(4), 13-18. https://doi.org/10.1109/MCI.2010.938364

Bolton, D. K., & Friedl, M. A. (2013). Forecasting crop yield using remotely sensed vegetation indices and crop phenology metrics. Agricultural and Forest Meteorology, 173, 74-84. https://doi.org/10.1016/j.agrformet.2013.01.007

Bruzzone, L., Chi, M., & Marconcini, M. (2005). Transductive SVMs for semisupervised classification of hyperspectral data. In Proceedings of the 2005 IEEE International Geoscience and Remote Sensing Symposium (IGARSS '05) (4 pp.). IEEE. https://doi.org/10.1109/IGARSS.2005.1526130

Esch, T., Metz, A., Marconcini, M., & Keil, M. (2014). Combined use of multi-seasonal high and medium resolution satellite imagery for parcel-related mapping of cropland and grassland. International Journal of Applied Earth Observation and Geoinformation, 28, 230-237. https://doi.org/10.1016/j.jag.2013.12.007

Gallego, J., Kravchenko, A. N., Kussul, N. N., Skakun, S. V., Shelestov, A. Y., & Grypych, Y. A. (2012). Efficiency assessment of different approaches to crop classification based on satellite and ground observations. Journal of Automation and Information Sciences, 44(5), 67-80. https://doi.org/10.1615/JAutomatInfScien.v44.i5.70

Gao, F., Anderson, M. C., Zhang, X., Yang, Z., Alfieri, J. G., Kustas, W. P., Mueller, R., Johnson, D. M., & Prueger, J. H. (2017). Toward mapping crop progress at field scales through fusion of Landsat and MODIS imagery. Remote Sensing of Environment, 188, 9-25. https://doi.org/10.1016/j.rse.2016.11.004

Hedhli, I., Moser, G., Zerubia, J., & Serpico, S. B. (2016). A new cascade model for the hierarchical joint classification of multitemporal and multiresolution remote sensing data. IEEE Transactions on Geoscience and Remote Sensing, 54(11), 6333-6348. https://doi.org/10.1109/TGRS.2016.2580321

Jayanth, J., Shalini, V. S., Kumar, T. A., & Koliwad, S. (2020). Classification of field-level crop types with a time series satellite data using deep neural network. In D. Hemanth (Ed.), Artificial Intelligence Techniques for Satellite Image Analysis (Vol. 24, pp. 49-67). Cham, Switzerland: Springer. https://doi.org/10.1007/978-3-030-24178-0_3

Kussul, N., Lavreniuk, M., Skakun, S., & Shelestov, A. (2017). Deep learning classification of land cover and crop types using remote sensing data. IEEE Geoscience and Remote Sensing Letters, 14(5), 778-782. https://doi.org/10.1109/LGRS.2017.2681128

LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521, 436-444. https://doi.org/10.1038/nature14539

Li, Y., Zhang, H., & Shen, Q. (2017). Spectral–spatial classification of hyperspectral imagery with 3D convolutional neural network. Remote Sensing, 9(1), 67. https://doi.org/10.3390/rs9010067

Löw, F., Michel, U., Dech, S., & Conrad, C. (2013). Impact of feature selection on the accuracy and spatial uncertainty of per-field crop classification using support vector machines. ISPRS Journal of Photogrammetry and Remote Sensing, 85, 102-119. https://doi.org/10.1016/j.isprsjprs.2013.08.007

Mathur, A., & Foody, G. M. (2008). Crop classification by support vector machine with intelligently selected training data for an operational application. International Journal of Remote Sensing, 29(8), 2227-2240. https://doi.org/10.1080/01431160701395203

Mei, S., Ji, J., Geng, Y., Zhang, Z., Li, X., & Du, Q. (2019). Unsupervised spatial–spectral feature learning by 3D convolutional autoencoder for hyperspectral classification. IEEE Transactions on Geoscience and Remote Sensing, 57(9), 6808-6820. https://doi.org/10.1109/TGRS.2019.2908756

Omkar, S. N., Senthilnath, J., Mudigere, D., & Kumar, M. M. (2008). Crop classification using biologically-inspired techniques with high resolution satellite image. Journal of the Indian Society of Remote Sensing, 36, 175-182. https://doi.org/10.1007/s12524-008-0018-y

Radford, A., Metz, L., & Chintala, S. (2015). Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint. arXiv:1511.06434. https://doi.org/10.48550/arXiv.1511.06434

Published

17-10-2023

How to Cite

Jayanth, J., Ravikiran, H. K., & Madhu, K. M. (2023). Classification of Crops through Self-Supervised Decomposition for Transfer Learning. Journal of Aridland Agriculture, 9, 81–91. https://doi.org/10.25081/jaa.2023.v9.8566
