Article · Tier 2 (Original research) · Detection Methods · Environmental Sources · Policy & Risk

UAV imaging and deep learning based method for predicting residual film in cotton field plough layer

Frontiers in Plant Science · 2022 · 11 citations (via OpenAlex)
Fasong Qiu, Zhiqiang Zhai, Yulin Li, Jiankang Yang, Haiyuan Wang, Ruoyu Zhang

Summary

Researchers developed a method combining UAV imaging with three deep learning frameworks (LinkNet, FCN, and DeepLabv3) to segment and predict residual plastic film content in the plough layer of cotton fields, offering a lower-cost and higher-efficiency alternative to traditional manual sampling for agricultural plastic pollution monitoring.

This paper proposes a method for predicting residual film content in the cotton field plough layer based on UAV imaging and deep learning, addressing the high labour intensity, low efficiency, and high cost of traditional residual-film monitoring. Images of residual film on the soil surface of cotton fields were collected by UAV, and residual film content in the plough layer was obtained by manual sampling. Segmentation models for residual film in the cotton field images were built on three deep learning frameworks: LinkNet, FCN, and DeepLabv3. After comparing the segmentation results, DeepLabv3 was selected as the best model, and the area of residual film was computed from its output. A linear regression model was then built between the residual film coverage area on the cotton field surface and the residual film content in the plough layer. The coefficient of determination (R²), root mean square error, and average relative error of the prediction were 0.83, 0.48, and 11.06%, respectively, indicating that residual film content in the plough layer can be predicted quickly and accurately from UAV imaging and deep learning. This study provides technical support for monitoring and evaluating residual film pollution in the cotton field plough layer.
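The final prediction step described above is a simple linear regression from segmented surface coverage area to plough-layer content, evaluated with R², RMSE, and mean relative error. Below is a minimal sketch of that step with hypothetical numbers (the paper's data and fitted coefficients are not reproduced here); the variable names and example values are assumptions for illustration only.

```python
import numpy as np

# Hypothetical example values: surface residual-film coverage area per
# sampling plot (from segmented UAV images) paired with measured
# plough-layer residual film content (from manual sampling).
# These numbers are NOT from the paper.
surface_area = np.array([0.8, 1.2, 1.9, 2.4, 3.1, 3.8])
plough_content = np.array([4.1, 5.0, 6.8, 7.9, 9.6, 11.2])

# Fit the linear regression y = a*x + b by least squares.
a, b = np.polyfit(surface_area, plough_content, 1)
predicted = a * surface_area + b

# Goodness-of-fit metrics analogous to those reported in the paper.
ss_res = np.sum((plough_content - predicted) ** 2)
ss_tot = np.sum((plough_content - plough_content.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot                                  # coefficient of determination
rmse = np.sqrt(np.mean((plough_content - predicted) ** 2))  # root mean square error
mre = np.mean(np.abs(plough_content - predicted) / plough_content) * 100  # avg relative error, %

print(f"R2={r2:.3f}, RMSE={rmse:.3f}, MRE={mre:.2f}%")
```

With a calibrated model of this form, a new field's plough-layer content can be estimated from the UAV-derived surface coverage area alone, avoiding exhaustive manual sampling.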


More Papers Like This

Article Tier 2

Evaluation of residual plastic film pollution in pre-sowing cotton field using UAV imaging and semantic segmentation

Researchers proposed a UAV-based imaging method combined with a modified U-Net semantic segmentation model to evaluate residual plastic film pollution in pre-sowing cotton fields, collecting images from different heights under varying weather conditions to accurately map mulch film remnants.

Article Tier 2

Identification of Plastic Mulch in Cotton Fields Using UAV-Based Hyperspectral Data and Deep Learning Semantic Segmentation

Plastic mulch film is widely used in agriculture to improve crop yields, but residual plastic in fields contributes to soil microplastic contamination, and identifying where it remains after harvest is difficult at scale. This study used drone-mounted hyperspectral cameras combined with deep-learning image analysis to map plastic mulch coverage in cotton fields in China, achieving up to 80% accuracy in distinguishing plastic from soil and crop canopy. Accurate mapping of residual mulch is a critical first step toward targeted plastic removal and reducing the flow of agricultural microplastics into soil and water.

Article Tier 2

Application of hyperspectral and deep learning in farmland soil microplastic detection

Hyperspectral imaging combined with deep learning was applied to detect and classify microplastics in farmland soil, offering a non-destructive, rapid alternative to time-consuming chemical extraction methods. The model achieved high classification accuracy across polymer types, demonstrating the potential for field-deployable microplastic monitoring in agricultural settings.

Article Tier 2

A Deep Learning Model for Automatic Plastic Mapping Using Unmanned Aerial Vehicle (UAV) Data

Researchers applied a deep learning semantic segmentation model (ResUNet50 based on U-Net architecture) to UAV orthophotos to automatically map floating plastic debris, achieving F1-scores of 0.86-0.92 for specific plastic types including oriented polystyrene, nylon, and PET. Classification accuracy decreased with lower spatial resolution, with 4 mm resolution providing optimal performance for distinguishing plastic types.

Article Tier 2

Combining YOLOv7-SPD and DeeplabV3+ for Detection of Residual Film Remaining on Farmland

Researchers developed a hybrid computer vision method combining YOLOv7-SPD object detection and DeepLabV3+ image segmentation to identify and quantify plastic film residues left in farmland soil. The improved model achieved 93.72% average precision and 87.62% recall for detection, with image segmentation reaching 91.55% mean IoU, demonstrating strong potential for automating agricultural residue management.
