Article · Tier 2 · Detection Methods · Marine & Wildlife

Estimating precision and accuracy of automated video post-processing: A step towards implementation of AI/ML for optics-based fish sampling

Frontiers in Marine Science · 2023 · 11 citations · Score: 50
Jack Prior, Matthew D. Campbell, Matthew Dawkins, Paul F. Mickle, Robert Moorhead, Simegnew Yihunie Alaba, Chiranjibi Shah, Joseph Salisbury, Kevin R. Rademacher, A. Paul Felts, Farron Wallace

Summary

Researchers developed automated computer vision models for identifying commercially important Gulf of Mexico fish species from video surveys, assessing precision and accuracy as a step toward replacing manual review with AI-based processing.

The increased need to monitor vital fish habitat has driven a proliferation of camera-based observation methods and advancements in camera and processing technology. Automated image analysis through computer vision algorithms has emerged as a tool for fisheries to address big-data needs, reduce human intervention, lower costs, and improve timeliness. The models developed in this study aim to implement such automated image analysis for commercially important Gulf of Mexico fish species and habitats. Further, this study proposes adapting comparative otolith-aging methods and metrics for gauging model performance by comparing automated counts to validation-set counts, in addition to traditional AI/ML metrics such as mean average precision (mAP).

To evaluate model performance, we calculated the percentage of stations matching ground-truthed counts, ratios of false-positive and false-negative detections, and the coefficient of variation (CV) for each species over a range of outputs filtered by model-generated confidence thresholds (CTs) for each detected and classified fish. Model performance generally improved with more annotations per species, and false-positive detections were greatly reduced in a second iteration of model training. For all species and model combinations, false positives were easily identified and removed by raising the CT to classify more restrictively. Occluded fish images and reduced performance were most prevalent for schooling species, whereas for other species a lack of training data was likely the limiting factor. Of the 23 species examined, only 7 achieved a CV below 25%; for most species, the training library will therefore need improvement, and next steps include a queried-learning approach to bring balance and focus to model training.

Importantly, for select species such as Red Snapper (Lutjanus campechanus), current models are sufficiently precise to begin use in filtering videos for automated, rather than fully manual, processing. Adapting the otolith-aging QA/QC process to this workflow is a first step toward letting researchers track model performance through time, giving those who engage with the models, raw data, and derived products confidence in analyses and the resulting management decisions.
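The evaluation scheme described above can be sketched in a few lines of Python. This is an illustrative mock-up, not the authors' code: the station names, counts, and confidence values are invented, and the per-station CV follows the standard two-reader otolith-aging formula (sample standard deviation of the automated and manual counts divided by their mean), which the paper adapts for count comparison.

```python
import statistics

# Hypothetical per-station data: a manual ground-truth count plus model
# detections as (confidence, is_true_positive) pairs. All values illustrative.
stations = {
    "S1": {"manual": 12, "detections": [(0.95, True)] * 11 + [(0.40, False)] * 3},
    "S2": {"manual": 5,  "detections": [(0.88, True)] * 5 + [(0.55, False)] * 1},
    "S3": {"manual": 20, "detections": [(0.90, True)] * 18 + [(0.35, False)] * 4},
}

def evaluate(stations, ct):
    """Filter detections at confidence threshold `ct`, then compare the
    resulting automated counts to manual counts, otolith-aging style."""
    matches, cvs, false_pos, false_neg = 0, [], 0, 0
    for s in stations.values():
        kept = [d for d in s["detections"] if d[0] >= ct]
        auto = len(kept)
        false_pos += sum(1 for d in kept if not d[1])          # wrong fish retained
        false_neg += sum(1 for d in s["detections"]
                         if d[1] and d[0] < ct)                # real fish discarded
        if auto == s["manual"]:
            matches += 1
        pair = [auto, s["manual"]]
        mean = statistics.mean(pair)
        if mean > 0:
            # Two-"reader" CV: sd of (automated, manual) over their mean, in %.
            cvs.append(statistics.stdev(pair) / mean * 100)
    return {
        "pct_match": 100 * matches / len(stations),
        "mean_cv": statistics.mean(cvs) if cvs else 0.0,
        "false_pos": false_pos,
        "false_neg": false_neg,
    }

# Sweep the confidence threshold, as in the paper's filtered-output analysis.
for ct in (0.3, 0.6, 0.9):
    print(ct, evaluate(stations, ct))
```

Raising the threshold in this toy example removes all false positives before it starts discarding true fish, mirroring the paper's finding that false positives were easily removed by classifying more restrictively, at the eventual cost of false negatives.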


More Papers Like This

Article · Tier 2

Assessment of sustainable baits for passive fishing gears through automatic fish behavior recognition

Researchers developed biodegradable cockle-based fishing baits and used machine learning to automatically track and classify fish behavior from underwater video, finding that while the bio-baits attracted fewer fish than natural bait initially, they sustained fish interest longer. This work offers a lower-waste alternative to conventional fishing bait while advancing automated tools for monitoring fish behavior.

Article · Tier 2

Data Study Group Final Report: Centre for Environment, Fisheries and Aquaculture Science

Machine learning was applied to the challenge of automatically classifying plankton species from underwater images collected by fisheries monitoring systems. The AI classifier could identify dozens of plankton categories with high accuracy, reducing the need for time-consuming manual identification. Automated plankton monitoring improves understanding of marine food web health and ecosystem responses to environmental change.

Article · Tier 2

A Machine Learning Approach To Microplastic Detection And Quantification In Aquatic Environments

This study developed a machine learning approach for detecting and quantifying microplastics in aquatic environments, demonstrating that automated image analysis can improve throughput and accuracy compared to manual microscopic counting for environmental monitoring applications.

Article · Tier 2

Improving YOLOv11 for marine water quality monitoring and pollution source identification

Researchers improved the YOLOv11 computer vision model to better detect and identify marine pollution sources, including oil spills, debris, and turbid water, in complex underwater environments. The enhanced model achieved higher detection accuracy and faster processing speeds compared to the standard version. The study demonstrates that advanced AI-based monitoring tools can meaningfully improve our ability to track and respond to marine pollution in real time.

Article · Tier 2

Aquatic Trash Detection and Classification: a Machine Learning and Deep Learning Perspective

This review examines machine learning and deep learning approaches for detecting and classifying aquatic trash in waterways, evaluating how computer vision algorithms trained on underwater and surface imagery can automate pollution monitoring for faster, more scalable ocean cleanup.
