
Cross-modal generative models for multi-modal plastic sorting

Journal of Cleaner Production, 2023. 17 citations (via OpenAlex).
Edward Ren Kai Neo, Jonathan Sze Choong Low, Vannessa Goodship, Stuart R. Coles, Kurt Debattista

Summary

Researchers created a new multi-sensor database of plastic spectral data and developed an AI method called Spectral Conversion Autoencoders (SCAE) that generates synthetic data of one sensor type from the data of another, compensating for missing sensors. Combining sensor modalities improved automated plastic sorting accuracy from 93.3% to 97.0%, and SCAE-generated data reached 96.3% accuracy using only a single sensor, potentially enabling smarter and cheaper plastic recycling systems that need one sensor instead of many.

Automated sorting through chemometric analysis of plastic spectral data could be a key strategy towards improving plastic waste management. Deep learning is a promising chemometric tool, but further development through multi-modal deep learning has been limited by a lack of data availability. A new Multi-modal Plastic Spectral Database (MMPSD), consisting of Fourier Transform Infrared (FTIR), Raman and Laser-induced Breakdown Spectroscopy (LIBS) data for each sample in the database, is introduced in this work. MMPSD serves as the basis for a novel cross-modal generative modelling technique termed Spectral Conversion Autoencoders (SCAE), which generates synthetic data of one modality from data of another. SCAE is advantageous over traditional generative models like Variational Autoencoders (VAE), as it can generate class-specific synthetic data without the need to train a separate model for each data class. MMPSD also facilitated the exploration of multi-modal deep learning, which improved classification accuracy from 0.933 to 0.970 compared to a uni-modal approach. SCAE can further be combined with multi-modal methods to achieve an accuracy of 0.963 while still using only a single sensor, reducing costs; this enables multi-modal augmentation from the FTIR sensors already used in industrial sorting.
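The core idea the abstract describes — an autoencoder trained to map paired spectra of one modality onto another — can be sketched in a few lines. The following is a minimal illustrative sketch, not the authors' architecture: the data is synthetic, the dimensions are arbitrary, and a tiny linear encoder/decoder stands in for the real SCAE network.

```python
import numpy as np

# Sketch of spectral conversion: learn a mapping from a source modality
# (e.g. FTIR-like vectors) onto a paired target modality (e.g. Raman-like
# vectors). All data is synthetic; this is not the MMPSD database.
rng = np.random.default_rng(0)

n, d_in, d_latent, d_out = 200, 64, 8, 64

# Synthetic paired spectra: the target modality is an unknown low-rank
# transform of the source modality plus measurement noise.
X = rng.normal(size=(n, d_in))                        # source modality
A = rng.normal(size=(d_in, d_latent)) / np.sqrt(d_in)
B = rng.normal(size=(d_latent, d_out)) / np.sqrt(d_latent)
Y = X @ A @ B + 0.01 * rng.normal(size=(n, d_out))    # target modality

# Encoder/decoder weights of a linear conversion autoencoder.
W_enc = 0.1 * rng.normal(size=(d_in, d_latent))
W_dec = 0.1 * rng.normal(size=(d_latent, d_out))

lr, losses = 0.02, []
for step in range(1000):
    Z = X @ W_enc                  # encode source spectrum to a latent code
    Y_hat = Z @ W_dec              # decode the code into the target modality
    err = Y_hat - Y
    losses.append(float(np.mean(err ** 2)))
    # Gradient descent on the mean-squared conversion error.
    g_dec = (Z.T @ err) / n
    g_enc = (X.T @ (err @ W_dec.T)) / n
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc

print(f"conversion MSE: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

In the paper, SCAE is additionally class-conditional, generating class-specific spectra from a single model; one plausible way to extend a sketch like this would be concatenating a label encoding to the latent code, though that detail is an assumption rather than the paper's exact mechanism.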
