Cross-modal generative models for multi-modal plastic sorting
Summary
Researchers created a new multi-sensor database of plastic spectral data and developed a deep learning method called Spectral Conversion Autoencoders (SCAE), which generates synthetic data of one sensor modality from the data of another to compensate for a missing sensor. Multi-modal learning on this database improved automated plastic sorting accuracy from 93.3% to 97.0%, and combining SCAE with multi-modal methods reached 96.3% while using only a single sensor, potentially enabling smarter and cheaper plastic recycling systems.
Automated sorting through chemometric analysis of plastic spectral data could be a key strategy for improving plastic waste management. Deep learning is a promising chemometric tool, but further development through multi-modal deep learning has been limited by a lack of available data. A new Multi-modal Plastic Spectral Database (MMPSD), consisting of Fourier Transform Infrared (FTIR), Raman and Laser-Induced Breakdown Spectroscopy (LIBS) data for each sample, is introduced in this work. MMPSD serves as the basis for a novel cross-modal generative modeling technique termed Spectral Conversion Autoencoders (SCAE), which generates synthetic data of one modality from data of another. SCAE is advantageous over traditional generative models such as Variational Autoencoders (VAE), as it can generate class-specific synthetic data without training a separate model for each data class. MMPSD also facilitated the exploration of multi-modal deep learning, which improved classification accuracy from 0.933 with a uni-modal approach to 0.970. SCAE can further be combined with multi-modal methods to achieve an accuracy of 0.963 while still using a single sensor to reduce costs, enabling multi-modal augmentation from the FTIR sensors already used in industrial sorting.
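To make the cross-modal idea concrete, the following is a minimal sketch of a spectral conversion autoencoder: an encoder maps one modality (here a stand-in "FTIR" spectrum) to a latent code, and a decoder emits a synthetic spectrum of another modality ("Raman"). All dimensions, the linear architecture, and the synthetic paired data are illustrative assumptions, not the paper's actual SCAE implementation, which would use deeper networks trained on real MMPSD pairs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical spectrum lengths and latent size (illustrative only)
d_in, d_lat, d_out = 64, 8, 48

# Synthetic paired spectra: "Raman" is an unknown linear mixture of "FTIR"
# here, standing in for real paired samples from a multi-modal database.
X = rng.normal(size=(256, d_in))
M = rng.normal(size=(d_in, d_out)) / np.sqrt(d_in)
Y = X @ M

# Encoder/decoder weights (linear for brevity; a real SCAE would be deeper)
We = rng.normal(size=(d_in, d_lat)) * 0.1
Wd = rng.normal(size=(d_lat, d_out)) * 0.1

def forward(X):
    Z = X @ We           # encode the source-modality spectrum to a latent code
    return Z, Z @ Wd     # decode the code into a synthetic target-modality spectrum

# Train by gradient descent on the mean-squared conversion error
lr, losses = 1e-3, []
for step in range(500):
    Z, Y_hat = forward(X)
    err = Y_hat - Y
    losses.append(float(np.mean(err ** 2)))
    gWd = Z.T @ err * (2 / len(X))          # gradient w.r.t. decoder weights
    gWe = X.T @ (err @ Wd.T) * (2 / len(X)) # gradient w.r.t. encoder weights
    Wd -= lr * gWd
    We -= lr * gWe

print(f"conversion MSE: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

Once trained on paired spectra, such a converter can emit a synthetic second-modality spectrum for any sample measured with only one sensor, which is the property that lets a single-sensor system feed a multi-modal classifier.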