
Multimodal Biometrics via Discriminant Correlation Analysis
Developed detectors that extract, fuse, and classify voice and face features from the MOBIO dataset. Models were trained with traditional machine learning classifiers: SVM, LDA, QDA, RF, and k-NN. Our team's research used Python, a DNN, Scikit-learn, Librosa, OpenCV, NumPy, and Pandas. The pipeline used feature-level fusion, sketched below. The purpose of the research was to evaluate how well multimodal biometrics perform compared to a single-modality biometric, and to compare our results against prior biometrics research.
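
A minimal sketch of the fusion-and-classification step is shown below. Synthetic arrays stand in for the face features (e.g., from OpenCV) and voice features (e.g., Librosa MFCC statistics) extracted from MOBIO, and plain concatenation stands in for Discriminant Correlation Analysis; all shapes, sample counts, and parameters here are illustrative, not the project's actual configuration.

```python
# Sketch: feature-level fusion of face and voice vectors, then a comparison
# of the five traditional classifiers named above. Placeholder random data
# is used so the example runs stand-alone; in the real pipeline the feature
# matrices would come from OpenCV (face) and Librosa (voice) on MOBIO.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.discriminant_analysis import (
    LinearDiscriminantAnalysis, QuadraticDiscriminantAnalysis)
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_samples, n_subjects = 600, 10
face_feats = rng.normal(size=(n_samples, 16))   # placeholder face features
voice_feats = rng.normal(size=(n_samples, 8))   # placeholder voice features
labels = rng.integers(0, n_subjects, size=n_samples)

# Feature-level fusion: combine the per-sample face and voice vectors
# (simple concatenation here; DCA would learn correlated projections first).
fused = np.hstack([face_feats, voice_feats])

X_train, X_test, y_train, y_test = train_test_split(
    fused, labels, test_size=0.3, random_state=0)

classifiers = {
    "SVM": SVC(kernel="rbf"),
    "LDA": LinearDiscriminantAnalysis(),
    "QDA": QuadraticDiscriminantAnalysis(),
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
    "k-NN": KNeighborsClassifier(n_neighbors=5),
}

# Train each classifier on the fused features and report test accuracy.
for name, clf in classifiers.items():
    clf.fit(X_train, y_train)
    print(name, accuracy_score(y_test, clf.predict(X_test)))
```

The same loop can be run on the face-only and voice-only feature matrices to obtain the single-modality baselines that the fused results are compared against.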