Abstract
Deepfake voice refers to artificially generated or manipulated audio that mimics a person's voice, typically created using advanced AI techniques. These synthetic voices can imitate a speaker so convincingly that they are nearly indistinguishable from genuine recordings. We present an advanced method for deepfake voice detection built around a custom model named MFCC-GNB XtractNet. Mel-Frequency Cepstral Coefficients (MFCC) are extracted from audio samples and serve as the foundational features for distinguishing genuine from fake voices. These MFCC features are then enhanced through a transformation process that employs a Gaussian Naive Bayes (GNB) model in conjunction with Non-Negative Matrix Factorization (NMF), yielding a more discriminative feature set for subsequent analysis. The transformed features are fed to our developed model, MFCC-GNB XtractNet, to identify deepfake voices. To rigorously evaluate the effectiveness of our approach, we deployed a range of machine learning models, including Random Forest (RF), K-Nearest Neighbors Classifier (KNC), Logistic Regression (LR), and Gaussian Naive Bayes (GNB). Each model's performance is assessed through k-fold cross-validation, ensuring a robust evaluation across multiple data splits. Additionally, we performed a computational cost analysis to measure the efficiency of the models in terms of training time and resource usage. The results of our experiments were highly promising, with our MFCC-GNB XtractNet + GNB model achieving an accuracy of 99.93%. This exceptional performance underscores the model's ability to effectively distinguish between real and deepfake voices, setting a new benchmark in the field of voice authentication.
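The pipeline begins with MFCC extraction from audio clips. A minimal sketch of this step, assuming librosa, is shown below; the paper does not specify the coefficient count, framing parameters, or pooling strategy, so the values here are illustrative, and `sample_voice.wav` is a hypothetical input file.

```python
import librosa
import numpy as np

def extract_mfcc(path: str, n_mfcc: int = 13) -> np.ndarray:
    """Return one fixed-length MFCC feature vector per audio clip."""
    signal, sr = librosa.load(path, sr=None)  # keep the native sample rate
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)
    # Average over time frames to get a single vector per clip (illustrative).
    return mfcc.mean(axis=1)

features = extract_mfcc("sample_voice.wav")  # hypothetical input file
```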
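The abstract describes a feature transformation that combines a GNB model with NMF but does not detail how the two are composed. One plausible reading, sketched with scikit-learn below, concatenates NMF components with GNB class probabilities; the component count and the non-negativity rescaling step are assumptions, not values from the paper.

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.naive_bayes import GaussianNB
from sklearn.preprocessing import MinMaxScaler

def transform_features(X_train, y_train, X_test):
    # NMF requires non-negative input; MFCCs can be negative, so rescale first.
    scaler = MinMaxScaler()
    X_train_nn = scaler.fit_transform(X_train)
    # Clip guards against test values falling below the training minimum.
    X_test_nn = np.clip(scaler.transform(X_test), 0.0, None)

    nmf = NMF(n_components=8, init="nndsvda", max_iter=500, random_state=0)
    W_train = nmf.fit_transform(X_train_nn)
    W_test = nmf.transform(X_test_nn)

    # Append GNB class-probability scores as extra discriminative features.
    gnb = GaussianNB().fit(X_train, y_train)
    return (np.hstack([W_train, gnb.predict_proba(X_train)]),
            np.hstack([W_test, gnb.predict_proba(X_test)]))
```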
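The evaluation protocol over the four baseline classifiers can be reproduced in outline with scikit-learn's `cross_validate`, which also reports per-fold fit times and so covers the training-time side of the computational cost analysis. The fold count, hyperparameters, and synthetic placeholder data below are assumptions; in practice `X` and `y` would be the transformed features and real/fake labels.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_validate
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

# Placeholder data so the sketch runs standalone; in practice X and y come
# from the MFCC + GNB/NMF transformation sketched above.
X, y = make_classification(n_samples=200, n_features=21, random_state=0)

models = {
    "RF": RandomForestClassifier(n_estimators=100, random_state=0),
    "KNC": KNeighborsClassifier(n_neighbors=5),
    "LR": LogisticRegression(max_iter=1000),
    "GNB": GaussianNB(),
}

for name, model in models.items():
    scores = cross_validate(model, X, y, cv=5, scoring="accuracy")
    print(f"{name}: accuracy={scores['test_score'].mean():.4f}, "
          f"mean fit time={scores['fit_time'].mean():.3f}s")
```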
| Original language | English |
| --- | --- |
| Pages (from-to) | 197442-197453 |
| Number of pages | 12 |
| Journal | IEEE Access |
| Volume | 12 |
| DOIs | |
| Publication status | Published - 23 Dec 2024 |
Keywords
- Deep fake voice
- MFCC-GNB XtractNet
- machine learning
- transfer learning