Research Article
Predicting Depression in Women Using Deep Learning Techniques
Issue:
Volume 2, Issue 3, June 2026
Pages:
189-195
Received:
23 February 2026
Accepted:
3 March 2026
Published:
12 May 2026
DOI:
10.11648/j.scif.20260203.11
Abstract: Depression is a significant global health issue with a notably higher prevalence in women. However, many predictive models using artificial intelligence (AI) overlook gender-specific symptom patterns, limiting their sensitivity and effectiveness for female populations. This study addresses this gap by developing and evaluating a multimodal, gender-specific deep learning framework designed to predict depression exclusively in women. Leveraging the female subset of the Distress Analysis Interview Corpus (DAIC-WOZ) dataset, the study utilizes a late-fusion architecture that integrates four distinct data streams: textual transcripts, acoustic features, visual facial cues, and tabular clinical data (PHQ-8 scores). The model employs a specialized neural network branch for each modality: a Transformer (DistilBERT) for text, a bidirectional LSTM (BiLSTM) for audio, a temporal CNN for visual sequences, and a multi-layer perceptron (MLP) for tabular data. The branch embeddings are then concatenated for the final prediction. The results demonstrate the superior performance of the multimodal approach, which achieves an F1-score of 0.89 and an ROC-AUC of 0.92, significantly outperforming unimodal baselines. Ablation studies revealed that textual data was the most influential modality, with its removal causing a performance degradation of over 15% in the F1-score. Acoustic features were identified as the second most critical predictor, underscoring the importance of both linguistic content and vocal prosody.
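To make the late-fusion design concrete, the following is a minimal PyTorch sketch of an architecture like the one the abstract describes: one encoder per modality, concatenation of the per-modality embeddings, and a shared classification head. All dimensions, module names (e.g. LateFusionDepressionModel), and the assumption that DistilBERT pooled outputs are precomputed upstream are illustrative choices, not details taken from the paper.

```python
import torch
import torch.nn as nn

class LateFusionDepressionModel(nn.Module):
    """Hypothetical late-fusion sketch: per-modality encoders produce
    fixed-size embeddings that are concatenated for classification."""

    def __init__(self, text_dim=768, audio_dim=40, visual_dim=136,
                 tabular_dim=8, embed_dim=128):
        super().__init__()
        # Text branch: assume a DistilBERT pooled output (768-d) is
        # computed upstream; project it to the shared embedding size.
        self.text_proj = nn.Linear(text_dim, embed_dim)
        # Audio branch: BiLSTM over frame-level acoustic features.
        self.audio_lstm = nn.LSTM(audio_dim, embed_dim // 2,
                                  batch_first=True, bidirectional=True)
        # Visual branch: temporal CNN over per-frame facial features.
        self.visual_cnn = nn.Sequential(
            nn.Conv1d(visual_dim, embed_dim, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        # Tabular branch: small MLP over clinical variables.
        self.tabular_mlp = nn.Sequential(
            nn.Linear(tabular_dim, embed_dim),
            nn.ReLU(),
        )
        # Fusion head: concatenated embeddings -> binary logit.
        self.classifier = nn.Sequential(
            nn.Linear(4 * embed_dim, embed_dim),
            nn.ReLU(),
            nn.Linear(embed_dim, 1),
        )

    def forward(self, text_emb, audio_seq, visual_seq, tabular):
        t = self.text_proj(text_emb)                  # (B, E)
        _, (h, _) = self.audio_lstm(audio_seq)        # h: (2, B, E/2)
        a = torch.cat([h[0], h[1]], dim=-1)           # (B, E)
        v = self.visual_cnn(
            visual_seq.transpose(1, 2)).squeeze(-1)   # (B, E)
        x = self.tabular_mlp(tabular)                 # (B, E)
        fused = torch.cat([t, a, v, x], dim=-1)       # (B, 4E)
        return self.classifier(fused)                 # depression logit
```

In a late-fusion design of this kind, each modality is encoded independently and only the final embeddings interact, which is what makes the ablation analysis reported in the abstract straightforward: dropping a modality amounts to removing its branch from the concatenation.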