📰 Publications

† : Equal Contribution

GraphCFC

GraphCFC: A directed graph based cross-modal feature complementation approach for multimodal conversational emotion recognition

Jiang Li, Xiaoping Wang, Guoqing Lv, Zhigang Zeng

GraphCFC effectively extracts contextual and interactive information from multimodal conversations. By employing multiple subspace extractors and a pair-wise cross-modal complementary (PairCC) strategy, GraphCFC alleviates the heterogeneity gap in multimodal fusion and extracts diverse information from multimodal dialogue graphs. The GAT-MLP layer mitigates the over-smoothing issue in GNNs and offers a new structure for multimodal learning. By representing conversations as multimodal directed graphs and encoding the various types of edges extracted from these graphs, the GAT-MLP layer can precisely select crucial contextual and interactive information, as sketched below.
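The following is a minimal, self-contained sketch of a GAT-MLP-style block: multi-head graph attention over a directed utterance graph, followed by an MLP, each wrapped in a residual connection with LayerNorm (a common recipe for curbing over-smoothing). The class name `GATMLPLayer`, the head count, and the residual/normalization layout are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GATMLPLayer(nn.Module):
    """Illustrative GAT-MLP-style block (assumed structure, not the
    paper's exact code): graph attention + MLP, each with a residual
    connection and LayerNorm to help counteract over-smoothing."""

    def __init__(self, dim: int, heads: int = 4, dropout: float = 0.1):
        super().__init__()
        assert dim % heads == 0
        self.heads, self.head_dim = heads, dim // heads
        self.w = nn.Linear(dim, dim, bias=False)
        # Per-head additive-attention vectors, split into target/source
        # halves (the scoring scheme used by the original GAT).
        self.a_tgt = nn.Parameter(torch.empty(heads, self.head_dim))
        self.a_src = nn.Parameter(torch.empty(heads, self.head_dim))
        nn.init.xavier_uniform_(self.a_tgt)
        nn.init.xavier_uniform_(self.a_src)
        self.norm1, self.norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(),
            nn.Dropout(dropout), nn.Linear(4 * dim, dim),
        )
        self.drop = nn.Dropout(dropout)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (N, dim) utterance-node features.
        # adj: (N, N) bool; adj[i, j] = True if a directed edge j -> i exists.
        N = x.size(0)
        adj = adj | torch.eye(N, dtype=torch.bool, device=x.device)  # self-loops
        h = self.w(x).view(N, self.heads, self.head_dim)             # (N, H, D)
        e = F.leaky_relu(
            (h * self.a_tgt).sum(-1).unsqueeze(1)      # score of target node i
            + (h * self.a_src).sum(-1).unsqueeze(0),   # score of source node j
            negative_slope=0.2,
        )                                               # (N, N, H)
        # Attend only over in-neighbors; softmax over the source axis.
        alpha = torch.softmax(
            e.masked_fill(~adj.unsqueeze(-1), float("-inf")), dim=1
        )
        attn = torch.einsum("ijh,jhd->ihd", alpha, h).reshape(N, -1)
        x = self.norm1(x + self.drop(attn))             # residual around attention
        return self.norm2(x + self.mlp(x))              # residual around MLP

# Usage sketch: six utterance nodes with random directed edges.
layer = GATMLPLayer(dim=128)
out = layer(torch.randn(6, 128), torch.rand(6, 6) > 0.5)  # -> (6, 128)
```

In practice, one such layer would be instantiated per edge type of the multimodal directed graph (or the attention scores conditioned on edge-type encodings); the single-relation version above is kept deliberately small for clarity.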

Journal Papers

Conference Papers