RALI-OLST-ILFC Seminars

On Wednesdays at 11:30 a.m. (Montreal time), we hold a one-hour seminar on a topic in natural language processing or linguistics. It is typically offered in hybrid mode (in person & videoconference). Once a month, the seminar is organized by the French research group Linguistique Informatique, Formelle et de Terrain.

Detection of Persuasion Techniques in Memes

Shamanth Nayak (nayak (dot) shamanth2000 <at> gmail (dot) com)

Computer Science & Software Engineering, Concordia University

Wednesday, November 13, 2024, at 11:30 a.m.

In person, with simultaneous broadcast on Zoom (meeting details below)


Memes, which are user-generated content in the form of images and text, have become a powerful medium for shaping public discourse. Given their increasing influence, detecting persuasive techniques embedded within these multimodal forms of communication is crucial for identifying propaganda and combating online disinformation. Persuasion techniques in memes often combine rhetorical elements from both text and image, creating unique challenges for computational models.

This thesis seeks to determine the impact of multimodal integration on the detection of persuasion techniques in memes and to evaluate how well multimodal models perform compared to single-modality models in this classification task. To achieve this, we developed and fine-tuned several models for text-based and multimodal persuasion detection using both pre-trained language models (BERT, XLM-RoBERTa, mBERT) and image-based models (CLIP, ResNet, VisualBERT).

A key contribution of this work is the implementation of paraphrase-based data augmentation, which helped address class imbalance and improved the performance of text-only models. For multimodal approaches, we explored both early fusion and cross-modal alignment strategies. Surprisingly, cross-modal alignment underperformed, likely due to challenges in aligning abstract textual and visual cues. In contrast, the early fusion approach of combining text and image embeddings showed the highest performance, significantly outperforming text-only and image-only models. We also conducted zero-shot experiments with GPT-4 to benchmark its effectiveness in multimodal persuasion detection. Although GPT-4 demonstrated potential in zero-shot settings, the fine-tuned models still outperformed it, particularly when leveraging multimodal integration. This research advances the understanding of multimodal learning for detecting persuasion techniques, with broader implications for disinformation detection in online content.
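For readers unfamiliar with early fusion, the sketch below illustrates the general idea behind concatenating text and image embeddings before classification. It is only a minimal illustration, not the speaker's actual implementation: the encoder checkpoints (bert-base-uncased, openai/clip-vit-base-patch32) and the label count are assumptions chosen for the example.

# Minimal early-fusion sketch (illustrative assumptions, not the thesis code):
# encode the meme caption with BERT and the meme image with CLIP's vision encoder,
# concatenate the two pooled embeddings, and classify persuasion techniques.

import torch
import torch.nn as nn
from PIL import Image
from transformers import BertModel, BertTokenizer, CLIPVisionModel, CLIPImageProcessor

class EarlyFusionClassifier(nn.Module):
    def __init__(self, num_labels: int):
        super().__init__()
        self.text_encoder = BertModel.from_pretrained("bert-base-uncased")
        self.image_encoder = CLIPVisionModel.from_pretrained("openai/clip-vit-base-patch32")
        fused_dim = (self.text_encoder.config.hidden_size
                     + self.image_encoder.config.hidden_size)
        self.classifier = nn.Linear(fused_dim, num_labels)

    def forward(self, input_ids, attention_mask, pixel_values):
        # Pooled text and image representations, fused by simple concatenation.
        text_emb = self.text_encoder(input_ids=input_ids,
                                     attention_mask=attention_mask).pooler_output
        image_emb = self.image_encoder(pixel_values=pixel_values).pooler_output
        fused = torch.cat([text_emb, image_emb], dim=-1)  # early fusion
        return self.classifier(fused)  # logits over persuasion-technique labels

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
processor = CLIPImageProcessor.from_pretrained("openai/clip-vit-base-patch32")
model = EarlyFusionClassifier(num_labels=20)  # placeholder label count

text_inputs = tokenizer("meme caption goes here", return_tensors="pt",
                        truncation=True, padding=True)
image_inputs = processor(images=Image.new("RGB", (224, 224)), return_tensors="pt")
logits = model(text_inputs["input_ids"], text_inputs["attention_mask"],
               image_inputs["pixel_values"])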

Recording: https://umontreal.ca.panopto.com/Panopto/Pages/Viewer.aspx?id=9f1b2254-1aba-4632-8e28-b227013a8838



Join us on Zoom using this URL.
Meeting ID: 916 9097 5818, Passcode: 343273.
Phone numbers: https://umontreal.zoom.us/u/abitNZzLg.
One tap mobile: +14388097799,,91690975818#,,,,,,0#,,343273#


Follow this link to subscribe to the RALI-OLST mailing list.
http://rali.iro.umontreal.ca/rali/?q=fr/node/1631

List of all seminars by year:

1991 1992 1993 1994 1995 1997 1998 1999 2000 2001 2002 2003 2004 2005 2006 2007 2008 2009 2010 2011 2012 2013 2014 2015 2016 2017 2018 2019 2020 2021 2022 2023 2024 2025