Facial Micro-Expression (FME) Workshop 2026

Pushing Boundaries in Temporal and Spatial Subtle Movement Analysis

This workshop focuses on advancing the computational analysis of facial micro-expressions (FMEs), spanning interdisciplinary fields and incorporating the latest techniques in machine learning, multimodal analysis, and more.

Intro

Facial micro-expressions (MEs) are brief, involuntary facial movements that occur when individuals experience emotions they attempt to suppress, often in high-stakes scenarios. With durations typically below 500 ms, MEs provide unique cues for hidden or subtle affect, yet remain difficult to spot and recognize computationally due to scarce labeled data, annotation ambiguity, and the need for fine-grained spatiotemporal modeling.

This workshop aims to bring together researchers across computer vision, multimedia, and affective computing to advance temporal and spatial subtle movement analysis. We particularly encourage work that leverages modern learning paradigms (e.g., self-supervised learning, multimodal learning, and foundation models) while addressing challenges in data, evaluation, and robustness.

Focus

  • Psychological mechanisms and applications of inferring mental states from subtle facial cues
  • Micro-expression spotting & recognition under limited supervision and real-world variability
  • Fine-grained spatiotemporal modeling of subtle facial motion in videos
  • Learning with scarce / noisy / inconsistent labels and annotation uncertainty
  • Multimodal ME analysis beyond RGB (e.g., NIR/IR, depth, audio, physiological signals)
  • Foundation models for subtle affect (VLMs / multimodal LLMs), including prompt-based and zero-shot setups
  • Datasets, benchmarks, and protocols that improve reproducibility and fair comparison

Call for Papers

Topics of Interest:

We invite original, unpublished submissions (including position papers and challenge/benchmark papers) on topics including but not limited to:

- Psychological Mechanisms of Emotion and Deception

  • Cognitive load and emotional arousal associated with emotion concealment (e.g., during deception) and their behavioral manifestations
  • The role of nonverbal leakage and micro-expressions in detecting deception
  • Cross-cultural variations in the psychological interpretation of ambiguous facial cues

- Micro-Expression Analysis

  • Micro-expression spotting, temporal localization, and onset/offset detection
  • Micro-expression recognition, intensity estimation, and fine-grained affect modeling
  • Motion representation learning (optical flow, strain, subtle dynamics, action units)
  • ME vs. macro-expression / subtle expression discrimination
  • Cross-dataset, cross-subject, and cross-cultural generalization
  • Robustness to head pose, illumination, occlusion, compression, and low-quality video

- Learning with Limited Supervision

  • Self-supervised / unsupervised learning for ME spotting and recognition
  • Weakly supervised, semi-supervised, and active learning for ME analysis
  • Few-shot, zero-shot, open-set, and continual learning for subtle expressions
  • Learning from noisy labels, annotator disagreement, and label uncertainty modeling
  • Domain adaptation, test-time adaptation, and personalization

- Multimodal, Multi-View, and 3D

  • Multimodal fusion: RGB + NIR/IR/thermal/depth/event cameras
  • Physiological signals for affect inference (e.g., heart rate from ECG/PPG, EEG, EMG)
  • Multi-view facial analysis and 3D/4D face modeling for subtle motion
  • Temporal synchronization and alignment across modalities
  • Multimodal benchmarks and evaluation protocols

- Foundation Models and Vision-Language Approaches

  • Vision-language models (VLMs) for expression understanding and ME reasoning
  • Multimodal LLMs for ME analysis with natural-language interaction
  • Prompt engineering, instruction tuning, and structured prompting for subtle cues
  • Fine-tuning / adapter-based tuning on facial expression corpora for ME sensitivity
  • Zero-shot / in-context learning for ME spotting and recognition
  • Model interpretability: distinguishing genuine ME understanding from superficial correlations

- Data, Benchmarks, and Reproducibility

  • New spontaneous ME datasets, annotations, and collection protocols
  • Benchmark challenges: metrics, splits, standardized evaluation, reproducible baselines
  • Synthetic data, simulation, augmentation, and data-centric ME learning
  • Bias, fairness, and demographic robustness analysis
  • Uncertainty-aware evaluation and confidence calibration

- Applications and Responsible Use

  • Affective computing, social signal processing, and human-centered AI
  • Mental state and wellbeing analysis (with appropriate ethical safeguards)
  • Human–computer interaction and assistive technologies
  • Ethical considerations, privacy, and responsible deployment of ME technologies

Note: The detailed workshop agenda will be announced after paper decisions and final scheduling.

Important Dates

  • Paper Submission Open: February 16, 2026
  • Paper Deadline: March 16, 2026
  • Paper Notification: April 13, 2026
  • Camera Ready: April 21, 2026 (aligned with the FG conference)
  • Workshop Date: May 25 or 29, 2026 (depending on the final conference schedule)

Submission

Paper Submission Procedure

Paper submissions will follow the FG 2026 formatting and length requirements and will use the same submission system as FG 2026.

FME workshop papers will follow the FG 2026 long paper format of 8 pages plus references. Submissions should present substantive new research techniques, findings, or applications. Accepted workshop papers will be included in the FG 2026 workshop proceedings, subject to the conference publication policy.

Paper Review Procedure

We will adopt a double-blind review process for regular workshop papers:

  • Each submission will receive at least two reviews from members of the program committee or external experts.
  • Reviewers will be selected based on their expertise in micro-expression (ME) analysis, facial analysis, affective computing, multimodal learning, and related areas.
  • Acceptance decisions will be based on originality, technical quality, clarity, and relevance to micro-expression analysis and subtle emotion understanding.