Our paper "Histoires Morales: A French Dataset for Assessing Moral Alignment" is accepted to NAACL 2025
Dataset: https://hf.co/datasets/LabHC/histoires_morales
Paper: https://arxiv.org/abs/2501.17117
Code: https://github.com/upunaprosk/histoires-morales
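
To get started with the data, here is a minimal sketch using the Hugging Face datasets library (the split name is an assumption; see the dataset card for the exact schema):

    from datasets import load_dataset

    # Load Histoires Morales from the Hugging Face Hub.
    dataset = load_dataset("LabHC/histoires_morales")
    print(dataset)              # inspect the available splits and columns
    print(dataset["train"][0])  # look at one record (assumes a "train" split)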

Aligning language models with human values is crucial, especially as they become more integrated into everyday life. While models are often adapted to user preferences, it is equally important to ensure they align with moral norms and behaviors in real-world social situations. Despite significant progress in languages like English and Chinese, French has received little attention in this area, leaving a gap in understanding how LLMs handle moral reasoning in this language.
To address this gap, we introduce Histoires Morales, a French dataset derived from Moral Stories: it was created through translation and then refined with the help of native speakers to ensure grammatical accuracy and adaptation to the French cultural context. We also rely on annotations of the moral values within the dataset to ensure that they align with French norms. Histoires Morales covers a wide range of social situations, including differences in tipping practices, expressions of honesty in relationships, and responsibilities toward animals.
To foster future research, we conduct preliminary experiments on the alignment of multilingual models with French and English data, as well as on the robustness of this alignment. We find that while LLMs are generally aligned with human moral norms by default, they can be easily influenced by user-preference optimization with both moral and immoral data.
Contributions
(i) We introduce Histoires Morales, the first dataset of narratives describing moral behavior in French, which can be used alongside the parallel English data for comparative analysis. We develop a translation pipeline that combines error-explanation prompts with manual annotation and human feedback to achieve high-quality translations.
(ii) We compare LLMs' moral alignment with human norms using sentence likelihoods and the classification of moral actions through declarative prompts (see the likelihood sketch after this list).
(iii) We investigate the robustness of LLMs' multilingual moral alignment by testing their susceptibility to shifts favoring either moral or immoral actions using Direct Preference Optimization (see the DPO sketch at the end of this post).
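
As an illustration of (ii), here is a minimal, hypothetical sketch of the likelihood probe: score a moral and an immoral action by the average per-token log-likelihood a causal LM assigns to each. The model name and example sentences are placeholders, not the paper's exact setup:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Any causal LM can be probed this way; the model name is a placeholder.
    model_name = "mistralai/Mistral-7B-v0.1"
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    model.eval()

    def avg_log_likelihood(text: str) -> float:
        """Average per-token log-likelihood the model assigns to `text`."""
        inputs = tokenizer(text, return_tensors="pt")
        with torch.no_grad():
            # With labels=input_ids, the returned loss is the mean
            # negative log-likelihood over the sequence.
            outputs = model(**inputs, labels=inputs["input_ids"])
        return -outputs.loss.item()

    moral = "Julie rend le portefeuille trouvé à son propriétaire."
    immoral = "Julie garde le portefeuille trouvé pour elle."
    # The model prefers the moral action if it scores higher.
    print(avg_log_likelihood(moral) > avg_log_likelihood(immoral))

Aggregating such pairwise comparisons over the whole dataset, in each language, gives a simple measure of alignment.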
Our initial results indicate that LLMs align more closely with human moral norms in English than in French, and that this alignment is not very robust. These findings highlight the need for further research on multilingual moral alignment.
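
For reference, a hypothetical sketch of the susceptibility probe in (iii) using the trl library: training on preference pairs pushes the model toward the chosen actions, and swapping the chosen and rejected fields flips the shift toward immoral behavior. The toy pair, model choice, and hyperparameters are illustrative, not the paper's configuration:

    from datasets import Dataset
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from trl import DPOConfig, DPOTrainer  # trl >= 0.12 assumed

    model_name = "mistralai/Mistral-7B-v0.1"  # placeholder choice
    model = AutoModelForCausalLM.from_pretrained(model_name)
    tokenizer = AutoTokenizer.from_pretrained(model_name)

    # Toy preference pair; swap "chosen" and "rejected" to steer the model
    # toward immoral actions instead.
    pairs = Dataset.from_dict({
        "prompt":   ["Julie trouve un portefeuille dans la rue."],
        "chosen":   [" Elle le rend à son propriétaire."],
        "rejected": [" Elle le garde pour elle."],
    })

    config = DPOConfig(output_dir="dpo-moral-probe", beta=0.1,
                       num_train_epochs=1)
    trainer = DPOTrainer(model=model, args=config, train_dataset=pairs,
                         processing_class=tokenizer)  # tokenizer= in older trl
    trainer.train()

Re-running the likelihood probe above before and after such training quantifies how easily the alignment shifts.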