Identity-Preserving Image-to-Video Generation
via Reward-Guided Optimization


Liao Shen1,2*, Wentao Jiang1*, Yiran Zhu1, Tiezheng Ge1†, Zhiguo Cao2†, Bo Zheng1

1Taobao & Tmall Group of Alibaba    2Huazhong University of Science and Technology

Abstract


Recent advances in image-to-video (I2V) generation have achieved remarkable progress in synthesizing high-quality, temporally coherent videos from static images. Among I2V applications, human-centric video generation accounts for a large portion. However, existing I2V models struggle to maintain identity consistency between the input human image and the generated video, especially when the person in the video exhibits significant expression changes and movements. The problem is further exacerbated when the human face occupies only a small fraction of the image. Since humans are highly sensitive to identity variations, identity preservation poses a critical yet under-explored challenge in I2V generation. In this paper, we propose Identity-Preserving Reward-guided Optimization (IPRO), a novel video diffusion framework based on reinforcement learning that enhances identity preservation. Instead of introducing auxiliary modules or altering model architectures, our approach is a direct and effective tuning algorithm that optimizes diffusion models with a face identity scorer. To improve performance and accelerate convergence, our method backpropagates the reward signal through the last steps of the sampling chain, enabling richer gradient feedback. We also propose a novel facial scoring mechanism that treats the faces in ground-truth videos as a facial feature pool, providing multi-angle facial information to enhance generalization. A KL-divergence regularization is further incorporated to stabilize training and prevent overfitting to the reward signal.
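To make the facial scoring idea concrete, the sketch below builds a feature pool from the faces in a ground-truth video and scores generated frames by their best match against that pool. It is a minimal sketch under stated assumptions: the encoder (`face_encoder`), the face cropper (`detect_and_crop`), and the max-over-pool matching are illustrative choices, not the exact components used in IPRO.

```python
import torch
import torch.nn.functional as F

def build_face_feature_pool(gt_frames, face_encoder, detect_and_crop):
    """Embed every detected face in the ground-truth video into a feature pool.

    The pool covers multiple poses and angles of the same identity, so a
    generated face can be matched against whichever reference view it
    resembles most.
    """
    feats = []
    for frame in gt_frames:                       # iterable of HxWx3 frames
        crop = detect_and_crop(frame)             # aligned face crop, or None
        if crop is not None:
            feats.append(face_encoder(crop.unsqueeze(0)))   # (1, D) embedding
    return F.normalize(torch.cat(feats, dim=0), dim=-1)     # (N, D) pool

def face_identity_reward(gen_frames, pool, face_encoder, detect_and_crop):
    """Score generated frames by their best cosine similarity to the pool."""
    scores = []
    for frame in gen_frames:
        crop = detect_and_crop(frame)
        if crop is None:
            continue                               # no face in this frame
        feat = F.normalize(face_encoder(crop.unsqueeze(0)), dim=-1)  # (1, D)
        sim = feat @ pool.T                        # similarity to every pooled face
        scores.append(sim.max())                   # match the closest reference view
    return torch.stack(scores).mean() if scores else torch.zeros(())
```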


Method


Overview of our method. (A) IPRO predicts \(\bar{x}_0\) from the noise input \(x_T\); the prediction is decoded into pixel space by a frozen VAE decoder and scored by a face reward model with our facial scoring mechanism (C). This reward signal is used to update the trainable parts of the model, steering the generation process toward videos with consistent identity. (B) We further incorporate a KL-divergence regularization to alleviate reward hacking.
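As a concrete illustration of (A) and (B), the sketch below runs most of the sampling chain without gradients, keeps gradients only for the last K denoising steps, decodes the predicted clean latents with the frozen VAE, scores the result with a face-identity reward (e.g., the pooled scorer sketched above, passed in as `face_reward_fn`), and adds a KL-style penalty against a frozen reference model. The scheduler interface, the helper names, and the MSE surrogate for the KL term are assumptions for illustration, not the paper's exact implementation.

```python
import torch

def ipro_training_step(model, model_ref, vae, scheduler, x_T, face_reward_fn,
                       optimizer, K=2, kl_weight=0.1):
    """One reward-guided optimization step (illustrative sketch).

    Only the last K denoising steps keep gradients, so the face reward can be
    backpropagated through the end of the sampling chain without the memory
    cost of differentiating the whole trajectory.
    """
    x_t = x_T
    timesteps = scheduler.timesteps  # assumes scheduler.set_timesteps() was called
    # 1) Run most of the sampling chain without gradients.
    with torch.no_grad():
        for t in timesteps[:-K]:
            eps = model(x_t, t)                              # predicted noise
            x_t = scheduler.step(eps, t, x_t).prev_sample
    # 2) Keep gradients for the last K steps; also query a frozen reference
    #    model for the KL-style regularizer in (B).
    kl_term = x_t.new_zeros(())
    for t in timesteps[-K:]:
        eps = model(x_t, t)
        with torch.no_grad():
            eps_ref = model_ref(x_t, t)
        # MSE between trainable and reference noise predictions, a common
        # Gaussian-policy surrogate for the KL penalty.
        kl_term = kl_term + ((eps - eps_ref) ** 2).mean()
        x_t = scheduler.step(eps, t, x_t).prev_sample
    # 3) Decode the predicted clean latents with the frozen VAE and score the
    #    frames; decoding and scoring are assumed differentiable so gradients
    #    reach the generator.
    frames = vae.decode(x_t)
    reward = face_reward_fn(frames)
    # 4) Maximize the identity reward while penalizing drift from the reference.
    loss = -reward + kl_weight * kl_term
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.detach()
```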


Experiments


Quantitative comparisons. Our method achieves higher face similarity than its baseline without compromising performance on the other dimensions, including Subject Consistency (SC), Background Consistency (BC), Aesthetic Quality (AQ), Imaging Quality (IQ), Temporal Flickering (TF), Dynamic Degree (DD), and Motion Smoothness (MS).

Qualitative comparisons. Our method achieves superior identity preservation compared to its baseline.