GaitCrafter: Diffusion Model for Biometric Preserving Gait Synthesis

Source: arXiv AI Papers

Gait recognition is a biometric technique that identifies individuals by their walking patterns, but its effectiveness is limited by the scarcity of large labeled datasets and the difficulty of collecting diverse gait samples while preserving privacy. GaitCrafter addresses this with a video diffusion model trained solely on gait silhouette data to generate temporally consistent, identity-preserving gait sequences. Unlike previous methods that relied on simulated environments or other generative models, it offers more realistic and controllable gait synthesis. The model can be conditioned on covariates such as clothing, carried objects, and viewing angle, enabling the generation of diverse gait sequences under varied conditions. Incorporating these synthetic samples into gait recognition pipelines improves performance, particularly in challenging scenarios where real data is scarce or highly variable.
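To make the conditioning idea concrete, here is a minimal sketch of DDPM-style ancestral sampling for a silhouette clip, conditioned on an identity embedding plus discrete covariates. This is not the paper's architecture: the module layout, embedding sizes, and names (`CondDenoiser`, `id_vec`, `view`, `cloth`) are illustrative assumptions.

```python
# Hedged sketch of conditional diffusion sampling for gait silhouettes.
# All module/parameter names are hypothetical, not GaitCrafter's API.
import torch
import torch.nn as nn

class CondDenoiser(nn.Module):
    """Toy denoiser: predicts the noise in a silhouette clip, conditioned
    on an identity embedding and covariates (viewing angle, clothing)."""
    def __init__(self, id_dim=64, n_views=11, n_clothing=3, hidden=64, max_steps=1000):
        super().__init__()
        self.view_emb = nn.Embedding(n_views, id_dim)
        self.cloth_emb = nn.Embedding(n_clothing, id_dim)
        self.time_emb = nn.Embedding(max_steps, id_dim)
        self.cond_proj = nn.Linear(id_dim, 1)
        # 3D convs over (T, H, W) keep the frames temporally coupled.
        self.net = nn.Sequential(
            nn.Conv3d(2, hidden, 3, padding=1), nn.SiLU(),
            nn.Conv3d(hidden, 1, 3, padding=1),
        )

    def forward(self, x, t, id_vec, view, cloth):
        # x: (B, 1, T, H, W) noisy clip; t: (B,) diffusion timestep
        cond = id_vec + self.view_emb(view) + self.cloth_emb(cloth) + self.time_emb(t)
        bias = self.cond_proj(cond)[:, :, None, None, None]          # (B, 1, 1, 1, 1)
        cond_map = bias.expand(-1, 1, *x.shape[2:])                  # broadcast to clip
        return self.net(torch.cat([x, cond_map], dim=1))             # predicted noise

@torch.no_grad()
def sample(model, id_vec, view, cloth, T=30, H=64, W=44, steps=1000):
    """Standard DDPM ancestral sampling over one (1, 1, T, H, W) clip."""
    betas = torch.linspace(1e-4, 0.02, steps)
    alphas = 1.0 - betas
    abar = torch.cumprod(alphas, dim=0)
    x = torch.randn(1, 1, T, H, W)
    for t in reversed(range(steps)):
        eps = model(x, torch.tensor([t]), id_vec, view, cloth)
        mean = (x - betas[t] / (1 - abar[t]).sqrt() * eps) / alphas[t].sqrt()
        x = mean + betas[t].sqrt() * torch.randn_like(x) if t > 0 else mean
    return x.sigmoid()  # squash toward binary silhouette values

model = CondDenoiser()
z = torch.randn(1, 64)  # stand-in identity embedding
clip = sample(model, z, view=torch.tensor([5]), cloth=torch.tensor([2]), steps=50)
```

Swapping the `view` or `cloth` index while keeping `id_vec` fixed is what would let one identity be rendered under different covariates, which is the augmentation behavior the summary describes.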

Furthermore, GaitCrafter introduces a mechanism to create novel synthetic identities by interpolating identity embeddings, producing unique and consistent gait patterns not present in the original dataset. This capability supports training gait recognition models with augmented data while maintaining the privacy of real individuals, addressing ethical and legal concerns. The framework’s controllability and high-quality output mark a significant advancement in biometric data generation, potentially facilitating more robust and privacy-aware gait recognition systems. Future applications could include expanding dataset diversity, improving recognition accuracy, and enabling privacy-preserving biometric research and deployment.
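The interpolation mechanism can be sketched in a few lines. Spherical interpolation between two real identity embeddings is one plausible choice (the paper's exact scheme may differ), and every name below is a placeholder.

```python
# Hedged sketch: a novel synthetic identity from two real embeddings.
# Slerp keeps the result on the same norm shell; plain lerp is the
# simplest alternative. Not GaitCrafter's exact method.
import torch

def slerp(a: torch.Tensor, b: torch.Tensor, alpha: float) -> torch.Tensor:
    """Spherical interpolation between two embedding vectors."""
    a_n, b_n = a / a.norm(), b / b.norm()
    omega = torch.acos(a_n.dot(b_n).clamp(-1 + 1e-7, 1 - 1e-7))
    return (torch.sin((1 - alpha) * omega) * a +
            torch.sin(alpha * omega) * b) / torch.sin(omega)

id_a = torch.randn(64)           # embedding of real identity A (placeholder)
id_b = torch.randn(64)           # embedding of real identity B (placeholder)
id_new = slerp(id_a, id_b, 0.5)  # synthetic identity between A and B
# id_new can then condition the diffusion sampler in place of a real
# embedding, yielding a consistent gait that matches no enrolled person.
```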

👉 Read the original: arXiv AI Papers