The Aging Multiverse: Generating Condition-Aware Facial Aging Tree via Training-Free Diffusion

¹UNC Chapel Hill  ²University of Maryland  ³Lenovo
* Equal contribution

Motivation: What if aging wasn't a straight line—but a tree?

Aging is not a single path but a multiverse of possibilities shaped by our choices and environments. From skincare to stress, diet to sunlight, every external factor nudges us down a different trajectory. The Aging Multiverse brings this idea to life by generating branching visualizations—aging trees—that show how you might age under different lifestyle conditions. Each branch represents a plausible future, empowering users to explore the visual consequences of health habits, environmental exposure, and more.

Abstract

We introduce the Aging Multiverse, a framework for generating multiple plausible facial aging trajectories from a single image, each conditioned on external factors such as environment, health, and lifestyle. Unlike prior methods that model aging as a single deterministic path, our approach creates an aging tree that visualizes diverse futures. To enable this, we propose a training-free diffusion-based method that balances identity preservation, age accuracy, and condition control. Our key contributions include attention mixing to modulate editing strength and a Simulated Aging Regularization strategy to stabilize edits. Extensive experiments and user studies demonstrate state-of-the-art performance across identity preservation, aging realism, and conditional alignment, outperforming existing editing and age-progression models, which often fail to account for one or more of the editing criteria. By transforming aging into a multi-dimensional, controllable, and interpretable process, our approach opens up new creative and practical avenues in digital storytelling, health education, and personalized visualization.

Video

Method

Overview of our training-free conditional age-progression framework. Given an input image and a textual description of external aging factors, our method leverages flow-matching techniques to perform the edit. Our approach balances three competing objectives—identity preservation, age accuracy, and condition alignment—enabling conditional age transformation without retraining. Our key innovations are: (i) attention mixing of Key and Value tensors between the inversion and editing passes, and (ii) attention regularization with simulated unconditional aging, to achieve the best inversion–editability trade-off.
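To make the attention-mixing idea concrete, here is a minimal sketch of how Key/Value tensors from the inversion pass could be blended into the editing pass. The page does not specify the exact blending rule, so this assumes simple linear interpolation with a hypothetical mixing weight `alpha`; the function names and shapes are illustrative, not the paper's implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def mixed_attention(q_edit, k_inv, v_inv, k_edit, v_edit, alpha):
    """Attention for the editing pass, with Key/Value tensors blended
    from the inversion pass (hypothetical linear mixing, weight alpha).

    alpha = 1.0 keeps the inversion K/V (strong identity preservation);
    alpha = 0.0 keeps the editing K/V (strong condition control).
    """
    k = alpha * k_inv + (1.0 - alpha) * k_edit
    v = alpha * v_inv + (1.0 - alpha) * v_edit
    scores = q_edit @ k.T / np.sqrt(q_edit.shape[-1])
    return softmax(scores) @ v
```

In this reading, `alpha` modulates editing strength: values near 1 anchor the output to the inverted source (identity preservation), while values near 0 let the condition-driven edit dominate.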

Results

Celebrity

[Interactive comparison: a source image at its original age is edited to various target ages. Methods shown: Ours, RF-Solver-Edit, FlowEdit, FireFlow, and FADING (aging effect only). Different examples can be selected via the thumbnails.]

Non-celebrity

[Interactive comparison: a source image at its original age is edited to various target ages. Methods shown: Ours, RF-Solver-Edit, FlowEdit, FireFlow, and FADING (aging effect only). Different examples can be selected via the thumbnails.]

Acknowledgements

This research was supported in part by Lenovo Research (Morrisville, NC). We gratefully acknowledge the invaluable support and assistance of the members of the Mobile Technology Innovations Lab. This work was also supported in part by the National Science Foundation under Grant No. 2213335.

BibTeX

@misc{gong2025agingmultiversegeneratingconditionaware,
      title={The Aging Multiverse: Generating Condition-Aware Facial Aging Tree via Training-Free Diffusion}, 
      author={Bang Gong and Luchao Qi and Jiaye Wu and Zhicheng Fu and Chunbo Song and David W. Jacobs and John Nicholson and Roni Sengupta},
      year={2025},
      eprint={2506.21008},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2506.21008}, 
    }