Paper Reading – Token Merging for Fast Stable Diffusion (the ToMe technique for fast diffusion models)

Abstract
The landscape of image generation has been forever changed by open vocabulary diffusion models.
However, at their core these models use transformers, which makes generation slow. Better implementations to increase the throughput of these transformers have emerged, but they still evaluate the entire model.
In this paper, we instead speed up diffusion models by exploiting natural redundancy in generated images by merging redundant tokens.
After making some diffusion-specific improvements to Token Merging (ToMe), our ToMe for Stable Diffusion can reduce the number of tokens in an existing Stable Diffusion model by up to 60% while still producing high quality images without any extra training.
In the process, we speed up image generation by up to 2× and reduce memory consumption by up to 5.6×.
Furthermore, this speed-up stacks with efficient implementations such as xFormers, minimally impacting quality while being up to 5.4× faster for large images (2048×2048).
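
To make the idea concrete, below is a minimal usage sketch of how the technique can be applied to an existing pipeline. It assumes the open-source `tomesd` package released alongside the paper and a recent Hugging Face `diffusers` version; the exact function names and arguments may differ between releases, so treat it as illustrative rather than definitive.

```python
# Minimal usage sketch: assumes the third-party `tomesd` package and a recent
# Hugging Face `diffusers` release; argument names may vary by version.
import torch
import tomesd
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Optional: the abstract notes the speed-up stacks with efficient attention
# implementations such as xFormers (requires xformers to be installed).
pipe.enable_xformers_memory_efficient_attention()

# Patch the U-Net so a fraction of redundant spatial tokens is merged before
# each attention block and unmerged afterwards; no extra training is needed.
tomesd.apply_patch(pipe, ratio=0.5)

image = pipe("a photograph of an astronaut riding a horse").images[0]
image.save("astronaut.png")
```

Here `ratio` controls the fraction of tokens merged in each block; larger values merge more tokens and trade some fine detail for speed, in line with the up-to-60% token reduction reported in the abstract.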

