
feat: implement rae autoencoder. #13046

Open
Ando233 wants to merge 45 commits into huggingface:main from Ando233:rae

Conversation

@Ando233

Ando233 commented Jan 28, 2026

What does this PR do?

This PR adds a new representation autoencoder implementation, AutoencoderRAE, to diffusers.

  • Implements diffusers.models.autoencoders.autoencoder_rae.AutoencoderRAE with a frozen pretrained vision encoder (DINOv2 / SigLIP2 / ViT-MAE) and a ViT-MAE style decoder.
  • The decoder implementation is aligned with the RAE-main GeneralDecoder parameter structure, so existing trained decoder checkpoints (e.g. model.pt) load without key mismatches when encoder/decoder settings are consistent.
  • Adds unit/integration tests under diffusers/tests/models/autoencoders/test_models_autoencoder_rae.py.
  • Registers exports so users can import directly via from diffusers import AutoencoderRAE.

Fixes #13000


Usage

import torch

from diffusers import AutoencoderRAE

# Build the autoencoder: a frozen pretrained encoder (here DINOv2) plus a ViT-MAE style
# decoder. encoder_path, image_size, patch_size, num_patches, device, args.decoder_ckpt
# and x are placeholders supplied by the caller.
ae = AutoencoderRAE(
    encoder_cls="dinov2",
    encoder_name_or_path=encoder_path,
    image_size=image_size,
    encoder_input_size=image_size,
    patch_size=patch_size,
    num_patches=num_patches,
    decoder_hidden_size=1152,
    decoder_num_hidden_layers=28,
    decoder_num_attention_heads=16,
    decoder_intermediate_size=4096,
).to(device)
ae.eval()

# Load a decoder checkpoint trained with the original RAE code (e.g. model.pt).
state = torch.load(args.decoder_ckpt, map_location="cpu")
ae.decoder.load_state_dict(state, strict=False)

# Reconstruct a batch of images x.
with torch.no_grad():
    recon = ae(x).sample

Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.

sayakpaul requested a review from kashif January 30, 2026 11:31
@sayakpaul
Member

@bytetriper if you could take a look?

@kashif
Contributor

kashif commented Jan 30, 2026

nice work @Ando233, checking

@kashif
Contributor

kashif commented Jan 30, 2026

Off the bat:

  • let's have a nice convention for the output datatype classes, have a look at the other autoencoders for the convention in diffusers
  • some of the tests might need to be marked as slow and some paths are hard-coded

Let's sort out these things and then re-look.

@bytetriper

Agree with @kashif. Also, if possible, we can bake all the params into the config so we can enable .from_pretrained(), which is more elegant and aligns with diffusers usage. I can help convert our released ckpt to the Hugging Face format afterwards.
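
For illustration, a minimal sketch of what config-driven loading could look like once all parameters live in the config; the repo id is a placeholder, not a released checkpoint:

import torch
from diffusers import AutoencoderRAE

# Hypothetical repo id for a converted RAE checkpoint on the Hub.
ae = AutoencoderRAE.from_pretrained("your-org/rae-dinov2-decoder").to("cuda")
ae.eval()

# Dummy image batch; real inputs should be preprocessed for the chosen encoder.
x = torch.randn(1, 3, 256, 256, device="cuda")
with torch.no_grad():
    recon = ae(x).sample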

@sayakpaul
Member

@Ando233 we're happy to provide assistance if needed.

@kashif
Contributor

kashif commented Feb 15, 2026

@Ando233 the one remaining thing is the use of use_encoder_loss, and perhaps an example real-world training script.
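
For context, a rough sketch of the kind of decoder-training loop such a script might cover; the flag's exact behavior and the output attribute names (.latents, .sample) are assumptions for illustration, not the PR's actual training code:

import torch
import torch.nn.functional as F

# Hypothetical setup: `ae` is an AutoencoderRAE with a frozen encoder and a trainable
# decoder, and `dataloader` yields preprocessed image batches.
optimizer = torch.optim.AdamW(ae.decoder.parameters(), lr=1e-4)
use_encoder_loss = True  # the flag discussed above

for images in dataloader:
    with torch.no_grad():
        latents = ae.encode(images).latents   # frozen encoder features
    recon = ae.decode(latents).sample         # trainable ViT-MAE style decoder
    loss = F.mse_loss(recon, images)          # pixel reconstruction loss
    if use_encoder_loss:
        # One plausible reading of use_encoder_loss: re-encode the reconstruction
        # and match the original encoder features.
        loss = loss + F.mse_loss(ae.encode(recon).latents, latents)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()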

@kashif
Contributor

kashif commented Feb 15, 2026

@bytetriper could you kindly try to run the conversion scripts and upload the diffusers-style weights to the Hugging Face Hub for the checkpoints you have?

@kashif
Contributor

kashif commented Feb 23, 2026

@bytetriper I sent you some fixes to the weights, if you can kindly merge.

@bytetriper

@kashif Merged!

kashif requested a review from sayakpaul February 26, 2026 12:22
@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

Member

sayakpaul left a comment


Left some comments. Let me know if this makes sense. @bytetriper it would be great if you could also test the diffusers counterparts of RAE and let us know your thoughts.

specific language governing permissions and limitations under the License.
-->

# AutoencoderRAE
Member


@stevhliu could you check out the docs?

sayakpaul requested a review from stevhliu February 28, 2026 16:49
Member

sayakpaul left a comment


Left a major comment regarding the presence of the encoder-specific classes now. LMK your thoughts.

Comment on lines +66 to +67
self.model.layernorm.weight = None
self.model.layernorm.bias = None
Member


We're already stripping the layernorms in the conversion. Seems like it's not needed anymore?
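
For reference, a minimal sketch of what dropping these parameters at conversion time could look like; the key prefixes and file name are illustrative assumptions, not the actual conversion script:

import torch

# Hypothetical conversion step: drop the encoder's final layernorm parameters so the
# converted checkpoint no longer carries them (matching the weight = None trick above).
original_state_dict = torch.load("encoder_checkpoint.pt", map_location="cpu")
converted_state_dict = {
    k: v
    for k, v in original_state_dict.items()
    if not k.startswith(("layernorm.", "vision_model.post_layernorm."))
}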

Comment on lines +102 to +103
self.model.vision_model.post_layernorm.weight = None
self.model.vision_model.post_layernorm.bias = None
Member


Same for these?

logger = logging.get_logger(__name__)


class Dinov2Encoder(nn.Module):
Member


Now, I am a bit confused.

The layernorm-related modifications seem to be the only stuff for which we require these separate encoder classes.

Now that we're doing the layernorm-related modifications in the conversion script, do we still need them?

Contributor


So we still have that since the weights on the Hub have not been updated yet. Once they are updated, then yes, we could use the transformer models directly, but the different models have different forward logic, so we would still need a per-encoder forward dispatch.

Member


but the different models have different forward logic, so we would still need a per-encoder forward dispatch.

Yeah that is fine. We maintain standalone functions and dispatch accordingly. WDYT?
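
As a rough sketch of that suggestion, standalone per-encoder forward functions plus a small dispatch table might look like the following; the function and dict names are illustrative, not the PR's actual code:

# Hypothetical dispatch: one standalone forward function per encoder family,
# selected by the configured encoder_cls instead of per-encoder wrapper classes.
def dinov2_forward(model, pixel_values):
    # DINOv2 returns a CLS token first; keep only the patch tokens.
    return model(pixel_values).last_hidden_state[:, 1:]


def siglip2_forward(model, pixel_values):
    # The SigLIP2 vision tower returns patch tokens only.
    return model(pixel_values).last_hidden_state


ENCODER_FORWARDS = {"dinov2": dinov2_forward, "siglip2": siglip2_forward}


def encode_features(encoder_cls, model, pixel_values):
    return ENCODER_FORWARDS[encoder_cls](model, pixel_values)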

