News

To overcome these limitations, we introduce AstroMAE, an innovative approach that pretrains a vision transformer encoder using a masked autoencoder method on ...; the pretrained encoder is subsequently fine-tuned within a ...
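As a rough illustration of the pretrain-then-fine-tune pipeline described above, here is a minimal PyTorch sketch of MAE-style pretraining. The patch size, model dimensions, the `TinyMAE`/`patchify` names, and the use of `nn.TransformerEncoder` as a stand-in ViT encoder are assumptions for illustration, not AstroMAE's actual configuration.

```python
import torch
import torch.nn as nn

PATCH, DIM, IMG = 16, 128, 64                 # assumed sizes, not AstroMAE's
N_PATCHES = (IMG // PATCH) ** 2

def patchify(imgs):
    """(B, C, H, W) -> (B, N, C*PATCH*PATCH) non-overlapping patches."""
    B, C, H, W = imgs.shape
    x = imgs.reshape(B, C, H // PATCH, PATCH, W // PATCH, PATCH)
    return x.permute(0, 2, 4, 1, 3, 5).reshape(B, N_PATCHES, C * PATCH * PATCH)

class TinyMAE(nn.Module):
    """MAE-style pretraining: encode visible patches, reconstruct the masked ones."""
    def __init__(self, in_chans=3, mask_ratio=0.75):
        super().__init__()
        self.mask_ratio = mask_ratio
        patch_dim = in_chans * PATCH * PATCH
        self.embed = nn.Linear(patch_dim, DIM)
        self.pos = nn.Parameter(torch.zeros(1, N_PATCHES, DIM))
        enc_layer = nn.TransformerEncoderLayer(DIM, nhead=4, dim_feedforward=256,
                                               batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=4)
        dec_layer = nn.TransformerEncoderLayer(DIM, nhead=4, dim_feedforward=256,
                                               batch_first=True)
        self.decoder = nn.TransformerEncoder(dec_layer, num_layers=2)
        self.mask_token = nn.Parameter(torch.zeros(1, 1, DIM))
        self.dec_head = nn.Linear(DIM, patch_dim)

    def forward(self, imgs):
        patches = patchify(imgs)                          # (B, N, patch_dim)
        tokens = self.embed(patches) + self.pos           # (B, N, DIM)
        B, N, D = tokens.shape
        n_keep = int(N * (1 - self.mask_ratio))

        # randomly shuffle patches and keep only the visible subset for the encoder
        ids_shuffle = torch.rand(B, N, device=imgs.device).argsort(dim=1)
        ids_restore = ids_shuffle.argsort(dim=1)
        keep = ids_shuffle[:, :n_keep]
        visible = torch.gather(tokens, 1, keep.unsqueeze(-1).expand(-1, -1, D))
        latent = self.encoder(visible)

        # append mask tokens, restore original order, decode back to pixel patches
        mask_tok = self.mask_token.expand(B, N - n_keep, D)
        full = torch.cat([latent, mask_tok], dim=1)
        full = torch.gather(full, 1, ids_restore.unsqueeze(-1).expand(-1, -1, D))
        pred = self.dec_head(self.decoder(full + self.pos))

        # reconstruction loss is computed on the masked patches only
        masked = torch.ones(B, N, device=imgs.device)
        masked[:, :n_keep] = 0
        masked = torch.gather(masked, 1, ids_restore)
        return (((pred - patches) ** 2).mean(dim=-1) * masked).sum() / masked.sum()

# one pretraining step on a dummy batch
loss = TinyMAE()(torch.randn(2, 3, IMG, IMG))
loss.backward()
```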
We employ self-supervised learning and masked image modeling techniques to tackle this task. Recognizing the challenges and costs associated with acquiring hyperspectral data, we aim to develop a ...
This is a slim implementation of the paper "DEMAE: Diffusion Enhanced Masked Autoencoder for Hyperspectral Image Classification With Few Labeled Samples", which has been published in IEEE TGRS! And the ...
@inproceedings{yan2023skeletonmae, title={SkeletonMAE: Graph-based Masked Autoencoder for Skeleton Sequence Pre-training}, author={Yan, Hong and Liu, Yang and Wei, Yushen and Li, Guanbin and Lin, ...
The representation ability of the model is strongly correlated with the number of such high-quality labels. Recently, the masked autoencoder (MAE) has been shown to effectively pre-train Vision ...
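Continuing the hypothetical `TinyMAE` sketch above, one common way such pretraining pays off when labels are scarce is to discard the reconstruction decoder, keep the pretrained encoder, and fine-tune only a small task head on the few labeled samples. The 10-class head, mean-pooling, and optimizer settings below are illustrative assumptions, and the snippet reuses `TinyMAE`, `patchify`, `DIM`, and `IMG` from the earlier sketch.

```python
import torch
import torch.nn as nn

class FineTuneClassifier(nn.Module):
    """Reuse the MAE-pretrained encoder; only the small task head starts from scratch."""
    def __init__(self, pretrained, num_classes=10):
        super().__init__()
        self.embed, self.pos = pretrained.embed, pretrained.pos
        self.encoder = pretrained.encoder                 # weights come from pretraining
        self.head = nn.Linear(DIM, num_classes)           # new, randomly initialized head

    def forward(self, imgs):
        tokens = self.embed(patchify(imgs)) + self.pos    # no masking at fine-tune time
        feats = self.encoder(tokens).mean(dim=1)          # mean-pool patch features
        return self.head(feats)

mae = TinyMAE()                                           # in practice: load pretrained weights
clf = FineTuneClassifier(mae)
opt = torch.optim.AdamW(clf.parameters(), lr=1e-4)
x, y = torch.randn(8, 3, IMG, IMG), torch.randint(0, 10, (8,))
nn.functional.cross_entropy(clf(x), y).backward()
opt.step()
```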