Main

related bits: 0
processing priority: 2
site type: 0 (generic, awaiting analysis)
review version: 11
html import: 20 (imported)

Events

first seen date: 2024-10-08 14:16:07
expired found date: -
created at: 2024-10-08 14:16:07
updated at: 2024-10-08 14:16:08

Domain name statistics

length: 16
crc: 41196
tld: 756
nm parts: 0
nm random digits: 0
nm rare letters: 0
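
These name-level fields look like simple derived statistics of the domain string. A minimal sketch of how they could be computed, assuming the record describes multimae.epfl.ch (inferred from the epfl.ch parent and the page content; its length matches the stored 16). The checksum variant, the "nm parts" rule, and the "rare letter" set are not documented here, so the choices below are illustrative assumptions only:

```python
# Illustrative sketch only; the record's real "crc", "nm parts" and "rare letter"
# definitions are not documented, so the choices below are assumptions.
import re
import zlib

RARE_LETTERS = set("qxz")  # hypothetical definition of "rare" letters

def domain_name_stats(domain: str) -> dict:
    label = domain.split(".")[0]  # leftmost label, e.g. "multimae"
    return {
        "length": len(domain),                                  # 16 for "multimae.epfl.ch"
        "crc": zlib.crc32(domain.encode()) & 0xFFFF,            # 16-bit stand-in, not necessarily the tool's algorithm
        "nm parts": max(len(re.split(r"[-_]", label)) - 1, 0),  # hyphen/underscore parts beyond the first
        "nm random digits": sum(ch.isdigit() for ch in label),
        "nm rare letters": sum(ch in RARE_LETTERS for ch in label),
    }

print(domain_name_stats("multimae.epfl.ch"))
```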

Connections

is subdomain of id: 15148699 (epfl.ch)
previous id: 0
replaced with id: 0
related id: -
dns primary id: 0
dns alternative id: 0
lifecycle status: 0 (unclassified, or currently active)

Subdomains and pages

deleted subdomains: 0
page imported products: 0
page imported random: 0
page imported parking: 0

Error counters

count skipped due to recent timeouts on the same server IP: 0
count content received but rejected due to 11-799: 0
count dns errors: 0
count cert errors: 0
count timeouts: 0
count http 429: 0
count http 404: 0
count http 403: 0
count http 5xx: 0
next operation date: -
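
These counters partition fetch outcomes by cause. A minimal sketch of how one fetch attempt could be mapped onto them, assuming a requests-based fetcher (an assumption; the tool's actual HTTP stack, and the meaning of the "11-799" rejection codes, are not shown here):

```python
# Illustrative sketch: map one fetch attempt onto the counter fields above.
import requests

def record_fetch(url: str, counters: dict, timeout: float = 10.0) -> None:
    try:
        resp = requests.get(url, timeout=timeout)
    except requests.exceptions.SSLError:
        counters["count cert errors"] += 1
    except requests.exceptions.Timeout:
        counters["count timeouts"] += 1
    except requests.exceptions.ConnectionError:
        counters["count dns errors"] += 1   # DNS failures surface as connection errors
    else:
        if resp.status_code == 429:
            counters["count http 429"] += 1
        elif resp.status_code == 404:
            counters["count http 404"] += 1
        elif resp.status_code == 403:
            counters["count http 403"] += 1
        elif 500 <= resp.status_code < 600:
            counters["count http 5xx"] += 1
```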

Server

server bits:
server ip: -

Mainpage statistics

mp import status: 20
mp rejected date: -
mp saved date: -
mp size orig: 48103
mp size raw text: 12552
mp inner links count: 0
mp inner links status: 1 (no links)
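
The two size fields appear to measure the main page before and after markup stripping: 48103 bytes of original HTML versus 12552 characters of extracted text. A minimal sketch of that relationship, assuming a BeautifulSoup-based extractor (an assumption; the tool's real parser is not shown):

```python
# Illustrative sketch: original document size vs. size of the text left after
# stripping markup, mirroring "mp size orig" and "mp size raw text".
from bs4 import BeautifulSoup

def mainpage_sizes(html_bytes: bytes) -> dict:
    text = BeautifulSoup(html_bytes, "html.parser").get_text(" ", strip=True)
    return {
        "mp size orig": len(html_bytes),   # 48103 for this record
        "mp size raw text": len(text),     # 12552 for this record
    }
```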

Open Graph

title:
description: MultiMAE: Multi-modal Multi-task Masked Autoencoders
image:
site name:
author:
updated: 2026-03-02 16:04:42

raw text: MultiMAE | Multi-modal Multi-task Masked Autoencoders MultiMAE: Multi-modal Multi-task Masked Autoencoders Roman Bachmann*, David Mizrahi*, Andrei Atanov, Amir Zamir Swiss Federal Institute of Technology Lausanne (EPFL) *Equal contribution ECCV 2022 Paper Code Colab 🤗 Demo Poster Slides We introduce Multi-modal Multi-task Masked Autoencoders (MultiMAE), an efficient and effective pre-training strategy for Vision Transformers. Given a small random sample of visible patches from multiple modalities, the MultiMAE pre-training objective is to reconstruct the masked-out regions. Abstract We propose a pre-training strategy called Multi-modal Multi-task Masked Autoencoders (MultiMAE). It differs from standard Masked Autoencoding in two key aspects: It can optionally accept additional modalities of information in the input beside RGB i...
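
The Open Graph fields come from <meta property="og:..."> tags in the page head; in this record only the description carries a value. A minimal sketch of that extraction, assuming a BeautifulSoup parser (the tool's actual implementation is not shown):

```python
# Illustrative sketch: collect the Open Graph fields listed above from the page head.
from bs4 import BeautifulSoup

OG_FIELDS = ("title", "description", "image", "site_name")

def open_graph(html: str) -> dict:
    soup = BeautifulSoup(html, "html.parser")
    out = {}
    for field in OG_FIELDS:
        tag = soup.find("meta", property=f"og:{field}")
        out[field] = tag["content"].strip() if tag and tag.has_attr("content") else ""
    return out
```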

Text analysis

redirect type: 0 (-)
block type: 0 (no issues)
detected language: 1 (English)
category id: AI Applications (149)
index version: 1
spam phrases: 0

Text statistics

text nonlatin: 1
text cyrillic: 0
text characters: 8760
text words: 1565
text unique words: 510
text lines: 321
text sentences: 76
text paragraphs: 25
text words per sentence: 20
text matched phrases: 0
text matched dictionaries: 0
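
The derived "text words per sentence" value is consistent with the other counters: 1565 words over 76 sentences is about 20.6, stored as the integer 20. A minimal sketch of how such statistics could be recomputed from the raw text (the tool's actual tokenisation and sentence-splitting rules are not documented; the regular expressions here are illustrative assumptions):

```python
# Illustrative sketch: recompute the basic counters above from the page's raw text.
import re

def text_stats(raw_text: str) -> dict:
    words = re.findall(r"\w+", raw_text)
    sentences = [s for s in re.split(r"[.!?]+", raw_text) if s.strip()]
    lines = [ln for ln in raw_text.splitlines() if ln.strip()]
    return {
        "text characters": len(raw_text),
        "text words": len(words),
        "text unique words": len({w.lower() for w in words}),
        "text lines": len(lines),
        "text sentences": len(sentences),
        "text words per sentence": len(words) // max(len(sentences), 1),  # 1565 // 76 == 20
    }
```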

RSS

rss path:
rss status: 1 (priority 1 already searched, no matches found)
rss found date: -
rss size orig: 0
rss items: 0
rss spam phrases: 0
rss detected language: 0 (awaiting analysis)
inbefore feed id: -
inbefore status: 0 (new)

Sitemap

sitemap path:
sitemap status: 1 (priority 1 already searched, no matches found)
sitemap review version: 2
sitemap urls count: 0
sitemap urls adult: 0
sitemap filtered products: 0
sitemap filtered videos: 0
sitemap found date: -
sitemap process date: -
sitemap first import date: -
sitemap last import date: -
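
Both the RSS and sitemap sections report status 1, "priority 1 already searched, no matches found", which reads like a first-pass probe of a few conventional paths that came back empty. A minimal sketch of such a probe, assuming a requests-based fetcher, hypothetical priority-1 path lists, and the inferred domain multimae.epfl.ch (the tool's real path lists and priority levels are not shown):

```python
# Illustrative sketch: a first-pass ("priority 1") probe for feed and sitemap locations.
import requests

RSS_PATHS = ("/feed", "/rss.xml", "/atom.xml")            # hypothetical priority-1 list
SITEMAP_PATHS = ("/sitemap.xml", "/sitemap_index.xml")    # hypothetical priority-1 list

def probe(base_url: str, paths: tuple) -> str | None:
    """Return the first path that answers with content, or None ("no matches found")."""
    for path in paths:
        try:
            resp = requests.get(base_url.rstrip("/") + path, timeout=10)
        except requests.RequestException:
            continue
        if resp.ok and resp.content:
            return path
    return None

rss_path = probe("https://multimae.epfl.ch", RSS_PATHS)         # domain inferred from the record
sitemap_path = probe("https://multimae.epfl.ch", SITEMAP_PATHS)
```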