Main

related bits: 0
processing priority: 3
site type: 0 (generic, awaiting analysis)
review version: 11
html import: 20 (imported)

Events

first seen date: 2025-06-02 17:49:15
expired found date: -
created at: 2025-06-02 17:49:14
updated at: 2025-06-02 17:49:19

Domain name statistics

length: 18
crc: 63004
tld: 86
nm parts: 0
nm random digits: 0
nm rare letters: 0
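
The fields above look like simple lexical features of the hostname. Below is a minimal sketch of how such counters might be computed, assuming a 16-bit checksum for `crc` and digit/rare-letter counts over the leftmost label; the exact definitions are assumptions, and the hostname `seal-lvu.github.io` is only an illustrative guess consistent with the recorded length of 18 and the github.io parent.

```python
# Hypothetical reconstruction of the "Domain name statistics" counters.
# The CRC variant and the "rare letters" set are assumptions, not confirmed
# by this record.
import binascii

RARE_LETTERS = set("qxzj")  # assumed set of low-frequency English letters

def domain_name_stats(hostname: str) -> dict:
    name = hostname.lower().rstrip(".")
    label = name.split(".")[0]  # leftmost label, e.g. "seal-lvu"
    return {
        "length": len(name),                            # 18 for this guess
        "crc": binascii.crc32(name.encode()) & 0xFFFF,  # assumed 16-bit hash
        "nm_random_digits": sum(c.isdigit() for c in label),
        "nm_rare_letters": sum(c in RARE_LETTERS for c in label),
    }

print(domain_name_stats("seal-lvu.github.io"))
```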

Connections

is subdomain of id: 87719371 (github.io)
previous id: 0
replaced with id: 0
related id: -
dns primary id: 0
dns alternative id: 0
lifecycle status: 0 (unclassified, or currently active)
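
The parent link ("is subdomain of id") suggests the record is attached to its closest known parent domain. A minimal sketch of that lookup follows, assuming a hostname-to-id table; the id for github.io is taken from the record, everything else is illustrative.

```python
# Sketch of resolving an "is subdomain of id" link by walking the hostname's
# suffixes from most to least specific. Only github.io -> 87719371 appears in
# the record; the rest of this table is illustrative.
KNOWN_PARENTS = {"github.io": 87719371}

def parent_domain_id(hostname: str) -> int:
    """Return the id of the closest known parent domain, or 0 if none."""
    labels = hostname.lower().rstrip(".").split(".")
    for i in range(1, len(labels)):
        candidate = ".".join(labels[i:])
        if candidate in KNOWN_PARENTS:
            return KNOWN_PARENTS[candidate]
    return 0

print(parent_domain_id("seal-lvu.github.io"))  # -> 87719371
```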

Subdomains and pages

deleted subdomains: 0
page imported products: 0
page imported random: 0
page imported parking: 0

Error counters

count skipped due to recent timeouts on the same server IP: 0
count content received but rejected due to 11-799: 0
count dns errors: 0
count cert errors: 0
count timeouts: 0
count http 429: 0
count http 404: 0
count http 403: 0
count http 5xx: 0
next operation date: -
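
A hedged sketch of how fetch outcomes could be routed into the counters above; the bucket names mirror the fields, but the classification rules (including the "11-799" rejection condition) are not described in the record and are assumed here.

```python
# Illustrative mapping of crawl outcomes onto the error counters; the rules
# are assumptions, and the "rejected due to 11-799" bucket is omitted because
# its meaning is not stated in the record.
from collections import Counter
from typing import Optional

def classify(status: Optional[int], error: Optional[str]) -> str:
    if error == "dns":
        return "count dns errors"
    if error == "cert":
        return "count cert errors"
    if error == "timeout":
        return "count timeouts"
    if status == 429:
        return "count http 429"
    if status == 404:
        return "count http 404"
    if status == 403:
        return "count http 403"
    if status is not None and 500 <= status <= 599:
        return "count http 5xx"
    return "ok"

counters = Counter()
for status, error in [(200, None), (404, None), (None, "timeout")]:
    counters[classify(status, error)] += 1
print(dict(counters))  # {'ok': 1, 'count http 404': 1, 'count timeouts': 1}
```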

Server

server bits:
server ip: -

Mainpage statistics

mp import status: 20
mp rejected date: -
mp saved date: -
mp size orig: 6091
mp size raw text: 3887
mp inner links count: 0
mp inner links status: 1 (no links)
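
The size pair (6091 bytes of original HTML vs 3887 of raw text) and the inner-link counter suggest a strip-and-count pass over the fetched main page. Below is a stdlib-only sketch under those assumptions; the real extractor's rules are unknown.

```python
# Sketch of deriving "mp size orig", "mp size raw text" and the inner-link
# count from a fetched main page. Definitions are assumed (e.g. "inner" =
# relative or same-host links).
from html.parser import HTMLParser
from urllib.parse import urlparse

class PageStats(HTMLParser):
    def __init__(self, host):
        super().__init__()
        self.host = host
        self.text_chunks = []
        self.inner_links = 0

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            netloc = urlparse(dict(attrs).get("href") or "").netloc
            if not netloc or netloc == self.host:
                self.inner_links += 1

    def handle_data(self, data):
        self.text_chunks.append(data)

html_doc = "<html><body><h1>SEAL</h1><p>Long video representation.</p></body></html>"
stats = PageStats("seal-lvu.github.io")
stats.feed(html_doc)
raw_text = " ".join(c.strip() for c in stats.text_chunks if c.strip())
print(len(html_doc), len(raw_text), stats.inner_links)  # orig size, text size, links
```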

Open Graph

title:
description:
image:
site name:
author:
updated: 2026-02-18 23:54:06

raw text: SEAL-lvu SEAL: SEmantic Attention Learning for Long Video Representation Lan Wang 1,2 Yujia Chen 2 Du Tran 2 Vishnu Boddeti 1 Wen-Sheng Chu 2 1 Michigan State University 2 Google CVPR 2025 (Oral: 0.7% acceptance ratio) [Paper] [ArXiv] [Code (coming soon)] Long Video Representation with Semantic Attention Learning (SEAL): Conventional uniform sampling results in redundant and cluttered visual information, making it difficult for both AI models and human brains to process efficiently. Decomposing long videos into semantic entities such as scenes, objects, and actions reduces temporal redundancy, thus making model training and inference more efficient. In this example, the long video 𝒱 is decomposed into 4 scene tokens (S1--S4), 6 object tokens (O1--O6), and 4 action/event tokens (A1--A4). A query-aware attention learning module improves downstream task performance by focusing on relevant information rather ...
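
The Open Graph fields and the raw-text excerpt above are presumably scraped from the main page's meta tags and visible text. A minimal sketch of the meta-tag half follows, assuming the field names map directly onto og: properties (title, description, image, site_name); which property feeds the "updated" field is not stated.

```python
# Hedged sketch of collecting the Open Graph fields: scan <meta property="og:*">
# tags on the main page. The mapping onto the record's columns is an assumption.
from html.parser import HTMLParser

class OGParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.og = {}

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        a = dict(attrs)
        prop = a.get("property") or ""
        if prop.startswith("og:") and a.get("content"):
            self.og[prop[3:]] = a["content"]

parser = OGParser()
parser.feed('<meta property="og:title" content="SEAL" />'
            '<meta property="og:site_name" content="SEAL-lvu" />')
print(parser.og)  # {'title': 'SEAL', 'site_name': 'SEAL-lvu'}
```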

Text analysis

redirect type: 0 (-)
block type: 0 (no issues)
detected language: 1 (English)
category id: Monarchia (130)
index version: 1
spam phrases: 0
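
"detected language: 1 (English)" implies a language-detection pass over the extracted text. Below is a sketch using the langdetect package, with the en-to-1 mapping assumed to match this record's numeric ids.

```python
# Illustrative language-detection step; `pip install langdetect`. The mapping
# from ISO codes to this record's numeric ids (en -> 1, 0 -> awaiting analysis)
# is an assumption.
from langdetect import detect

LANG_IDS = {"en": 1}

raw_text = ("SEAL: SEmantic Attention Learning for Long Video Representation. "
            "Decomposing long videos into semantic entities such as scenes, "
            "objects, and actions reduces temporal redundancy.")
code = detect(raw_text)              # e.g. 'en'
print(code, LANG_IDS.get(code, 0))   # unknown codes fall back to 0
```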

Text statistics

text nonlatin: 0
text cyrillic: 0
text characters: 3040
text words: 530
text unique words: 269
text lines: 63
text sentences: 22
text paragraphs: 7
text words per sentence: 24
text matched phrases: 0
text matched dictionaries: 0
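
These counters are plain tokenisation statistics; "text words per sentence" is simply the integer ratio of words to sentences (530 / 22 ≈ 24 here). A rough sketch with assumed tokenisation rules follows, so exact numbers will differ from the record.

```python
# Rough reconstruction of the text counters; splitting rules are assumptions.
import re

def text_stats(text: str) -> dict:
    words = re.findall(r"[A-Za-z0-9']+", text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lines = [l for l in text.splitlines() if l.strip()]
    paragraphs = [p for p in text.split("\n\n") if p.strip()]
    return {
        "text characters": len(text),
        "text words": len(words),
        "text unique words": len({w.lower() for w in words}),
        "text lines": len(lines),
        "text sentences": len(sentences),
        "text paragraphs": len(paragraphs),
        "text words per sentence": len(words) // max(len(sentences), 1),
    }

print(text_stats("First sentence here. Second one follows!\n\nNew paragraph."))
```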

RSS

rss path:
rss status: 1 (priority 1 already searched, no matches found)
rss found date: -
rss size orig: 0
rss items: 0
rss spam phrases: 0
rss detected language: 0 (awaiting analysis)
inbefore feed id: -
inbefore status: 0 (new)
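
"rss status: 1 (priority 1 already searched, no matches found)" reads as a probe of a first-priority list of common feed paths that came back empty. A sketch of such a probe follows; the path list and the meaning of the priorities are assumptions.

```python
# Hedged sketch of priority-1 feed discovery: try a few common paths and stop
# at the first one that answers. The path list is an assumption.
import urllib.request

PRIORITY_1_FEED_PATHS = ["/feed", "/rss", "/rss.xml", "/atom.xml", "/index.xml"]

def find_feed(base_url):
    for path in PRIORITY_1_FEED_PATHS:
        try:
            with urllib.request.urlopen(base_url + path, timeout=10) as resp:
                if resp.status == 200:
                    return base_url + path
        except OSError:  # HTTPError, URLError, timeouts, connection problems
            continue
    return None  # no matches found -> rss status stays at 1

print(find_feed("https://seal-lvu.github.io"))
```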

Sitemap

sitemap path:
sitemap status: 1 (priority 1 already searched, no matches found)
sitemap review version: 0
sitemap urls count: 0
sitemap urls adult: 0
sitemap filtered products: 0
sitemap filtered videos: 0
sitemap found date: -
sitemap process date: -
sitemap first import date: -
sitemap last import date: -
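
The sitemap block mirrors the RSS one: a priority-1 search that found nothing, so every counter stays at zero. Below is a sketch of what the counting step might look like once a sitemap is found; the adult/product/video filters are not reconstructable from this record and are left out.

```python
# Sketch of fetching /sitemap.xml and counting <loc> entries ("sitemap urls
# count"). The path and the counting rule are assumptions.
import urllib.request
import xml.etree.ElementTree as ET

def count_sitemap_urls(base_url):
    try:
        with urllib.request.urlopen(base_url + "/sitemap.xml", timeout=10) as resp:
            root = ET.fromstring(resp.read())
    except (OSError, ET.ParseError):
        return 0  # not found or unreadable -> counters stay at 0
    ns = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
    return len(root.findall("sm:url/sm:loc", ns))

print(count_sitemap_urls("https://seal-lvu.github.io"))
```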