id
processing priority
3
site type
5 (wiki-type site, growing by topic rather than chronologically)
review version
11
html import
0 (new)
first seen date
2026-02-18 16:42:56
expired found date
-
created at
2025-12-12 09:17:46
updated at
2026-02-18 16:42:56
length
22
crc
45133
tld
86
nm parts
3
nm random digits
0
nm rare letters
0
is subdomain of id
87719371 (github.io)
previous id
0
replaced with id
0
related id
-
dns primary id
0
dns alternative id
0
lifecycle status
0 (unclassified, or currently active)
deleted subdomains
0
page imported products
0
page imported random
0
page imported parking
0
count skipped due to recent timeouts on the same server IP
0
count content received but rejected due to 11-799
0
count dns errors
0
count cert errors
0
count timeouts
0
count http 429
0
count http 404
0
count http 403
0
count http 5xx
0
next operation date
-
server bits
GITHUB COM
server ip
-
mp import status
20
mp rejected date
-
mp saved date
2026-02-18 16:42:56
mp size orig
20302
mp size raw text
3417
mp inner links count
7
mp inner links status
10 (links queued, awaiting import)
title
-
description
A simple, whitespace theme for academics. Based on [*folio](https://github.com/bogoli/-folio) design.
image
-
site name
-
author
Songlin Yang
updated
2026-03-02 08:17:30
raw text
Songlin Yang. Songlin (松琳) is a Member of Technical Staff at Thinking Machines Lab, working on language model architectures. She earned her PhD from MIT, where she was advised by Prof. Yoon Kim. Flash Linear Attention: efficient attention implementations in Triton. FLA Discord: community for Flash Linear Attention. ASAP Seminar: Advances in Sequence Modeling from Algorithmic Perspectives. Latest posts: Dec 3, 2024, DeltaNet Explained (Part III); Dec 3, 2024, DeltaNet Explained (Part II); Dec 3, 2024, DeltaNet Explained (Part I). Selected publications: Gated Linear Attention Transformers with Hardware-Efficient Training, Songlin Yang*, Bailin Wang*, Yikang Shen, Rameswar Panda, and Yoon Kim, in ICML, 2024. Abstract: Transformers with linear attention allow for efficient parallel training but can simultaneously be formulated as an RNN with 2D ...
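Aside: the abstract excerpt above describes linear attention's dual parallel/recurrent forms. A minimal sketch of the recurrent view, with illustrative shapes and plain unnormalized linear attention (an assumption; not the paper's exact gated formulation):

    import numpy as np

    # Toy dimensions (assumed): T tokens, head dimension d.
    T, d = 5, 4
    rng = np.random.default_rng(0)
    q, k, v = (rng.standard_normal((T, d)) for _ in range(3))

    S = np.zeros((d, d))                # the "2D" recurrent state
    outputs = []
    for t in range(T):
        S = S + np.outer(k[t], v[t])    # rank-1 state update per token
        outputs.append(q[t] @ S)        # readout: sum_s (q_t . k_s) v_s
    out = np.stack(outputs)             # (T, d), matches attention output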
redirect type
0 (-)
block type
0 (no issues)
detected language
1 (English)
category id
-
index version
1
spam phrases
0
text nonlatin
2
text cyrillic
0
text characters
2679
text words
461
text unique words
267
text lines
90
text sentences
16
text paragraphs
2
text words per sentence
28
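Aside: this derived field is consistent with the counters above, assuming (hypothetically) simple integer division of words by sentences; the exact rounding rule is a guess:

    # 461 words over 16 sentences; 461 / 16 is about 28.8
    words, sentences = 461, 16
    print(words // sentences)  # 28, matching the stored value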
text matched phrases
0
text matched dictionaries
0
links self subdomains
0
links other subdomains
links other domains
8 - unpkg.com, semanticscholar.org, thinkingmachines.ai, jankautz.com, yzhang.site, jekyllrb.com, unsplash.com
links spam adult
0
links spam random
0
links spam expired
0
links ext activities
11
links ext ecommerce
0
links ext finance
0
links ext crypto
0
links ext booking
0
links ext news
0
links ext leaks
0
links ext ugc
23 - linkedin.com, twitter.com, discord.gg, youtube.com
links ext klim
0
links ext generic
2
dol status
0
dol updated
2026-03-02 08:17:30
rss path
-
rss status
0 (new)
rss found date
-
rss size orig
0
rss items
0
rss spam phrases
0
rss detected language
0 (awaiting analysis)
inbefore feed id
-
inbefore status
0 (new)
sitemap path
-
sitemap status
0 (new)
sitemap review version
0
sitemap urls count
0
sitemap urls adult
0
sitemap filtered products
0
sitemap filtered videos
0
sitemap found date
-
sitemap process date
-
sitemap first import date
-
sitemap last import date
-