Main

related bits: 0
processing priority: 3
site type: 0 (generic, awaiting analysis)
review version: 11
html import: 20 (imported)

Events

first seen date: 2024-09-18 05:33:05
expired found date: -
created at: 2024-09-18 05:33:05
updated at: 2026-01-16 17:10:58

Domain name statistics

length: 8
crc: 52584
tld: 499
nm parts: 0
nm random digits: 0
nm rare letters: 0

Connections

is subdomain of id: -
previous id: 0
replaced with id: 0
related id: -
dns primary id: 0
dns alternative id: 0
lifecycle status: 0 (unclassified, or currently active)

Subdomains and pages

deleted subdomains: 0
page imported products: 0
page imported random: 0
page imported parking: 0

Error counters

count skipped due to recent timeouts on the same server IP: 0
count content received but rejected due to 11-799: 1
count dns errors: 0
count cert errors: 0
count timeouts: 0
count http 429: 0
count http 404: 0
count http 403: 0
count http 5xx: 0
next operation date: 2025-09-02 14:56:15

Server

server bits:
server ip: -

Mainpage statistics

mp import status: 20
mp rejected date: -
mp saved date: -
mp size orig: 16980
mp size raw text: 1919
mp inner links count: 0
mp inner links status: 20 (imported)

Open Graph

title: Isabelle Lee
description: A highly-customizable Hugo academic resume theme powered by Wowchemy website builder.
site name: Isabelle Lee
author: Isabelle Lee
updated: 2026-01-15 12:51:19
raw text: Isabelle Lee Isabelle Lee Isabelle Lee Publications Posts Contact Light Dark Automatic Isabelle Lee PhD Student University of Southern California I’m a 2nd year PhD student at USC, working with Dani Yogatama . I’m interested in interpretability - how we make sense of machine learning models, and how interpretability might uncover the underlying science of large-scale models. Right now, I’m exploring how large models learn during pretraining, particularly analogous to Emergence/Self-Organization , where component-level interactions lead to complex, system-wide patterns. I am specifically interested in pretraining , and broadly interested in training dynamics and methodologies. By understanding (pre-)training dynamics of LLMs, I hope to better parse and possibly develop methods to “debug” how LLMs acquire reasoning capabilities. In my past life, I used to study Physics and Complex Systems, so I borrow some approaches from that toolbox. Recent Publications...

Text analysis

redirect type: 0 (-)
block type: 0 (no issues)
detected language: 1 (English)
category id: Other [en] (231)
index version: 2025123101
spam phrases: 0

Text statistics

text nonlatin: 0
text cyrillic: 0
text characters: 1458
text words: 264
text unique words: 179
text lines: 73
text sentences: 15
text paragraphs: 2
text words per sentence: 17
text matched phrases: 2
text matched dictionaries: 2
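The "text words per sentence" field above is consistent with a derived value: 264 words over 15 sentences is about 17.6, which truncates to the reported 17. A minimal sketch of that check, assuming the field is computed by integer division (an assumption; the record does not state the rounding rule):

```python
# Verify the derived "text words per sentence" field from the two
# raw counters in this record. Integer (floor) division is an
# assumption about how the tool rounds; 264 / 15 = 17.6 exactly.
text_words = 264
text_sentences = 15

words_per_sentence = text_words // text_sentences  # floor division
print(words_per_sentence)  # matches the reported value of 17
```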

RSS

rss path:
rss status: 14 (same as 13, but no working feed was previously found; Cloudflare/parking detected from the start)
rss found date: -
rss size orig: 0
rss items: 0
rss spam phrases: 0
rss detected language: 0 (awaiting analysis)
inbefore feed id: -
inbefore status: 0 (new)

Sitemap

sitemap status: 10 (sitemap found, awaiting processing)
sitemap review version: 1
sitemap urls count: 0
sitemap urls adult: 0
sitemap filtered products: 0
sitemap filtered videos: 0
sitemap found date: 2025-07-10 19:25:59
sitemap process date: -
sitemap first import date: -
sitemap last import date: -