Main

id

3018602

processing priority

3

site type

5 (wiki-type site, growing by topic rather than chronologically)

review version

11

html import

20 (imported)

Events

first seen date

2024-03-15 14:15:31

expired found date

-

created at

2024-05-28 14:55:26

updated at

2025-12-24 04:42:58

Domain name statistics

length

20

crc

58957

tld

86

nm parts

0

nm random digits

0

nm rare letters

0

Connections

is subdomain of id

87719371 (github.io)

previous id

0

replaced with id

0

related id

-

dns primary id

0

dns alternative id

0

lifecycle status

0 (unclassified, or currently active)

Subdomains and pages

deleted subdomains

0

page imported products

0

page imported random

0

page imported parking

0

Error counters

count skipped due to recent timeouts on the same server IP

0

count content received but rejected due to 11-799

0

count dns errors

0

count cert errors

0

count timeouts

0

count http 429

0

count http 404

0

count http 403

0

count http 5xx

0

next operation date

-

Server

server bits

server ip

-

Mainpage statistics

mp import status

20

mp rejected date

-

mp saved date

-

mp size orig

31058

mp size raw text

10216

mp inner links count

3

mp inner links status

20 (imported)

Open Graph

title

description

Owain Evans is an AI Alignment researcher leading a new research group in Berkeley and affiliated with Oxford University. Discover his publications, blog posts, and collaborative opportunities on AI a

image

site name

author

updated

2025-12-06 14:34:20

raw text

Owain Evans, AI Alignment researcher Blog posts Papers Video and slides Past Mentees Collaborators Owain Evans Research Lead (new AI Safety group in Berkeley) Research Associate, Oxford University New papers (September 2023): The Reversal Curse: LLMs trained on “A is B” fail to learn "B is A" . ( Tweets , blog ) How to Catch an AI Liar: Lie Detection in Black-Box LLMs by Asking Unrelated Questions . ( Tweets , blog ) Taken out of context: On measuring situational awareness in LLMs ( Tweets , blog ) About Me I have a broad interest in AI alignment and AGI risk. My current focus is evaluating situational awareness and deception in LLMs, and on truthfulness and honesty in AI systems. I am leading a new research group based in Berkeley. In the past, I worked full-time on AI Alignment at the University of Oxford (FHI) and earned my PhD at MIT. I also worked at Ought , where I still serve on the Board of Directors. I post regul...

Text analysis

redirect type

0 (-)

block type

0 (no issues)

detected language

1 (English)

category id

Zastosowania AI ("Applications of AI") (149)

index version

2025110801

spam phrases

0

Text statistics

text nonlatin

0

text cyrillic

0

text characters

7556

text words

1410

text unique words

588

text lines

286

text sentences

59

text paragraphs

8

text words per sentence

23

text matched phrases

25

text matched dictionaries

5
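The derived words-per-sentence figure above can be cross-checked from the raw counts. A minimal sketch, assuming the indexer uses truncating integer division (round-toward-zero) rather than rounding:

```python
# Cross-check of the derived "text words per sentence" statistic.
# Assumption: the value is computed as text_words // text_sentences,
# truncating the fractional part (1410 / 59 ≈ 23.9 → 23).
text_words = 1410
text_sentences = 59

words_per_sentence = text_words // text_sentences
print(words_per_sentence)  # → 23
```

If the indexer rounded to nearest instead, the stored value would be 24, so the reported 23 is consistent with truncation.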

RSS

rss path

rss status

3 (priority-3 search completed, no matches found)

rss found date

-

rss size orig

0

rss items

0

rss spam phrases

0

rss detected language

0 (awaiting analysis)

inbefore feed id

-

inbefore status

0 (new)

Sitemap

sitemap path

sitemap status

2 (priority-2 search completed, no matches found)

sitemap review version

1

sitemap urls count

0

sitemap urls adult

0

sitemap filtered products

0

sitemap filtered videos

0

sitemap found date

-

sitemap process date

-

sitemap first import date

-

sitemap last import date

-