Main

processing priority

4

site type

3 (personal blog or private political site, e.g. Blogspot, Substack, as well as small blogs on their own domains)

review version

11

html import

20 (imported)

Events

first seen date

2024-04-23 12:15:13

expired found date

-

created at

2024-06-26 06:58:54

updated at

2026-01-07 18:46:35

Domain name statistics

length

33

crc

42748

tld

2211

nm parts

0

nm random digits

0

nm rare letters

0
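The "Domain name statistics" fields above can be sketched as feature extraction over the domain string. This is a hypothetical reconstruction: the field semantics are assumptions ("nm parts" as hyphen-separated name parts, "nm random digits" as digit count, "nm rare letters" as letters uncommon in English-language domains), and the record's actual CRC algorithm is unknown — CRC-16/CCITT-FALSE is used purely as an illustration, so its output will not match the stored value 42748.

```python
# Sketch of domain-name feature extraction. All field semantics here are
# assumptions; the real system's definitions and CRC variant are unknown.

RARE_LETTERS = set("qxzj")  # assumed set of "rare" letters

def crc16_ccitt(data: bytes, poly: int = 0x1021, init: int = 0xFFFF) -> int:
    """CRC-16/CCITT-FALSE, shown only as one plausible 16-bit checksum."""
    crc = init
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ poly) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def name_stats(domain: str) -> dict:
    label = domain.split(".")[0]  # leftmost label only (assumption)
    return {
        "length": len(domain),
        "crc": crc16_ccitt(domain.encode("ascii")),
        "nm_parts": label.count("-"),
        "nm_random_digits": sum(c.isdigit() for c in label),
        "nm_rare_letters": sum(c in RARE_LETTERS for c in label),
    }

stats = name_stats("example.com")
print(stats["length"])  # 11
```

Features like these are commonly used to flag algorithmically generated or throwaway domains, which would explain why all three "nm" counters are zero for a normal dictionary-word subdomain.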

Connections

is subdomain of id

145219781 (medium.com)

previous id

0

replaced with id

0

related id

-

dns primary id

0

dns alternative id

0

lifecycle status

0 (unclassified, or currently active)

Subdomains and pages

deleted subdomains

0

page imported products

0

page imported random

0

page imported parking

0

Error counters

count skipped due to recent timeouts on the same server IP

0

count content received but rejected due to 11-799

0

count dns errors

0

count cert errors

0

count timeouts

0

count http 429

0

count http 404

0

count http 403

0

count http 5xx

0

next operation date

2025-06-11 11:58:03

Server

server bits

server ip

-

Mainpage statistics

mp import status

20

mp rejected date

-

mp saved date

-

mp size orig

174769

mp size raw text

4900

mp inner links count

11

mp inner links status

20 (imported)

Open Graph

title

description

image

site name

author

updated

2026-01-01 19:23:09

raw text

DeepMind Safety Research – Medium Open in app Sign up Sign in Write Sign up Sign in DeepMind Safety Research 2.7K Followers Home About Oct 7, 2022 Goal Misgeneralisation: Why Correct Specifications Aren’t Enough For Correct Goals By Rohin Shah, Vikrant Varma, Ramana Kumar, Mary Phuong, Victoria Krakovna, Jonathan Uesato, and Zac Kenton. For more details, check out our paper. As we build increasingly advanced AI systems, we want to make sure they don’t pursue undesired goals. This is the primary concern of the AI alignment community. … 9 min read 9 min read Aug 25, 2022 Discovering when an agent is present in a system New, formal definition of agency gives clear principles for causal modelling of AI agents and the incentives they face. Crossposted to our Deepmind blog. See also our extended blogpost on LessWrong/Alignment Forum. We want to build safe, aligned artificial general intelligence (AGI) systems that pursue the intended goals of its… Artifi...
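The Open Graph fields above (title, description, image, site name) map onto the standard `og:*` meta tags in the page's HTML head. A minimal sketch of how such fields could be extracted, using only the standard library; the "author" and "updated" mappings used by the real importer are unknown and are not assumed here:

```python
from html.parser import HTMLParser

# Minimal Open Graph extractor: collects <meta property="og:..." content="...">
# pairs into a dict keyed by the part after "og:".

class OGParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.fields = {}

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        a = dict(attrs)
        prop = a.get("property", "")
        if prop.startswith("og:") and "content" in a:
            self.fields[prop[3:]] = a["content"]

html = '<head><meta property="og:title" content="DeepMind Safety Research – Medium"></head>'
p = OGParser()
p.feed(html)
print(p.fields["title"])  # DeepMind Safety Research – Medium
```

Empty fields in the record would then simply correspond to `og:*` tags absent from the crawled page.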

Text analysis

redirect type

0 (-)

block type

0 (no issues)

detected language

1 (English)

category id

AI [en] (229)

index version

2025123101

spam phrases

0

Text statistics

text nonlatin

0

text cyrillic

0

text characters

3815

text words

762

text unique words

357

text lines

89

text sentences

28

text paragraphs

11

text words per sentence

27

text matched phrases

6

text matched dictionaries

2
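The "Text statistics" fields above are simple counts over the raw text. The real tokenizer and sentence splitter are unknown, so the sketch below uses regex-based assumptions; note that the record's own "text words per sentence" is consistent with integer division of its other fields (762 // 28 = 27).

```python
import re

# Sketch of text-statistics derivation. Tokenization rules are assumptions,
# not the indexer's actual implementation.

def text_stats(text: str) -> dict:
    words = re.findall(r"[A-Za-z0-9'’-]+", text)
    sentences = [s for s in re.split(r"[.!?…]+", text) if s.strip()]
    paragraphs = [p for p in text.split("\n\n") if p.strip()]
    lines = [l for l in text.split("\n") if l.strip()]
    cyrillic = sum("\u0400" <= c <= "\u04FF" for c in text)
    nonlatin = sum(ord(c) > 0x024F and not c.isspace() for c in text)
    return {
        "text_nonlatin": nonlatin,
        "text_cyrillic": cyrillic,
        "text_characters": len(text),
        "text_words": len(words),
        "text_unique_words": len({w.lower() for w in words}),
        "text_lines": len(lines),
        "text_sentences": len(sentences),
        "text_paragraphs": len(paragraphs),
        # integer division, matching the record: 762 // 28 == 27
        "text_words_per_sentence": len(words) // max(len(sentences), 1),
    }
```

Example: `text_stats("One two three. Four five?\n\nSix seven eight nine.")` yields 9 words across 3 sentences in 2 paragraphs, i.e. 3 words per sentence.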

RSS

rss path

rss status

14 (same as 13, except that no working feed was previously found; Cloudflare/parking was detected from the start)

rss found date

-

rss size orig

0

rss items

0

rss spam phrases

0

rss detected language

0 (awaiting analysis)

inbefore feed id

-

inbefore status

0 (new)

Sitemap

sitemap path

sitemap status

1 (priority 1 already searched; no matches found)

sitemap review version

1

sitemap urls count

0

sitemap urls adult

0

sitemap filtered products

0

sitemap filtered videos

0

sitemap found date

-

sitemap process date

-

sitemap first import date

-

sitemap last import date

-