id
related bits
0
processing priority
3
site type
0 (generic, awaiting analysis)
review version
11
html import
20 (imported)
first seen date
2024-11-06 23:44:15
expired found date
-
created at
2024-11-06 23:44:15
updated at
2026-03-10 06:19:14
length
26
crc
54945
tld
86
nm parts
0
nm random digits
0
nm rare letters
0
is subdomain of id
87719371 (github.io)
previous id
0
replaced with id
0
related id
-
dns primary id
0
dns alternative id
0
lifecycle status
0 (unclassified, or currently active)
deleted subdomains
0
page imported products
0
page imported random
0
page imported parking
0
count skipped due to recent timeouts on the same server IP
0
count content received but rejected due to 11-799
0
count dns errors
0
count cert errors
0
count timeouts
0
count http 429
0
count http 404
0
count http 403
0
count http 5xx
0
next operation date
-
server bits
-
server ip
-
mp import status
20
mp rejected date
-
mp saved date
-
mp size orig
17242
mp size raw text
8062
mp inner links count
1
mp inner links status
20 (imported)
title
Introduction
description
We are a collection of researchers interested in using causal models to understand agents and their incentives, in order to design safe and fair AI algorithms. If you are interested in collaborating o
image
site name
Causal Incentives Working Group
author
updated
2026-03-02 02:19:52
raw text
Introduction | Causal Incentives Working Group Causal Incentives Working Group We are a collection of researchers interested in using causal models to understand agents and their incentives, in order to design safe and fair AI algorithms. If you are interested in collaborating on any related problems, feel free to reach out to us. View My GitHub Profile Introduction For an accessible overview of our work, see our blogpost sequence Towards Causal Foundations of Safe AGI . It builds on our UAI 2023 tutorial ( slides , video ). Papers Robust agents learn causal world models ( tweet summary , slides ) shows that a causal model is necessary for robust generalisation under distributional shifts. Jon Richens, Tom Everitt . ICLR, 2024 (Honorable mention outstanding paper award) The Reasons that Agents Act: Intention and Instrumental Goals ( tweet summary ): Formalises intent in causal models and connects it with a behavioural characterisation that can be appli...
redirect type
0 (-)
block type
0 (no issues)
detected language
1 (English)
category id
index version
1
spam phrases
0
text nonlatin
0
text cyrillic
0
text characters
6360
text words
1106
text unique words
481
text lines
172
text sentences
52
text paragraphs
14
text words per sentence
21
text matched phrases
0
text matched dictionaries
0
links self subdomains
0
links other subdomains
8 - cgi.cse.unsw.edu.au, conference.scipy.org, fhi.ox.ac.uk, cs.ox.ac.uk, global.oup.com, course.mlsafety.org
links other domains
16 - causalincentives.com, alignmentforum.org, slideslive.com, towardsdatascience.com, pgmpy.org, tomeveritt.se, safeandtrustedai.org, davidpreber.com, sbenthall.net, penguin.co.uk, aisafetyfundamentals.com
links spam adult
0
links spam random
0
links spam expired
0
links ext activities
24
links ext ecommerce
0
links ext finance
0
links ext crypto
0
links ext booking
0
links ext news
0
links ext leaks
0
links ext ugc
16 - youtube.com, twitter.com, deepmindsafetyresearch.medium.com, medium.com
links ext klim
0
links ext generic
2
dol status
0
dol updated
2026-03-02 02:19:52
rss path
rss status
1 (priority 1 already searched, no matches found)
rss found date
-
rss size orig
0
rss items
0
rss spam phrases
0
rss detected language
0 (awaiting analysis)
inbefore feed id
-
inbefore status
0 (new)
sitemap path
sitemap status
1 (priority 1 already searched, no matches found)
sitemap review version
2
sitemap urls count
0
sitemap urls adult
0
sitemap filtered products
0
sitemap filtered videos
0
sitemap found date
-
sitemap process date
-
sitemap first import date
-
sitemap last import date
-