Main
related bits: 0
processing priority: 3
site type: 0 (generic, awaiting analysis)
review version: 11
html import: 20 (imported)

Events
first seen date: 2024-11-06 23:44:15
expired found date: -
created at: 2024-11-06 23:44:15
updated at: 2026-03-10 06:19:14

Domain name statistics
length: 26
crc: 54945
tld: 86
nm parts: 0
nm random digits: 0
nm rare letters: 0

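The name-based counters above (length, digit count, rare-letter count) look like simple string metrics, and can be sketched as below. The pipeline's actual definitions are not documented in this record, so the rare-letter set, the hyphen-based "parts" rule, and the helper name `domain_name_stats` are all assumptions for illustration; the crc variant is unknown and is not reproduced here.

```python
# Hypothetical reconstruction of a few "Domain name statistics" fields.
# The real pipeline's definitions are unknown; the rules below are
# assumptions for illustration only.

RARE_LETTERS = set("qxz")  # assumption: letters uncommon in hostnames

def domain_name_stats(domain: str) -> dict:
    name = domain.split(".")[0]  # leftmost label (the subdomain part)
    return {
        "length": len(domain),                 # full domain, dots included
        "nm_parts": name.count("-"),           # assumption: hyphen-separated parts
        "nm_random_digits": sum(ch.isdigit() for ch in name),
        "nm_rare_letters": sum(ch in RARE_LETTERS for ch in name),
    }

print(domain_name_stats("testsite123.github.io"))
```

For a 26-character domain under github.io with no digits or rare letters, such a function would reproduce the zeros recorded above.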
Connections
is subdomain of id: 87719371 (github.io)
previous id: 0
replaced with id: 0
related id: -
dns primary id: 0
dns alternative id: 0
lifecycle status: 0 (unclassified, or currently active)

Subdomains and pages
deleted subdomains: 0
page imported products: 0
page imported random: 0
page imported parking: 0

Error counters
count skipped due to recent timeouts on the same server IP: 0
count content received but rejected due to 11-799: 0
count dns errors: 0
count cert errors: 0
count timeouts: 0
count http 429: 0
count http 404: 0
count http 403: 0
count http 5xx: 0
next operation date: -

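The HTTP counters above suggest fetch outcomes are bucketed by status code. How the record's pipeline actually classifies responses is not stated; the mapping and function name below (`counter_for_status`) are assumptions that merely mirror the counter names visible in the record.

```python
# Hypothetical bucketing of HTTP responses into the error counters above.
# Only the counter names come from the record; the rules are assumptions.
def counter_for_status(status: int):
    if status == 429:
        return "count_http_429"
    if status == 404:
        return "count_http_404"
    if status == 403:
        return "count_http_403"
    if 500 <= status <= 599:
        return "count_http_5xx"
    return None  # 2xx/3xx and other codes are not counted here

print(counter_for_status(503))  # count_http_5xx
```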
Server
server bits:
server ip: -

Mainpage statistics
mp import status: 20
mp rejected date: -
mp saved date: -
mp size orig: 17242
mp size raw text: 8062
mp inner links count: 1
mp inner links status: 20 (imported)

Open Graph
title: Introduction
description: We are a collection of researchers interested in using causal models to understand agents and their incentives, in order to design safe and fair AI algorithms. If you are interested in collaborating o
image:
site name: Causal Incentives Working Group
author:
updated: 2026-03-02 02:19:52
raw text: Introduction | Causal Incentives Working Group Causal Incentives Working Group We are a collection of researchers interested in using causal models to understand agents and their incentives, in order to design safe and fair AI algorithms. If you are interested in collaborating on any related problems, feel free to reach out to us. View My GitHub Profile Introduction For an accessible overview of our work, see our blogpost sequence Towards Causal Foundations of Safe AGI. It builds on our UAI 2023 tutorial (slides, video). Papers Robust agents learn causal world models (tweet summary, slides) shows that a causal model is necessary for robust generalisation under distributional shifts. Jon Richens, Tom Everitt. ICLR, 2024 (Honorable mention outstanding paper award) The Reasons that Agents Act: Intention and Instrumental Goals (tweet summary): Formalises intent in causal models and connects it with a behavioural characterisation that can be appli...

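The Open Graph fields above (title, description, site name) are conventionally read from `<meta property="og:...">` tags in the page head. The record's actual importer is unknown; a minimal stdlib-only sketch of how such fields are commonly collected:

```python
# Minimal Open Graph extractor sketch using only the standard library.
# This is an illustrative assumption, not the record's actual importer.
from html.parser import HTMLParser

class OGParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.og = {}

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        a = dict(attrs)
        prop = a.get("property", "")
        if prop.startswith("og:") and "content" in a:
            # store og:title as "title", og:site_name as "site_name", etc.
            self.og[prop[3:]] = a["content"]

page = """
<head>
  <meta property="og:title" content="Introduction">
  <meta property="og:site_name" content="Causal Incentives Working Group">
</head>
"""
p = OGParser()
p.feed(page)
print(p.og)  # {'title': 'Introduction', 'site_name': 'Causal Incentives Working Group'}
```

Fields absent from the page (here `image` and `author`) simply never appear in the resulting dict, matching the empty entries in the record.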
Text analysis
redirect type: 0 (-)
block type: 0 (no issues)
detected language: 1 (English)
category id: AI Applications (149)
index version: 1
spam phrases: 0

Text statistics
text nonlatin: 0
text cyrillic: 0
text characters: 6360
text words: 1106
text unique words: 481
text lines: 172
text sentences: 52
text paragraphs: 14
text words per sentence: 21
text matched phrases: 0
text matched dictionaries: 0

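The counters above are consistent with simple tokenizer counts: the reported 21 words per sentence matches 1106 // 52. The pipeline's actual tokenization rules are not documented here, so the regex word split and sentence split below are assumptions for illustration.

```python
# Illustrative reconstruction of a few "Text statistics" counters.
# Tokenization rules (regex word split, sentence split on .!?) are
# assumptions, not the pipeline's actual definitions.
import re

def text_stats(text: str) -> dict:
    words = re.findall(r"[A-Za-z0-9']+", text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return {
        "characters": len(text),
        "words": len(words),
        "unique_words": len({w.lower() for w in words}),
        "sentences": len(sentences),
        "words_per_sentence": len(words) // max(len(sentences), 1),
    }

sample = "Robust agents learn causal world models. Agents act with intent."
print(text_stats(sample))
```

On the record's main page this style of count would yield the 1106 words / 481 unique words / 52 sentences figures, assuming the same tokenization.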
RSS
rss path:
rss status: 1 (priority 1 already searched, no matches found)
rss found date: -
rss size orig: 0
rss items: 0
rss spam phrases: 0
rss detected language: 0 (awaiting analysis)
inbefore feed id: -
inbefore status: 0 (new)

Sitemap
sitemap path:
sitemap status: 1 (priority 1 already searched, no matches found)
sitemap review version: 2
sitemap urls count: 0
sitemap urls adult: 0
sitemap filtered products: 0
sitemap filtered videos: 0
sitemap found date: -
sitemap process date: -
sitemap first import date: -
sitemap last import date: -