id:
processing priority: 3
site type: 0 (generic, awaiting analysis)
review version: 11
html import: 20 (imported)
first seen date: 2024-10-17 15:07:57
expired found date: -
created at: 2024-10-17 15:07:57
updated at: 2026-02-17 02:46:44
length: 22
crc: 7226
tld: 86
nm parts: 0
nm random digits: 0
nm rare letters: 0
is subdomain of id: 87719371 (github.io)
previous id: 0
replaced with id: 0
related id: -
dns primary id: 0
dns alternative id: 0
lifecycle status: 0 (unclassified, or currently active)
deleted subdomains: 0
page imported products: 0
page imported random: 0
page imported parking: 0
count skipped due to recent timeouts on the same server IP: 0
count content received but rejected due to 11-799: 0
count dns errors: 0
count cert errors: 0
count timeouts: 0
count http 429: 0
count http 404: 0
count http 403: 0
count http 5xx: 0
next operation date: -
server bits: -
server ip: -
mp import status: 20
mp rejected date: -
mp saved date: -
mp size orig: 24648
mp size raw text: 10060
mp inner links count: 8
mp inner links status: 20 (imported)
title:
description: Webpage of Jonas Geiping. Based on [*folio](https://github.com/bogoli/-folio) design.
image:
site name:
author: Jonas Geiping
updated: 2026-02-15 02:25:41
raw text: Jonas Geiping Toggle navigation About Me (current) Group Research Code Openings CV Jonas Geiping Independent Research Group Leader ELLIS Institute & MPI-IS Tübingen, Germany ELLIS Institute Maria-von-Linden Straße 2 Hi, I’m Jonas. I am a ML researcher in Tübingen, where I lead the research group for safety- & efficiency- aligned learning (🦭). Before this, I’ve spent time at the University of Maryland and the University of Siegen. I am constantly fascinated by questions of safety and efficiency in modern machine learning. There are a number of fundamental machine learning questions that come up in these topics that we still do not understand well. In safety, examples are questions about the principles of data poisoning, the subtleties of water-marking for generative models, privacy questions in federated learning, or adversarial attacks against large language models. Or, more generally: Can we ever make these models “safe”, and how do we define this? A...
redirect type: 0 (-)
block type: 0 (no issues)
detected language: 1 (English)
category id: AI [en] (229)
index version: 2025123101
spam phrases: 0
text nonlatin: 0
text cyrillic: 0
text characters: 8096
text words: 1506
text unique words: 635
text lines: 110
text sentences: 62
text paragraphs: 12
text words per sentence: 24
text matched phrases: 11
text matched dictionaries: 6
links self subdomains: 0
links other subdomains:
links other domains: 5 - unpkg.com, semanticscholar.org, ellis.eu, learning-systems.org
links spam adult: 0
links spam random: 0
links spam expired: 0
links ext activities: 7
links ext ecommerce: 0
links ext finance: 0
links ext crypto: 0
links ext booking: 0
links ext news: 0
links ext leaks: 0
links ext ugc:
links ext klim: 0
links ext generic: 2
dol status: 0
dol updated: 2026-02-15 02:25:41
rss path:
rss status: 1 (priority 1 already searched, no matches found)
rss found date: -
rss size orig: 0
rss items: 0
rss spam phrases: 0
rss detected language: 0 (awaiting analysis)
inbefore feed id: -
inbefore status: 0 (new)
sitemap path:
sitemap status: 40 (completed successful import of reports.txt file to table in_pages)
sitemap review version: 2
sitemap urls count: 14
sitemap urls adult: 0
sitemap filtered products: 0
sitemap filtered videos: 0
sitemap found date: 2024-10-17 15:07:58
sitemap process date: 2024-10-17 15:07:58
sitemap first import date: -
sitemap last import date: 2026-02-04 06:24:22