id
related bits
0
processing priority
4
site type
3 (personal blog or private political site, e.g. Blogspot or Substack, including small blogs on their own domains)
review version
11
html import
20 (imported)
first seen date
2024-10-14 06:08:50
expired found date
-
created at
2024-10-14 06:08:50
updated at
2026-03-03 00:22:57
length
25
crc
24245
tld
2211
nm parts
0
nm random digits
0
nm rare letters
0
is subdomain of id
13642151 (wordpress.com)
previous id
0
replaced with id
0
related id
-
dns primary id
0
dns alternative id
0
lifecycle status
0 (unclassified, or currently active)
deleted subdomains
0
page imported products
0
page imported random
0
page imported parking
0
count skipped due to recent timeouts on the same server IP
0
count content received but rejected due to 11-799
0
count dns errors
0
count cert errors
0
count timeouts
0
count http 429
0
count http 404
0
count http 403
0
count http 5xx
0
next operation date
-
server bits
-
server ip
-
mp import status
20
mp rejected date
-
mp saved date
-
mp size orig
111694
mp size raw text
11723
mp inner links count
19
mp inner links status
20 (imported)
title
PrayogShala
description
Innovation is what distinguishes a leader from a follower - Steve Jobs
image
site name
PrayogShala
author
updated
2026-02-24 07:19:01
raw text
PrayogShala | "Innovation is what distinguishes a leader from a follower" – Steve Jobs September 20, 2014 taT4Py | Recursively Search Regex Patterns [UPDATE: 09/28/2014] I have mainly used python for text parsing, validation and transforming as needed. If it was done using shell script, I would end up writing variety of regular expression to play around. Getting Started Well, python is no different and in order to cook up regular expressions, one must import re (module) and get started. import re So far, I have been able to use the patterns exactly the same way as I would with grep or sed. Usually, I end up writing multiple search patterns, as the script evolves. While using python, I find it intuitive to create dictionary of compiled searc...
redirect type
0 (-)
block type
0 (no issues)
detected language
1 (English)
category id
230
index version
2025123101
spam phrases
0
text nonlatin
0
text cyrillic
0
text characters
7769
text words
1655
text unique words
541
text lines
368
text sentences
52
text paragraphs
29
text words per sentence
31
text matched phrases
1
text matched dictionaries
2
links self subdomains
0
links other subdomains
5 - docs.python.org
links other domains
0
links spam adult
0
links spam random
0
links spam expired
0
links ext activities
0
links ext ecommerce
0
links ext finance
0
links ext crypto
0
links ext booking
0
links ext news
0
links ext leaks
0
links ext ugc
30 - s0.wp.com, wp.me, s1.wp.com, wordpress.com, twitter.com
links ext klim
0
links ext generic
0
dol status
0
dol updated
2026-02-24 07:19:01
rss status
32 (unknown)
rss found date
2024-10-14 06:08:51
rss size orig
23381
rss items
14
rss spam phrases
0
rss detected language
1 (English)
inbefore feed id
-
inbefore status
0 (new)
sitemap path
sitemap status
40 (import of reports.txt file into table in_pages completed successfully)
sitemap review version
2
sitemap urls count
47
sitemap urls adult
0
sitemap filtered products
0
sitemap filtered videos
0
sitemap found date
2024-10-14 06:08:51
sitemap process date
2024-10-14 06:08:51
sitemap first import date
-
sitemap last import date
2025-12-26 03:25:49