timeschliman.com

Tim Eschliman - Musician, Songwriter, Producer, Web Developer, Vinyl DJ

Back pOrchEstra: Acoustic Roots Rock, Singer/Songwriter, Americana with... Tim Eschliman, Bowen Brown, Candy Girard... New lockdown music by the newly formed Shelter-in-Place-ins...


Site SEO for timeschliman.com

Tags : H1: 0, H2: 0, H3: 0, H4: 0, H5: 0
Images : 5 images on this page; 1 has an alt attribute
Frames : 0 embedded frames on this page
Flash : 0 Flash objects on this page
Size : 11,868 characters
Meta Description : No
Meta Keyword : No
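
The page reports no meta description or meta keywords tag. As a minimal sketch, the missing tags could look like this (the description and keyword values are hypothetical, assembled from the site title above; the charset matches the ISO-8859-1 reported below):

<head>
  <meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
  <meta name="description" content="Tim Eschliman: musician, songwriter, producer, web developer, and vinyl DJ.">
  <meta name="keywords" content="Tim Eschliman, Back pOrchEstra, Americana, roots rock, vinyl DJ">
</head>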

Majestic Backlinks for timeschliman.com

About timeschliman.com

Domain : timeschliman.com
MD5 : d3aeac8d2965fb37012edc7bb3a003f0
Charset : ISO-8859-1
Web server : Apache
IP Address : 66.29.145.250
# robots.txt for http://www.globerecords.com/
## 7-8-07 upped slurp to 15, added msnbot limit to 10, upped voila to 15, added google delay of 6
# last out of memory error was june 18, 2007

#Additional Symbols
#Additional symbols allowed in the robots.txt directives include:
#
#'*' - matches a sequence of characters
#'$' - anchors at the end of the URL string
#
#Using Wildcard Match: '*'
#A '*' in robots directives is used to wildcard match a sequence of characters in your URL. You can use this symbol in any part of the URL string that you provide in the robots directive.
#
#Example of '*':
#  User-agent: Slurp
#  Allow: /public*/
#  Disallow: /*_print*.html
#  Disallow: /*?sessionid
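#
#Example of '$' (a hypothetical illustration, not part of the original file; paths are invented):
#  User-agent: Slurp
#  Disallow: /*.pdf$
#  Allow: /public/brochure.pdf$
#The Disallow rule blocks only URLs that end in ".pdf"; without the '$' it would also match URLs merely containing ".pdf".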
  
User-agent: *
Disallow: /cgi-bin/
Disallow: /fm/
Disallow: /manager/
Disallow: /plugins/
Disallow: /images/
Disallow: /users/
Disallow: /backup/

User-agent: Slurp # Yahoo
Crawl-delay: 50

User-agent: Fatbot # TheFind
Crawl-delay: 40

User-agent: msnbot
Crawl-delay: 40

User-agent: Twiceler
Crawl-delay: 40

User-agent: Googlebot
Crawl-delay: 20

User-agent: Charlotte # searchme; trying crawl-delay instead of disallow, starting 6-19-08
Crawl-delay: 50
#Disallow: /

User-agent: ShopWiki
Crawl-delay: 40

User-agent: ia_archiver
#Disallow: /cgi-bin
Disallow: /test/ # test area
Disallow: /demo/ # client demos
Disallow: /includes/ 
Disallow: /tda/
Disallow: /scripts/

User-agent: Teoma
#Disallow: /cgi-bin
Disallow: /test/ # test area
Disallow: /demo/ # client demos
Disallow: /includes/ 
Disallow: /tda/
Disallow: /scripts/

User-agent: NuSearch Spider
Disallow: /cgi-bin
Disallow: /test/ # test area
Disallow: /demo/ # client demos
Disallow: /includes/ 
Disallow: /tda/
Disallow: /scripts/

User-agent: VoilaBot
Crawl-delay: 30
Disallow: /cgi-bin
Disallow: /test/ # test area
Disallow: /demo/ # client demos
Disallow: /includes/ 
Disallow: /tda/
Disallow: /scripts/

User-agent: Gigabot
Crawl-delay: 30
Disallow: /cgi-bin # maybe have to deactivate this
Disallow: /test/ # test area
Disallow: /demo/ # client demos
Disallow: /includes/ 
Disallow: /tda/
Disallow: /scripts/