repositorio.inprf.gob.mx

Home

REPOSITORIO, Instituto Nacional de Psiquiatría Ramón de la Fuente Muñiz (INPRFM). The Institutional Repository's objective is ...


Alexa stats for repositorio.inprf.gob.mx

Site Seo for repositorio.inprf.gob.mx

Tags: H1: 1, H2: 7, H3: 0, H4: 0, H5: 0
Images: 4 images on this website; 3 have alt attributes
Frames: 0 embeds on this website
Flash: 0 Flash objects on this website
Size: 29,596 characters
Meta Description: No
Meta Keywords: No

Majestic Backlinks for repositorio.inprf.gob.mx

About repositorio.inprf.gob.mx

Domain: repositorio.inprf.gob.mx
MD5: 27a1b0f84c9cbd2a265e4a5ed51ce943
Google Analytics: UA-Compatible
Charset: UTF-8
Web server: Apache-Coyote/1.1
IP Address: 132.247.16.23
robots.txt for repositorio.inprf.gob.mx:
# The FULL URL to the DSpace sitemaps
# The http://localhost:8080/jspui will be auto-filled with the value in dspace.cfg
# XML sitemap is listed first as it is preferred by most search engines
Sitemap: http://localhost:8080/jspui/sitemap
Sitemap: http://localhost:8080/jspui/htmlmap

##########################
# Default Access Group
# (NOTE: blank lines are not allowable in a group record)
##########################
User-agent: *
# Disable access to Discovery search and filters
Disallow: /discover
Disallow: /search-filter

#
# Optionally uncomment the following line ONLY if sitemaps are working
# and you have verified that your site is being indexed correctly.
# Disallow: /browse
# Disallow: /handle/123456789/*/browse
#
# If you have configured DSpace (Solr-based) Statistics to be publicly 
# accessible, then you may not want this content to be indexed
# Disallow: /statistics
#
# You also may wish to disallow access to the following paths, in order
# to stop web spiders from accessing user-based content
# Disallow: /contact
# Disallow: /feedback
# Disallow: /forgot
# Disallow: /login
# Disallow: /register


##############################
# Section for misbehaving bots
# The following directives to block specific robots were borrowed from Wikipedia's robots.txt
##############################

# advertising-related bots:
User-agent: Mediapartners-Google*
Disallow: /

# Crawlers that are kind enough to obey, but which we'd rather not have
# unless they're feeding search engines.
User-agent: UbiCrawler
Disallow: /

User-agent: DOC
Disallow: /

User-agent: Zao
Disallow: /

# Some bots are known to be trouble, particularly those designed to copy
# entire sites. Please obey robots.txt.
User-agent: sitecheck.internetseer.com
Disallow: /

User-agent: Zealbot
Disallow: /

User-agent: MSIECrawler
Disallow: /

User-agent: SiteSnagger
Disallow: /

User-agent: WebStripper
Disallow: /

User-agent: WebCopier
Disallow: /

User-agent: Fetch
Disallow: /

User-agent: Offline Explorer
Disallow: /

User-agent: Teleport
Disallow: /

User-agent: TeleportPro
Disallow: /

User-agent: WebZIP
Disallow: /

User-agent: linko
Disallow: /

User-agent: HTTrack
Disallow: /

User-agent: Microsoft.URL.Control
Disallow: /

User-agent: Xenu
Disallow: /

User-agent: larbin
Disallow: /

User-agent: libwww
Disallow: /

User-agent: ZyBORG
Disallow: /

User-agent: Download Ninja
Disallow: /

# Misbehaving: requests much too fast:
User-agent: fast
Disallow: /

#
# If your DSpace is going down because of someone using recursive wget, 
# you can activate the following rule.
#
# If your own faculty is bringing down your dspace with recursive wget,
# you can advise them to use the --wait option to set the delay between hits.
#
#User-agent: wget
#Disallow: /

#
# The 'grub' distributed client has been *very* poorly behaved.
#
User-agent: grub-client
Disallow: /

#
# Doesn't follow robots.txt anyway, but...
#
User-agent: k2spider
Disallow: /

#
# Hits many times per second, not acceptable
# http://www.nameprotect.com/botinfo.html
User-agent: NPBot
Disallow: /

# A capture bot, downloads gazillions of pages with no public benefit
# http://www.webreaper.net/
User-agent: WebReaper
Disallow: /
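A well-behaved crawler can check the rules above before fetching any page. As a minimal sketch, the excerpt below mirrors a few of the directives and tests them with Python's standard urllib.robotparser; the bot names and example URLs are illustrative, not taken from the site.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical excerpt mirroring the rules above (illustrative only)
robots_txt = """\
User-agent: *
Disallow: /discover
Disallow: /search-filter

User-agent: WebReaper
Disallow: /
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# The wildcard group blocks /discover for everyone...
print(rp.can_fetch("GoodBot", "http://example.org/discover"))      # False
# ...but leaves item pages crawlable for unlisted agents...
print(rp.can_fetch("GoodBot", "http://example.org/handle/1/2"))    # True
# ...while a named misbehaving bot is blocked site-wide.
print(rp.can_fetch("WebReaper", "http://example.org/handle/1/2"))  # False
```

Note that when a specific User-agent group matches (here, WebReaper), only that group applies; the wildcard rules are ignored for it, which is why the misbehaving bots above each get their own Disallow: / record.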