Indeed, seeing as there is no robots.txt file to begin with, adding one for this makes
sense.
I don't entirely trust robots.txt, but having one can only be better than not having one.
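For what it's worth, a minimal one might look something like this (just a sketch, assuming
the monitoring servlet stays under /xwiki/monitoring; well-behaved crawlers such as
Baiduspider should honour it, but it is only a hint, not access control):

    User-agent: *
    Disallow: /xwiki/monitoring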
Caleb
On 01/08/2012 07:22 AM, Denis Gervalle wrote:
Why not simply use robots.txt? It is made for this,
no?
On Sun, Jan 8, 2012 at 02:07, Caleb James DeLisle <calebdelisle(a)lavabit.com> wrote:
I noticed
xwiki.org lagging and the Java process using lots of
CPU; although RAM usage was high, it didn't seem to be a major issue.
I noticed this:
180.76.5.100 - - [07/Jan/2012:21:13:00 +0100] "GET
/xwiki/monitoring?part=graph&graph=sql34bb8b9d9f525e0790dab487491d120d4bc685cd&period=annee
HTTP/1.1" 500 398 "-" "Mozilla/5.0 (compatible;
Baiduspider/2.0; +http://www.baidu.com/search/spider.html)"
And seeing as it seems you can do things like take a heap dump and run the garbage collector
with GET requests, that page should probably be made difficult to access.
One idea that comes to mind would be using mod_rewrite to rewrite the URL if there is
no auth cookie set, so any logged-in user can view it but a roving bot can't.
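Something roughly along these lines might work (an untested sketch, assuming Apache sits in
front of the servlet container; the cookie name to test, "username" here, is a guess and
would need to be checked against what XWiki actually sets on login):

    RewriteEngine On
    # Return 403 for the monitoring page unless an auth cookie is present.
    RewriteCond %{HTTP_COOKIE} !username= [NC]
    RewriteRule ^/xwiki/monitoring - [F,L]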
Caleb
--
Denis Gervalle
SOFTEC sa - CEO
eGuilde sarl - CTO