On 07/20/2011 09:19 AM, brian wrote:
> Ok.
> This may not be the first time it happens...
> My XWiki instance is open on the internet, and it got indexed by a web
> spider, which managed to access URLs like "GET /xwiki/bin/save/Thing".
I was about to suggest that you check if there's a crawler that
accidentally deletes stuff.
> So, several things:
> - I obviously have a permission problem with some pages
> - I should have used the robots.txt described at
> http://platform.xwiki.org/xwiki/bin/view/AdminGuide/Performances (even
> though it recommends it for "performance" reasons, not security reasons)
The safest thing to do is to fix permissions.
> - But *why on earth* would XWiki let you edit resources with GET
> requests? Isn't it against all REST principles?
Well, it's not quite such a simple problem. REST principles are fine when
it comes to machines and software, but REST wasn't designed with human
users (behind browsers) in mind. A quick example: plain HTML forms can
only submit GET and POST, so browsers can't issue PUT and DELETE
requests directly, only via scripts.
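For a machine client that distinction is a non-issue; a few lines of Java
(hypothetical endpoint URL, just for illustration) can send any verb,
while a plain HTML form is stuck with GET and POST:

    import java.io.IOException;
    import java.net.HttpURLConnection;
    import java.net.URL;

    // Hypothetical endpoint; the point is only that programmatic clients
    // get the full HTTP verb set, which no plain HTML form does.
    public class RestDelete {
        public static void main(String[] args) throws IOException {
            URL url = new URL(
                "http://myserver/xwiki/rest/wikis/xwiki/spaces/Main/pages/Foo");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("DELETE"); // or "PUT"; forms can do neither
            System.out.println("HTTP " + conn.getResponseCode());
        }
    }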
Not caring whether a request comes in via GET or POST makes it much
simpler to write code. It's actually the default behavior of Struts to
ignore this: while basic servlets do make a distinction by offering
doGet() and doPost() methods, Struts Actions only have execute(). And
even in the servlet specification, a request has getParameter(), not
getGetParameter() and getPostParameter(). One of my bigger dislikes in
PHP is the $_GET and $_POST arrays, which enforce the distinction
between GET and POST parameters.
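To make that concrete, here is a minimal sketch (a hypothetical servlet,
not XWiki code) showing both points: the container dispatches GET and
POST to different methods, yet getParameter() serves query-string and
form-body parameters alike:

    import java.io.IOException;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Hypothetical servlet, only to illustrate the API shape.
    public class SaveServlet extends HttpServlet {
        // The servlet container does distinguish the HTTP method...
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            save(req, resp);
        }

        protected void doPost(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            save(req, resp);
        }

        private void save(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            // ...but getParameter() does not: it returns the value whether
            // it arrived in the query string (GET) or the body (POST).
            String content = req.getParameter("content");
            if (content == null) {
                resp.sendError(HttpServletResponse.SC_BAD_REQUEST,
                        "Missing content");
                return;
            }
            resp.getWriter().println("Saved " + content.length() + " chars");
        }
    }

A Struts 1 Action is one step further removed: execute() never sees the
HTTP method at all unless you call request.getMethod() yourself.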
Personally, being very fond of keyboard input, I often just use the
browser's location bar to quickly update documents, manually entering
/save/ or /delete/ URLs, which are obviously fetched via GET. And I loved
that this worked, at least until recently.
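For example, a URL along these lines (server name and parameter usage
illustrative) updates a page straight from the address bar:

    http://myserver/xwiki/bin/save/Sandbox/TestPage?content=Hello+world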
The upcoming 3.2 release will actually partially fix this problem by
enabling CSRF protection via secure tokens. Crawlers won't have the
token, so /save/ URLs won't work without it, and most URLs reachable
from the HTML source will be safe.
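I won't guess at the actual 3.2 implementation here, but the general
shape of such a check, as a hypothetical servlet filter mapped only to
state-changing URLs like /save/ and /delete/, is roughly this (all names
invented for illustration):

    import java.io.IOException;
    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.FilterConfig;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Hypothetical filter; the parameter and session attribute names are
    // made up, not taken from XWiki's code.
    public class CsrfTokenFilter implements Filter {
        public void init(FilterConfig config) {
        }

        public void doFilter(ServletRequest request, ServletResponse response,
                FilterChain chain) throws IOException, ServletException {
            HttpServletRequest req = (HttpServletRequest) request;

            // The server generates a random token once per session and
            // embeds it in every form it renders; a crawler following bare
            // links never obtains a valid token for its own session.
            String expected = (String) req.getSession()
                    .getAttribute("csrf_token");
            String actual = req.getParameter("form_token");

            if (expected == null || !expected.equals(actual)) {
                ((HttpServletResponse) response).sendError(
                        HttpServletResponse.SC_FORBIDDEN,
                        "Missing or invalid CSRF token");
                return;
            }
            chain.doFilter(request, response);
        }

        public void destroy() {
        }
    }

With such a check in place, a crawler blindly following links can at
worst hit the 403 page.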
But yes, this should be solved.
Actually, I think the problem is not the /save/ actions, since such URLs
shouldn't be found in the HTML source, except in a few places where they
stand behind some "update" actions, like "hide blog post". The problem
is the /delete/ actions, since from your description it looks like the
Panels.PanelClass and Blog.BlogPostClass classes have been deleted. This
is actually a bug: the delete action should be confirmed via a form with
real POST buttons, but the UI (HTML source) actually wraps the buttons
in <a> links with the supposedly "safe" URL in the clear.
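In servlet terms the fix is the classic split sketched below
(hypothetical names again): GET may only render the confirmation form,
and the destructive work happens exclusively in doPost():

    import java.io.IOException;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Hypothetical servlet showing the GET-confirms / POST-deletes split.
    public class DeleteServlet extends HttpServlet {
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            // Safe: a crawler or link prefetcher landing here only gets a
            // confirmation form, never a deletion.
            resp.setContentType("text/html");
            resp.getWriter().println(
                    "<form method='post' action='" + req.getRequestURI() + "'>"
                    + "<input type='submit' value='Yes, delete this page'/>"
                    + "</form>");
        }

        protected void doPost(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            deleteDocument(req.getRequestURI()); // the only destructive path
            resp.getWriter().println("Deleted");
        }

        private void deleteDocument(String uri) {
            // Placeholder for the real storage call.
        }
    }

That way, nothing a spider can reach by following plain <a> href links
ever changes state.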
I find it very disturbing.