Vincent Massol wrote:
Hi everyone,
I don't know if you're like me but I have the strong feeling we should
be better at not introducing regressions. One of the goals of XE 1.3,
and now again of XE 1.4, was more stability and more automated tests.
Several problems are becoming apparent:
1) People using XWiki expect more stability when they upgrade. This is
because XWiki is improving in general and more people are using it.
They expect that it'll just work, and that's a reasonable expectation.
2) We've introduced several important regressions over the past *4*
releases (login/logout/RMUI/Escapes and more). We're following a bad
trend.
3) We've recently introduced several storage changes (Sergiu and
Artem) and I haven't seen tests proving that what was working before
still works. I'm not saying it doesn't, but the last time we made a
change to the storage area it took us several months to stabilize it,
and we cannot afford that again.
4) We're committing more code than tests, meaning the overall quality
of XWiki is degrading :(
Thus I'd like to propose that:
A) We become very, very careful when committing things, and we only
commit when we can *guarantee* that what we've done is working (with a
given level of confidence, of course). This can only be achieved by
committing tests at the same time as the code.
B) We stop putting changes that are NOT critical in point releases. For
example, dangerous changes were made in the WYSIWYG editor for 1.3.1
and I'm not confident this was a good thing, certainly not with so few
tests and verifications, since we know that whenever we touch that
editor we introduce problems elsewhere.
C) In general, we reduce the number of changes we commit and instead
focus on tests and stability. This is indeed one of the general goals
for 1.4.
WDYT?
While I would like that very much, most of the time I find it almost
impossible to write (meaningful) tests. Sure, when dealing with user
interface features we can write Selenium tests to click here and there
and see what happens, and when dealing with algorithmic stuff we can
write a unit test to check that an input gives the expected output. But
there are things that can't be tested either way, because they need a
running wiki. For example, how am I to test the attachment storage?
Mocking the storage is not the right thing, as the storage is precisely
what I am testing. Selenium tests are not suited for this either, as I
can't upload files through them. I could abuse the Selenium framework
to write tests programmatically, mimicking the upload action, but that
also requires too much work and code duplication.
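
To give an idea of the amount of code involved, here's a minimal
sketch of such a programmatic upload test. Everything in it is an
assumption on my side: a wiki running at http://localhost:8080/xwiki,
Admin/admin credentials accepted via basic auth, Commons HttpClient
3.x on the classpath, and an upload form field named "filepath":

import static org.junit.Assert.assertEquals;

import java.io.File;
import java.io.FileWriter;

import org.apache.commons.httpclient.HttpClient;
import org.apache.commons.httpclient.UsernamePasswordCredentials;
import org.apache.commons.httpclient.auth.AuthScope;
import org.apache.commons.httpclient.methods.GetMethod;
import org.apache.commons.httpclient.methods.PostMethod;
import org.apache.commons.httpclient.methods.multipart.FilePart;
import org.apache.commons.httpclient.methods.multipart.MultipartRequestEntity;
import org.apache.commons.httpclient.methods.multipart.Part;
import org.junit.Test;

public class AttachmentUploadTest
{
    // Hypothetical base URL of a locally running wiki.
    private static final String BASE = "http://localhost:8080/xwiki/bin";

    @Test
    public void uploadedAttachmentCanBeDownloadedBack() throws Exception
    {
        HttpClient client = new HttpClient();
        // Assumes the wiki accepts preemptive basic authentication.
        client.getParams().setAuthenticationPreemptive(true);
        client.getState().setCredentials(AuthScope.ANY,
            new UsernamePasswordCredentials("Admin", "admin"));

        // A file with known content to push through the upload action.
        File file = File.createTempFile("attachment", ".txt");
        FileWriter writer = new FileWriter(file);
        writer.write("known content");
        writer.close();

        // POST the file exactly as the browser upload form would.
        PostMethod upload =
            new PostMethod(BASE + "/upload/Sandbox/AttachmentTest");
        Part[] parts = { new FilePart("filepath", file) };
        upload.setRequestEntity(
            new MultipartRequestEntity(parts, upload.getParams()));
        // Assumes the upload action redirects (302) on success.
        assertEquals(302, client.executeMethod(upload));
        upload.releaseConnection();

        // Read the attachment back through the download action.
        GetMethod download = new GetMethod(
            BASE + "/download/Sandbox/AttachmentTest/" + file.getName());
        assertEquals(200, client.executeMethod(download));
        assertEquals("known content", download.getResponseBodyAsString());
        download.releaseConnection();
    }
}

Even this small round trip has to duplicate authentication handling,
multipart encoding and URL layout that the wiki already implements,
which is what I mean by too much work and code duplication.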
I hope that once we have everything as components, testing will be a
lot easier.
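
For instance, if the attachment storage sat behind a small interface,
components that merely use storage could be unit tested against an
in-memory double, and only the real implementation would need a heavy
round-trip test like the one above. A sketch with entirely
hypothetical names (there is no AttachmentStore interface today):

import java.util.HashMap;
import java.util.Map;

// Hypothetical interface, not an existing XWiki component role.
interface AttachmentStore
{
    void save(String document, String fileName, byte[] content);

    byte[] load(String document, String fileName);
}

// In-memory double: enough for testing storage *users* without a
// running wiki; the real store gets tested separately through the
// same narrow interface.
class InMemoryAttachmentStore implements AttachmentStore
{
    private final Map<String, byte[]> data = new HashMap<String, byte[]>();

    public void save(String document, String fileName, byte[] content)
    {
        this.data.put(document + "/" + fileName, content);
    }

    public byte[] load(String document, String fileName)
    {
        return this.data.get(document + "/" + fileName);
    }
}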
--
Sergiu Dumitriu
http://purl.org/net/sergiu/