On Dec 8, 2011, at 10:21 AM, Denis Gervalle wrote:
On Thu, Dec 8, 2011 at 09:52, Vincent Massol
<vincent(a)massol.net> wrote:
On Dec 8, 2011, at 9:01 AM, Thomas Mortagne wrote:
Hi devs,
Now that master just moved to 3.4-SNAPSHOT I would like to merge my
refactoring of the component manager. You can find the branch on
https://github.com/xwiki/xwiki-commons/tree/feature-improvecm.
The rationale is that it will then be indirectly tested during
the whole 3.4 timeframe. You can never be too careful with the most
critical code.
I already detailed this in another mail, but the major difference from
the current implementation is that it locks a lot less, and since the CM
is pretty heavily used (and is going to be used more and more) it should
make a noticeable difference. It also fixes several bugs I found while
doing this refactoring and covering it with tests.
Here are the related jira issues:
*
http://jira.xwiki.org/browse/XCOMMONS-63
*
http://jira.xwiki.org/browse/XCOMMONS-65
*
http://jira.xwiki.org/browse/XCOMMONS-64
*
http://jira.xwiki.org/browse/XCOMMONS-66
Here is my +1
+1
How are we going to measure the performance improvements?
Do we really need to waste time on measuring that precisely? Do you see
any reason for it to be really worse?
Well, there are several things involved here:
* Verifying that it works. I don't think Thomas has verified that yet (I mean in heavy
multithreaded situations). Better to do it in a test than to wait for it to happen live.
It's also quite simple to write, as my example shows; see the sketch right after this list.
* I don't see how testing could be bad.
* It's interesting to see whether we gain time. We've seen with a profiler that the
time spent in the ECM is about 15% of the overall time, which is quite significant,
so yes, it's good to know whether we're improving it or not, because if we are we
can advertise it in the release notes. I prefer knowing over hiding my head in the
sand.
* It provides a baseline against which we can measure future progress.
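To make the first point concrete, here is a minimal sketch of the kind of harness I mean
(not the actual example I linked in the JIRA comment quoted below; every name here is a
placeholder): it runs the same lookup task from many threads and fails the test as soon
as any worker hits an exception.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public final class ConcurrentStress
{
    private ConcurrentStress()
    {
    }

    /**
     * Runs the given task a fixed number of times in each of the given threads
     * and rethrows the first failure, if any.
     */
    public static void run(final Runnable task, int threads, final int iterations)
        throws Exception
    {
        ExecutorService executor = Executors.newFixedThreadPool(threads);
        List<Future<Void>> results = new ArrayList<Future<Void>>();
        for (int i = 0; i < threads; i++) {
            results.add(executor.submit(new Callable<Void>()
            {
                public Void call()
                {
                    for (int j = 0; j < iterations; j++) {
                        task.run();
                    }
                    return null;
                }
            }));
        }
        executor.shutdown();
        executor.awaitTermination(10, TimeUnit.MINUTES);
        for (Future<Void> result : results) {
            // get() rethrows any exception raised in a worker thread, failing the test.
            result.get();
        }
    }
}

A test would then call something like ConcurrentStress.run(lookupTask, 50, 10000), where
lookupTask performs a component lookup against the refactored CM; any race that surfaces
as an exception fails the build.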
I'd propose that we add a performance unit test so that we can compare
the two implementations.
I can think of at least two tools for this:
* ContiPerf: http://databene.org/contiperf
I had written a quick minimalist test here:
http://jira.xwiki.org/browse/XWIKI-6164?focusedCommentId=59460&page=com…
* Tempus-fugit: http://code.google.com/p/tempus-fugit/wiki/Documentation?tm=6
ContiPerf seems the best to me.
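For illustration, a ContiPerf-based test could look something like the sketch below. The
role/implementation names, the thresholds and the ComponentManager setup are placeholders
(written against the 3.x component API), not the actual test:

import org.databene.contiperf.PerfTest;
import org.databene.contiperf.Required;
import org.databene.contiperf.junit.ContiPerfRule;
import org.junit.BeforeClass;
import org.junit.Rule;
import org.junit.Test;
import org.xwiki.component.descriptor.DefaultComponentDescriptor;
import org.xwiki.component.embed.EmbeddableComponentManager;

public class ComponentManagerPerfTest
{
    /** Placeholder role and implementation, used only for this sketch. */
    public interface DummyRole
    {
    }

    public static class DummyComponent implements DummyRole
    {
    }

    @Rule
    public ContiPerfRule contiPerfRule = new ContiPerfRule();

    private static EmbeddableComponentManager componentManager;

    @BeforeClass
    public static void setUpComponentManager() throws Exception
    {
        // Assumed setup: create a standalone CM and register the dummy component
        // so that lookups have something to resolve.
        componentManager = new EmbeddableComponentManager();
        DefaultComponentDescriptor<DummyRole> descriptor =
            new DefaultComponentDescriptor<DummyRole>();
        descriptor.setRole(DummyRole.class);
        descriptor.setImplementation(DummyComponent.class);
        componentManager.registerComponent(descriptor);
    }

    @Test
    @PerfTest(invocations = 100000, threads = 20)
    @Required(average = 1)
    public void concurrentLookup() throws Exception
    {
        // 20 threads share the 100,000 invocations; the run fails if the average
        // lookup takes longer than 1 ms (both numbers are purely illustrative).
        componentManager.lookup(DummyRole.class);
    }
}

Running the same test against the old and the new implementation would give us the
comparison, and ContiPerf's @Required thresholds (max, average, percentiles, throughput)
could make the build fail on regressions if we want that.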
We wouldn't run this test as part of the main test suite, but it could
either be run manually or be triggered by a Maven profile.
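For example (just a sketch; the system property name is made up), the test could skip
itself unless a flag that such a profile would set is present:

import org.junit.Assume;
import org.junit.Before;

/** Base class a performance test could extend so it only runs when asked for. */
public abstract class AbstractPerformanceTest
{
    @Before
    public void onlyRunWhenExplicitlyEnabled()
    {
        // Skip (rather than fail) unless the build passes -Dxwiki.test.performance=true,
        // e.g. from a "performance" Maven profile. The property name is hypothetical.
        Assume.assumeTrue(Boolean.getBoolean("xwiki.test.performance"));
    }
}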
WDYT?
The changes made are all favorable to a performance improvement, but they
also improve code quality and maintainability, so knowing precisely how much
we really gain does not seem important to me. I do not really see the added
value for end users.
Errr?
So you're suggesting not to write tests anymore because they don't bring value to
end users… Come on, Denis, I thought you were better than that! :)
Thanks
-Vincent