Interesting. I did some simple instrumentation of #template, and for the
page Sandbox.WebHome we get:
(times in ms)
Time frequentlyUsedDocs.vm: 3
Time deprecatedVars.vm: 1
Time xwikivars.vm: 11
Time layoutExtraVars.vm: 1
Time layoutvars.vm: 7
Time colorThemeInit.vm: 2
Time stylesheets.vm: 5
Time analytics.vm: 0
Time javascript.vm: 9
Time htmlheader.vm: 36
Time menuview.vm: 19
Time global.vm: 3
Time header.vm: 4
Time startpage.vm: 78
Time contentmenu.vm: 6
Time frequentlyUsedDocs.vm: 1
Time deprecatedVars.vm: 1
Time xwikivars.vm: 7
Time hierarchy.vm: 25
Time titlevars.vm: 2
Time shortcuts.vm: 2
Time contentview.vm: 37
Time frequentlyUsedDocs.vm: 1
Time deprecatedVars.vm: 1
Time xwikivars.vm: 7
Time documentTags.vm: 12
Time frequentlyUsedDocs.vm: 1
Time deprecatedVars.vm: 1
Time xwikivars.vm: 9
Time commentsinline.vm: 12
Time docextra.vm: 15
Time leftpanels.vm: 1
Time rightpanels.vm: 50
Time footer.vm: 2
Time htmlfooter.vm: 0
Time endpage.vm: 54
Time view.vm: 216
In Firebug, the full page load is about 10ms more than the view.vm time.
As we can see:
- the panels (quick links and recent changes) cost 50ms -> ~23% of view.vm
- startpage costs 78ms -> ~36%
- the breadcrumb (hierarchy.vm) costs 25ms -> ~12%
- some templates are run several times (one repeat is due to AJAX, the others are not)
- 37 templates are called in total
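A quick sanity check of these shares, computed from the view.vm breakdown above (all numbers taken from the instrumentation output):

```python
# Per-template times (ms) copied from the instrumentation above.
timings = {
    "rightpanels.vm": 50,   # the panels
    "startpage.vm": 78,
    "hierarchy.vm": 25,     # the breadcrumb
}
total = 216  # total time reported for view.vm

for name, ms in timings.items():
    print(f"{name}: {ms}ms -> {ms / total:.0%}")
# rightpanels.vm: 50ms -> 23%
# startpage.vm: 78ms -> 36%
# hierarchy.vm: 25ms -> 12%
```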
If we implement caching in the panels, the breadcrumb and part of the
start page, we could save roughly a third of the overall skin time.
If we save 1ms per template invocation, the 37 invocations would save
37ms, about 17% of the overall skin time.
The results on the home page (2 to 3 seconds) show that the dynamic
code is, of course, the main slow-down to look at. A panel with a list
of changes or categories is far more costly than the whole skin; the
dashboard page is even more costly, and a long Syntax 2.0 page is also
quite expensive.
So implementing caches on all of this is a good way to keep performance up.
Ludovic
On 05/03/11 at 23:56, Ludovic Dubost wrote:
Good points Paul,
While I was working on a first report (
http://dev.xwiki.org/xwiki/bin/view/Design/PageLoadTimeReport30SnapShot1
), I also realized that I did not mention velocity enough here.
I do have caches in mind to improve performance (we have the cache
macro in the 3.0 trunk and in the 2.7 branch), but it's true I did not
mention it.
One reason is that a lot of the velocity time is spent not in the page
itself but in the main templates, and that part does not seem easy to
cache, since almost all templates produce context-specific results
(based on the current user).
However, maybe we ought to look into that more, and reorganize the
templates into those that always give a stable result and those that
don't. I was thinking that some templates could declare that they are
cacheable. The same is probably true for pages, which their editors
could mark as fully cacheable.
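The idea of templates declaring themselves cacheable could be sketched like this (all names here are hypothetical illustrations, not an existing XWiki API): templates that declare a stable result are served from a cache keyed on the context they depend on, such as the user, while all others are re-rendered every time.

```python
# Hypothetical sketch: a render cache for templates that declare
# themselves cacheable. The key includes the user, since most
# template output is user-specific.
CACHEABLE = {"startpage.vm", "hierarchy.vm"}  # assumed stable templates
_cache = {}

def render(template, user, do_render):
    """do_render() produces the template output; reuse it when allowed."""
    if template not in CACHEABLE:
        return do_render()  # context-specific: always re-render
    key = (template, user)
    if key not in _cache:
        _cache[key] = do_render()
    return _cache[key]

calls = []
def fake_render():
    calls.append(1)
    return "html"

render("startpage.vm", "alice", fake_render)
render("startpage.vm", "alice", fake_render)
print(len(calls))  # 1 -- the second call is served from the cache
```

A real implementation would also need invalidation (on document save, panel change, etc.), which is the hard part Ludovic alludes to.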
In any case, what is sure is that we need a good analysis of the time
spent in velocity and in the templates, and of the load this generates
on the server. In the end we do suffer from re-running the same
velocity over and over, even though it always gives the same result.
It is also true that it would make sense to provide tools to measure
the performance of the applications built with XWiki, not only the
base product.
I'll wait for more feedback and we'll improve the plan.
Ludovic
On 05/03/11 at 22:02, Paul Libbrecht wrote:
Ludovic,
First, one of the biggest performance wins on the web is the use of
caches. I see nothing about caching mentioned there, and I feel it
should definitely be covered.
Providing a system where velocity macros and pages can report that
they have not been modified since a given time (the one the browser
indicates) would probably make more than 50% of XWiki-served pages
display instantaneously.
This should certainly be measured: a comparison between what happens
today and what would happen if such a clean If-Modified-Since
treatment existed.
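The If-Modified-Since treatment Paul describes can be sketched in a few lines (a minimal generic illustration, not XWiki code): if the page's last modification time is not newer than the date the browser sent, the server answers 304 Not Modified and skips rendering and transfer entirely.

```python
from email.utils import parsedate_to_datetime, format_datetime
from datetime import datetime, timezone

def respond(last_modified, if_modified_since_header):
    """Return an HTTP status: 304 if the browser's cached copy is fresh."""
    if if_modified_since_header:
        since = parsedate_to_datetime(if_modified_since_header)
        if last_modified <= since:
            return 304  # nothing to re-render or re-send
    return 200  # render the page as usual

page_time = datetime(2011, 3, 1, tzinfo=timezone.utc)
header = format_datetime(datetime(2011, 3, 5, tzinfo=timezone.utc))
print(respond(page_time, header))  # 304
```

The hard part in XWiki's case is computing `last_modified` cheaply, since a page's output depends on more than the page itself (panels, user, skin).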
Secondly, another area where I think page-delivery time is too often
lost in XWiki is the lack of streaming. So far I can only stream by
outputting more velocity; I cannot stream from a groovy page that is
called, and I fear that quite often velocity still calls toString
methods instead of streaming, say, a property value.
Again, it would be interesting to analyze this statistically. My claim
is that streaming would lower memory allocation considerably, and
hence the time taken to process a page.
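The streaming point can be illustrated generically (this is a sketch of the concept, not XWiki internals): toString-style rendering materializes the whole result in memory before sending it, while streaming writes each chunk to the output as soon as it is produced, keeping peak memory proportional to one chunk.

```python
import io

def render_buffered(rows):
    # toString-style: build the entire result in memory, then send it.
    return "".join(f"<li>{r}</li>" for r in rows)

def render_streamed(rows, out):
    # Streaming: each chunk goes to the output writer immediately,
    # so the full result is never held in memory at once.
    for r in rows:
        out.write(f"<li>{r}</li>")

out = io.StringIO()
render_streamed(["a", "b"], out)
print(out.getvalue() == render_buffered(["a", "b"]))  # True: same output
```

Both produce identical output; the difference only shows up in allocation and time-to-first-byte on large results.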
Thirdly, removing unused JS and CSS is, to me, only one step; it would
be highly desirable to have (integrated) tools that measure the
overlap between the various CSS sources. The complexity of the CSS is
one of the places where Curriki probably struggles the most.
Finally, the measures you indicate on this page (and also those I
recommend) seem strongly application-specific. It would be rather nice
to have re-runnable tests, as part of an admin toolkit, so that each
application can draw its own conclusions.
As a result, the objective of dividing load time by 2 seems quite
artificial to me, though certainly enjoyable; it should be something
each application applies for itself.
paul
On 5 March 2011 at 10:14, Ludovic Dubost wrote:
Hi,
Here is a first draft of the investigation into "page load time", with
a proposed action plan:
http://dev.xwiki.org/xwiki/bin/view/Design/PageLoadTime
My next step will be to run a "manual" test, take some measures, and
propose "obvious" improvements we could make, if there are any.
Comments welcome. The questions are:
- are the goals OK?
- are the measures the right ones?
- can we run automated measures?
- what is missing in this document?
Ludovic
--
Ludovic Dubost
Blog:
http://blog.ludovic.org/
XWiki:
http://www.xwiki.com
Skype: ldubost GTalk: ldubost
_______________________________________________
devs mailing list
devs(a)xwiki.org
http://lists.xwiki.org/mailman/listinfo/devs