Hi devs,
As part of the STAMP research project, we’ve developed a new tool (Descartes, based on Pitest) to measure the quality of tests. It generates a mutation score for your tests, indicating how good the tests are. Technically, Descartes performs some extreme mutations on the code under test (e.g. removing the content of void methods, returning true for methods returning a boolean, etc - see https://github.com/STAMP-project/pitest-descartes). If a test continues to pass on the mutated code, it means it’s not killing the mutant, and thus the mutation score decreases.
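To illustrate with a minimal sketch (a hypothetical example, not actual Descartes output): for a method returning a boolean, Descartes replaces the entire method body with "return true;". A test kills that mutant only if it asserts the false case somewhere:

import static org.junit.Assert.assertFalse;
import org.junit.Test;

public class StringUtilsTest
{
    // Code under test. Descartes's extreme mutation replaces the whole
    // method body with: return true;
    static class StringUtils
    {
        static boolean isEmpty(String value)
        {
            return value == null || value.length() == 0;
        }
    }

    // This test kills the "return true" mutant: assertFalse() fails when
    // isEmpty() has been mutated to always return true.
    @Test
    public void isEmptyReturnsFalseForNonEmptyString()
    {
        assertFalse(StringUtils.isEmpty("content"));
    }
}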
So in short:
* Jacoco/Clover: measure how much of the code is tested
* Pitest/Descartes: measure how good the tests are
Both provide a percentage value.
I’m proposing to compute the current mutation scores for xwiki-commons and xwiki-rendering and to fail the build when new code is added that reduces the mutation score below the threshold (exactly the same strategy as our Jacoco threshold).
I consider this an experiment to push the limits of software engineering a bit further. I don’t know how well it’ll work. I propose to do the work, test this for 2-3 months, and then decide whether to keep it (i.e. whether the gains it brings are more important than the problems it causes).
Here’s my +1 to try this out.
Some links:
* pitest: http://pitest.org/
* descartes: https://github.com/STAMP-project/pitest-descartes
* http://massol.myxwiki.org/xwiki/bin/view/Blog/ControllingTestQuality
* http://massol.myxwiki.org/xwiki/bin/view/Blog/MutationTestingDescartes
If you’re curious, you can see a screenshot of a mutation score report at http://massol.myxwiki.org/xwiki/bin/download/Blog/MutationTestingDescartes/…
Please cast your votes.
Thanks
-Vincent
Hi developers,
I am trying to add a new filter to the notifications to be able to follow pages that are marked with a given tag, and it leads me to some questions about the technical implementation of the notifications.
As a reminder of the context: notifications are computed on top of the events recorded by the event stream (a.k.a. activity stream). We take events from the event stream SQL table, apply some transformations on them, and display them to the user.
Then we implemented the ability to filter these events: for example, "don't show events concerning the document A nor the wiki B". Filters are implemented in 2 distinct ways:
1/ SQL injection: each filter can add SQL elements to the query we make to fetch the events from the event stream table. We made this mechanism so we can let the database do a lot of the filtering work; after all, that's its job and it's supposed to perform well. To be precise, Clement has even created an Abstract Syntax Tree (AST) so it's easier to inject some content into the query, and it creates an abstraction over the SQL language, so we could even consider changing the storage of the event stream someday.
   The bad thing is that some complex filters are difficult or even impossible to write in SQL (even with the AST).
2/ Post-filtering: after the events have been fetched from the database, each filter can still decide to keep or reject them. This is useful for complex filtering that cannot be expressed in SQL. It is also needed by the real-time notification email sender, because it takes the events immediately when they occur, without fetching them from the database (so the SQL filters are bypassed).
   The bad thing is that some events are loaded into memory only to finally be rejected, and these filters can perform costly operations such as loading documents.
Until now, this double mechanism has been working quite well, with each mechanism compensating for the weaknesses of the other.
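To make this concrete, here is a simplified sketch of the double mechanism, with hypothetical names (the real XWiki API differs):

// Simplified sketch; the names below are hypothetical, not the actual
// XWiki interfaces.
public interface NotificationFilter
{
    // Mechanism 1 (SQL injection): contribute a restriction to the event
    // stream query so the database drops unwanted events early. Returns
    // null when the filter cannot be expressed in SQL/HQL.
    String getQueryRestriction(String userId);

    // Mechanism 2 (post-filtering): applied in memory after the events have
    // been fetched. Also used by the real-time email sender, which takes
    // events as they occur and thus bypasses the SQL restrictions.
    boolean keepEvent(Object event, String userId);
}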
However, we still have technical limitations in our design:
1/ Users who have a lot of filter preferences can end up with a giant SQL query that is almost impossible for the database to execute. We actually had a user complaining about an OutOfMemory problem in the HQL to SQL translator!
2/ I cannot implement the tag filter!
The tag filter is supposed to show events concerning pages that hold a given tag, EVEN IF THE PAGE WAS EXCLUDED BY THE USER. Example of use case: "I don't want to receive notifications about wiki A, except for pages marked with the tag T".
And it is not working. First because it is difficult to write a SQL query for that: it requires a join with the document and object tables, which our SQL injection mechanism does not support. Even if it were possible, creating a SQL join with the document table would de facto filter out events that do not concern any page, or that concern pages without any objects, so many other filters would be broken. I am not considering a SQL subquery either; I think the whole query would become too big. So I decided not to inject any SQL code for this filter and to only implement the post-filtering mechanism.
But the other filter, "EXCLUDE WIKI A", generates a SQL injection such as "WIKI <> 'WIKI A'", so the events concerning wiki A are never fetched from the database. Consequence: the tag filter never sees the events that it is supposed to keep. It would actually be possible to bypass the first SQL injections by injecting something like "OR 1=1", but doing that amounts to dropping the whole SQL injection mechanism.
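To make the conflict concrete (a hypothetical illustration of the composed restriction, not the actual generated HQL):

// Each filter's SQL injection is ANDed into the event stream query. With
// "exclude wiki A" active, the fetch roughly becomes:
String hql = "select event from ActivityEvent event"
    + " where event.wiki <> 'wikiA'"; // injected by the "exclude wiki A" filter
// The tag filter injects nothing, so events from wiki A never leave the
// database, and the tag filter's post-filtering step never gets a chance
// to re-include the pages marked with tag T.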
I see some solutions to this problem:
A/ For each tag, maintain a permanent list of pages that hold it, so I can inject "OR document IN (that_list)". I think this is heavy.
B/ Drop the SQL injection mechanism and rely only on the post-filtering mechanism. It would require loading A LOT of events from the database, but maybe we could cache this.
C/ Don't drop the SQL injection mechanism completely, but use it as little as possible (for example, do not use it for LOCATION filtering). It seems hard to determine when a filter should use this feature or not.
D/ Don't implement the "tags" filter, since it is the root of the issue. But that is like sweeping dirt under the carpet!
Since we have the OutOfMemory problem caused by the SQL injections becoming too huge, I am more in favor of solution B or C. But I'm not sure for now, since I do not know how much it would impact the performance and scalability of the whole notifications feature.
This is a complex topic, but I hope this message will inspire some suggestions or reveal things I have not seen with my own eyes.
Thanks for your help,
Guillaume
Hi devs,
I’d like to give you some info about what I’ve started working on and check that you like the direction I’m proposing to take for the future of functional testing on the XWiki project.
Needs
=====
* Be able to test xwiki on multiple environments
Context
======
* Right now we test only in 1 env (Jetty+HSQLDB)
* I've started some docker images in xwiki-contrib
* I’ve also started some experiments through https://jira.xwiki.org/browse/XWIKI-14929 and https://jira.xwiki.org/browse/XWIKI-14930 (see also the email thread “[Brainstorming] Implementing multi-environment tests - Take 2” and https://github.com/xwiki/xwiki-platform/compare/XWIKI-14929-14930). This email supersedes the “[Brainstorming] Implementing multi-environment tests - Take 2” thread.
* Initially I imagined doing the multi env testing in Jenkins thanks to the Jenkins Docker plugin/library. However I realized that it would be better to be able to run that on the dev machines and thus decided instead to implement it at the maven level thanks to the Fabric8 Maven plugin.
Proposal
=======
* The new proposal is to stop trying to do it at the Maven level and instead do it at the Java level, i.e. be able to control (start/stop) the various docker images for the DB, Servlet Container/XWiki and the Browser from within the Java JUnit/Selenium tests.
* There are several java libraries existing to control docker from within java. For example: https://github.com/docker-java/docker-java
* I got convinced when finding this awesome library that combines JUnit5/Selenium and Docker for multi-browser testing: https://bonigarcia.github.io/selenium-jupiter/
** Note that this relies on the browser docker images provided by the Selenoid project: https://aerokube.com/selenoid/latest/
* So the idea is to extend that to be able to control the other 2 docker containers for the DB + ServletContainer/XWiki.
Pros
====
* Very simple setup to start/stop functional tests (and to debug them). Only requires Docker to be installed locally.
* Very simple to test any combination of DB/Servlet Container/Browser.
* Always up to date images with the latest version (we can depend on LATEST of Browser images, MySQL, Tomcat, etc).
* Using JUnit5 and thus the latest features
* Moving to the latest Selenium version too
* Also supports manually executing tests in a given running xwiki instance
Implementation
============
Something like:

// XWikiSeleniumExtension extends SeleniumExtension
@ExtendWith(XWikiSeleniumExtension.class)
public class Test
{
    @Test
    public void xxx(XWikiWebDriver driver)
    {
        …
    }
}
And be able to configure the DB to use, the Servlet container to use, and the packaging to use from system properties (and also from the test itself, see https://bonigarcia.github.io/selenium-jupiter/#generic-driver).
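For example, the JUnit5 extension could read something like this (the property names below are hypothetical at this stage):

// Hypothetical system properties (names not final) read by the extension
// to decide which docker images to start:
String database = System.getProperty("xwiki.test.database", "hsqldb");
String servletContainer = System.getProperty("xwiki.test.servletContainer", "jetty");
String browser = System.getProperty("xwiki.test.browser", "chrome");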
The idea is to reimplement the XWiki Packaging Maven plugin as a Java lib using Aether and to just start our functional tests using pure JUnit, without anything more. All the hard work will be performed by the JUnit5 extension (create the packaging if it does not already exist, update parts of it if files have been modified, start/stop DB+Servlet+Browser+Selenium, download the docker images).
The packaging will be configurable. Some ideas of options:
* use an already running xwiki instance
* docker image created from a full XS zip downloaded from a URL
* docker image created from an XS zip coming from a maven artifact
* docker image created from a packaging computed based on the pom in the current dir
Migration
=======
Once a first version is working, it’ll be easy to use it first for a single platform functional test module and then slowly move each module to the new way for its functional tests.
WDYT?
I’m planning to continue my investigation/development of this. So please let me know if you have feedback.
Thanks
-Vincent
Hi devs,
I'm making a REST resource to get a list of pages and, for a query, I want to specify an icon (as metadata) for each page in the resulting JSON.
The problem is that the icon APIs (and more specifically the IconManager class) only allow us to render the icon in HTML or velocity, and this shouldn't be put inside a JSON response.
Also, we can't hardcode the icon class or image URL to be used, as it depends on the icon set configured for the wiki. Another possibility would be to render the icon using JavaScript, but it would not be very efficient.
As discussed with Marius, our proposal would be to add a new method to the IconManager to get either the icon URL (e.g. http://xwiki.org/xwiki/resources/icons/silk/page.png) or the icon class (e.g. fa fa-page), depending on the specified icon set.
We could then add this new property to the icon theme definition:
## Silk
xwiki.iconset.render.json=$xwiki.getSkinFile("icons/silk/${icon}.png")
## FontAwesome
xwiki.iconset.render.json=fa fa-$icon
We could name the new method renderJSON, or something more generic (if you have any ideas).
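As a sketch, the new method could look like this (the name and signature below are hypothetical, to be decided):

// Hypothetical addition to IconManager (name not final):
public interface IconManager
{
    // ... existing rendering methods ...

    // Returns the raw icon value for the currently configured icon set, e.g.
    // "http://xwiki.org/xwiki/resources/icons/silk/page.png" for Silk or
    // "fa fa-page" for FontAwesome, suitable for a JSON response.
    String renderJSON(String iconName);
}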
WDYT?
Thanks,
Adel
Hi devs!
I would like to propose a new extension called
application-extramacrocontent.
Currently, macros can only provide one content field, which generally contains the main body of the macro. Parameters are used to pass additional information to the macro, but they are not the best place for large text.
With this application, it would be possible to add multiple content fields inside a macro, therefore adding new possibilities to macro creation and alleviating the problem of passing multiple big inputs to wiki macros.
I'm not really sure about the name of the extension itself, so if you
have any better idea, please let me know :)
Thanks,
Clément
Hi devs,
One error we see during upgrades is https://www.xwiki.org/xwiki/bin/view/Documentation/UserGuide/Features/Distr…
I was wondering if we couldn't fix it, for example by also storing the database version in the status.xml file (and updating it when there are migrations). At startup, if we see a mismatch between the version in the DB and the version in the status.xml file, we would start the DW, asking the user if they are performing an upgrade.
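A minimal sketch of the proposed startup check (every name below is hypothetical, not an actual XWiki API):

// Compare the schema version recorded in the DB with the one saved in
// status.xml; a mismatch suggests the WAR was replaced without running
// the Distribution Wizard.
public void checkDistributionStatus(String databaseVersion, String statusXmlVersion)
{
    if (!databaseVersion.equals(statusXmlVersion)) {
        // Start the Distribution Wizard and ask the user whether they are
        // performing an upgrade.
        startDistributionWizard();
    }
}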
I’m mentioning this in the context of trying to make the install/upgrade process as simple as possible for users, and even correcting their mistakes when we can.
WDYT? good idea? bad idea? Any other idea?
Thanks
-Vincent
The XWiki development team is proud to announce the availability of XWiki
10.5.
This release improves the visibility of the save button in edit mode and completes the backward compatibility of the new Notifications with the old Activity Stream by handling messages. Admins get more options for deciding how to protect extension pages from user modifications, as well as easy customization options for the Navigation panel.
You can download it here: http://www.xwiki.org/xwiki/bin/view/Main/Download
Make sure to review the release notes:
http://www.xwiki.org/xwiki/bin/view/ReleaseNotes/Data/XWiki/10.5
Thanks for your support
-The XWiki dev team