Hi devs,
As you probably know, UsersClass and GroupsClass override the newProperty
implementation from ListClass and hard-code the usage of
LargeStringProperty. The implementation from ListClass takes into account
the relational storage and multiple selection meta properties, while
UsersClass and GroupsClass completely ignore them. Do you have any
idea why? It's been like this for more than 11 years.
The hard coded LargeStringProperty was introduced in
https://github.com/xwiki/xwiki-platform/commit/e4800bd2ebf97d2e12282ab56ff5…
even though at that point ListClass#newProperty was already taking into
account relational storage.
As for the hard-coded StringProperty that preceded it, it has been there
since the start of the XWiki history I can access on GitHub, same as the
ListClass#newProperty implementation that takes relational storage into
account.
So I have no idea why we had to override ListClass#newProperty in
UsersClass and GroupsClass.
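For readers less familiar with this code, here is a minimal, self-contained sketch of the difference being discussed. These are NOT the real XWiki classes, just an illustration: ListClass#newProperty honours the relational storage and multiple selection meta properties, while the UsersClass/GroupsClass overrides ignore them and always pick LargeStringProperty.

```java
// Simplified model of the two newProperty behaviors; NOT the actual
// XWiki implementation, only an illustration of the difference.
class NewPropertySketch {

    // Mirrors what ListClass#newProperty does: honour the meta properties.
    static String listClassNewProperty(boolean relationalStorage, boolean multiSelect) {
        if (relationalStorage && multiSelect) {
            return "DBStringListProperty"; // one row per value
        } else if (multiSelect) {
            return "StringListProperty";
        } else {
            return "StringProperty";
        }
    }

    // Mirrors the UsersClass/GroupsClass override: meta properties are ignored.
    static String usersClassNewProperty(boolean relationalStorage, boolean multiSelect) {
        return "LargeStringProperty"; // stored as CLOB on Oracle
    }

    public static void main(String[] args) {
        // Even with relational storage + multiple selection enabled,
        // the override wins and we still get a LargeStringProperty:
        System.out.println(listClassNewProperty(true, true));
        System.out.println(usersClassNewProperty(true, true));
    }
}
```
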
The big problem is that in the current state the Users and Groups
properties cannot be filtered in Oracle because they are stored as CLOB.
See http://jira.xwiki.org/browse/XWIKI-14634 and
https://jira.xwiki.org/browse/XWIKI-15500 .
Fixing this by removing UsersClass#newProperty and GroupsClass#newProperty
requires a migrator and breaks existing queries that join the
LargeStringProperty table to get the users and groups values. Is it
acceptable to break those queries? I'm afraid there are quite a lot of
them, especially since we have examples of such queries on
https://extensions.xwiki.org/xwiki/bin/view/Extension/Query+Module .
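For reference, the kind of query that would break looks like the following. This is an illustrative HQL example in the style of the ones documented on the Query Module page; the class and property names here are only for illustration, not a quote of an actual documented query.

```sql
select doc.fullName
from XWikiDocument doc, BaseObject obj, LargeStringProperty prop
where doc.fullName = obj.name
  and obj.className = 'XWiki.XWikiGlobalRights'
  and prop.id.id = obj.id
  and prop.id.name = 'users'
  and prop.value like '%XWiki.Admin%'
```

Any query joining LargeStringProperty like this would stop returning results once the users/groups values move to a different property table.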
WDYT?
Thanks,
Marius
Hi, we are running into an issue with our wiki security cache where we notice
a large dump of the cache every 4 hours. The reason this happens every 4
hours is that we have configured our Infinispan expiry time to 4 hours;
however, we would expect a gradual expiration of the cache.
After investigating the DefaultSecurityCache logic, we discovered that the
dispose() method is being called whenever infinispan attempts to expire a
security cache entry. When disposing an entry, it will rightfully disconnect
the entry from its parent(s) and remove all children.
What happens is that the XWiki root page ("xwiki:XWiki") is one of the first
entries to be created. As such, it is one of the first entries to be
expired. When it is expired, it removes all children from the cache as well.
This results in the removal of all user pages ("xwiki:XWiki.user1",
"xwiki:XWiki.user2", ...) as well as our permission groups (stored under
"xwiki:XWiki.POSIX.group1", "xwiki:XWiki.LDAP.group2", ...). It also
removes all child entries linking to documents ("xwiki:user1@@Document").
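To illustrate the cascade we are seeing, here is a minimal, self-contained sketch (hypothetical names, NOT the actual DefaultSecurityCache code) of a parent/child cache where disposing one entry also drops all of its descendants:

```java
import java.util.*;

// Minimal model of the cascading disposal described above: disposing an
// entry removes the entry itself and, recursively, all of its children.
class SecurityCacheSketch {
    private final Map<String, Set<String>> children = new HashMap<>();
    private final Set<String> entries = new LinkedHashSet<>();

    void add(String key, String parent) {
        entries.add(key);
        if (parent != null) {
            children.computeIfAbsent(parent, k -> new LinkedHashSet<>()).add(key);
        }
    }

    // Mirrors dispose(): drop the entry and, recursively, all its children.
    void dispose(String key) {
        entries.remove(key);
        for (String child : children.getOrDefault(key, Collections.emptySet())) {
            dispose(child);
        }
        children.remove(key);
    }

    int size() { return entries.size(); }

    public static void main(String[] args) {
        SecurityCacheSketch cache = new SecurityCacheSketch();
        cache.add("xwiki:XWiki", null);
        cache.add("xwiki:XWiki.user1", "xwiki:XWiki");
        cache.add("xwiki:XWiki.user2", "xwiki:XWiki");
        cache.add("xwiki:user1@@Document", "xwiki:XWiki.user1");
        // Expiring the root entry wipes everything below it:
        cache.dispose("xwiki:XWiki");
        System.out.println(cache.size()); // 0
    }
}
```
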
In order to fix this, we were planning on adding logic around the
DefaultSecurityCache.dispose() method to skip disposal of certain core pages
("xwiki:XWiki.POSIX" and "xwiki:XWiki.LDAP"), as there is no real benefit
to removing them from the security cache, but we need to further investigate
possible risks or side effects.
Some questions we had for the XWiki dev team were:
1. Where are XWiki groups stored? Are they not under the document
"xwiki:XWiki.group1"?
2. If they are stored under a shared parent, shouldn't they run into the
same issue as we are experiencing where clearing the "xwiki:XWiki" entry
also removes all user entries and groups?
Hi.
[TL;DR]
This thread is about the way we store notification filter preferences for
each user. The constraint is that there can be a lot of them (700 is a
number a user recently reported). So how should we store them?
[Full text]
= Definition =
So what is a filter preference? It's a generic object that can store many
elements, such as page locations, application names, event types, etc.
They describe a configuration of a given filter for a given user. For
example, a filter preference can say "for the ScopeNotificationFilter and
the user A, include the location Main.WebHome", just as it could say "for
the UserNotificationFilter and the user A, exclude the user SPAM". It's
generic.
The main usage is for page locations (ScopeNotificationFilter). By default,
we have the "autowatch" mode enabled. It means every time a user modifies a
page, a filter preference for this page and this user is created. So if a
user modifies 700 pages, he gets 700 filter preferences.
= How are they stored =
Currently, we have a simple implementation. There is a generic XClass
called "XWiki.Notifications.Code.NotificationFilterPreferenceClass". For
each preference, we add an XObject on the user page. It's that simple. But
it also means that if a user has 700 filter preferences, she also gets
700 XObjects on her page, and 700 revisions of that page. Which is a pain:
it takes a lot of space in the document cache, and it's heavy to load
(lots of SQL queries needed). So we have a big problem here.
= Possible solutions =
== A: Minimize the number of xobjects needed for ScopeNotificationFilter ==
Currently, one location is represented by 1 filter preference. But most
filter preferences are very similar. They almost all say "for the
ScopeNotificationFilter, for all event types, for all applications, the
filter preference is enabled". The only different part is the actual
location. But the "location" field is itself a LIST stored with the
"relational storage" option. So we can take advantage of it and store
similar preferences into 1 single object.
1 object with 700 locations instead of 700 objects with 1 location.
However, it's a bit harder than this. Even if the
NotificationFilterPreference is generic and can contain many locations,
the ScopeNotificationFilter expects it to concern only one location (it
then performs complex operations to sort the filter preferences
according to a hierarchy). The UI in the user profile makes the same
assumption, so it does not handle multiple locations in the same preference
object. Refactoring this is not simple and cannot be done for 10.6.
=== Variation 1: store only 1 xobject, but make the API return 700
preferences objects anyway ===
This is the variation I am prototyping. Actually, it's OK if the filters and
the UI expect only 1 location in the preference object. All we have to
do is "smash" the xobject into the many NotificationFilterPreference
objects that we need internally. It would simply be the responsibility of
the store to detect similarities and to save the minimal number of XObjects
needed to store a bunch of preferences.
But it means being very smart when loading, creating, updating and deleting
a preference. Not having one xobject per filter preference introduces
complexity, and complexity can lead to bugs. Again, given the time
frame, it's hard to implement.
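As an illustration of this "smash" approach (hypothetical names, not the actual store API): one stored record holding N locations is expanded into N single-location preference objects when loading, and similar single-location preferences are grouped back into one record when saving.

```java
import java.util.*;

// Hypothetical sketch of Variation 1: the store keeps one record with many
// locations, but the API exposes one preference object per location.
class FilterPreferenceStoreSketch {

    // Load: expand one stored record of N locations into N preference
    // objects, so filters and UI can keep assuming "1 preference = 1 location".
    static List<String> expand(String filterName, List<String> locations) {
        List<String> preferences = new ArrayList<>();
        for (String location : locations) {
            preferences.add(filterName + ":" + location);
        }
        return preferences;
    }

    // Save: group single-location preferences that only differ by location
    // back into one record per filter, minimizing the number of xobjects.
    static Map<String, List<String>> group(List<String> preferences) {
        Map<String, List<String>> records = new LinkedHashMap<>();
        for (String preference : preferences) {
            String[] parts = preference.split(":", 2);
            records.computeIfAbsent(parts[0], k -> new ArrayList<>()).add(parts[1]);
        }
        return records;
    }

    public static void main(String[] args) {
        List<String> prefs = expand("ScopeNotificationFilter",
            Arrays.asList("Main.WebHome", "Sandbox.WebHome"));
        System.out.println(prefs.size());        // 2 in-memory preferences...
        System.out.println(group(prefs).size()); // ...but only 1 stored record
    }
}
```
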
=== Variation 2: use custom mapping ===
Probably the easiest solution for reducing the number of SQL queries. The
idea is to have a SQL table for notification filter preferences and bind
the XObjects to that table. It would still use a lot of space in the
document cache but would be more efficient at the database level.
=== Other Problem 1: it still creates page revisions ===
As long as we store the filter preferences with xobjects, we create page
revisions. We could get rid of those by using some internal API to not
create a revision when we save an xobject, but I wonder if that's what
users want. If a user tries to roll back some changes and doesn't see all
the filter preferences the rollback concerns, it's not very transparent.
=== Other Problem 2: Document's cache ===
Sometimes we load a user document just to get the user's avatar, her
name, etc. So we load user documents very frequently, even if the user is
not logged in! Having 700 filters in the document and caching them with
the document even when we don't need them is a big waste of memory.
== B: Implement a completely new store with Hibernate ==
A bit like having a custom mapping. We could create a SQL table and
implement an API to handle it. Then, no xobjects would be involved.
Some drawbacks:
* we need to write a custom cache as well.
* the user cannot modify her preferences using the wiki principles
(xobjects all the way).
== C: Refactor the UI and the ScopeNotificationFilter so they do not assume
1 filter preference = 1 location ==
This option is still possible. It's probably the best one, because creating
1 filter preference object per location is an obvious waste of memory. A
refactoring of the UI is needed anyway, because we currently have no easy
way to remove a bunch of filter preferences (users have to delete the 700
filter preferences manually), so we can kill two birds with one stone.
But again, it requires some work.
= Conclusion =
That's it. All possible solutions require development effort that can
hardly be made before 10.6 (or even 10.7, considering I would probably
be the one implementing it, I'm not full-time on the subject, and I have
holidays soon).
Writing this email helped me see the problem with some perspective. I think
solution C may be the best. But any opinion is welcome (except if you
propose something even more complex than I did :p).
Thanks,
Guillaume
Hi devs,
With the initial support of Ludo, Vincent and Anca, the contributions from Clément and the suggestions from Alex #thanks, the PageRelations application has been released [1]. It makes it easy to create relations between pages (by using pages somehow as tags, except that this way "tags" themselves can be "tagged"), and to expose clusters / facets of relations, i.e. second-level relations highly connected to first-level relations.

There is one remaining issue before releasing a completely functional version of this application, and it relates to the refactoring of pages. Simple refactoring is supported via a listener listening to DocumentCreatedEvent. In the general case, by checking whether the JobStartedEvent has type "refactoring/rename", the listener updates the inverse relations of a renamed page by looking into the "entityReferences" property of the request associated with the current job [2].

However, this kind of trick does not seem to work when the renamed page has children that have relations, and it seems that a real DocumentRenamedEvent with an explicit source / destination would be needed. I understand that a RenameJob exists, but I could not find any DocumentRenamedEvent. Is this something that you think would be needed in this case, or that is already planned on the roadmap? What do you think?
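For context, here is a stripped-down model of the workaround (hypothetical names, not the actual listener code): on each document-created notification, the listener looks at the currently running job and only treats the creation as a rename when that job has type "refactoring/rename" and carries entity references in its request.

```java
import java.util.*;

// Hypothetical sketch of the workaround described above, NOT the real
// listener: a document-created notification is interpreted as a rename
// only when the surrounding job has type "refactoring/rename".
class RenameDetectionSketch {

    static boolean looksLikeRename(String currentJobType, List<String> entityReferences) {
        // The "entityReferences" request property of the running job tells
        // us which pages the refactoring was started on; for a child page
        // of a renamed parent, the child itself is not listed, which is
        // exactly where this heuristic falls short.
        return "refactoring/rename".equals(currentJobType)
            && !entityReferences.isEmpty();
    }

    public static void main(String[] args) {
        // A page created by a rename job is detected:
        System.out.println(looksLikeRename("refactoring/rename",
            Arrays.asList("xwiki:Main.OldPage")));
        // A page created outside any refactoring job is not:
        System.out.println(looksLikeRename(null, Collections.emptyList()));
    }
}
```
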
[1] https://extensions.xwiki.org/xwiki/bin/view/Extension/Page%20Relations%20Application/
[2] https://github.com/xwiki-contrib/application-page-relations/blob/master/app…
Thanks for your help and regards,
Stéphane
--
Stéphane Laurière
XWiki www.xwiki.com
@slauriere
Hi devs,
I've made an overview of our icons usage. You can find it at:
https://design.xwiki.org/xwiki/bin/view/Proposal/Polishing10x/PolishingIcon…
From the icons identified, just 33% of them use Icon Theme variables.
Ideally all our icons should use Icon Theme variables. The problem is that
icon themes are used inline (using the <img> or <span> elements), while we
have lots of places where we use icons from CSS (using background-image),
so changing the usage requires rewriting the functionality.
Anyway, using the overview it's now easier to identify the places that
could make an easy switch.
Thanks,
Caty
While working on https://jira.xwiki.org/browse/XWIKI-14037, I noticed
that the Menu Macro has GLOBAL visibility. This implies that Menu
Translations should have GLOBAL visibility as well. But this will cause
issues if you install the Menu Application on a sub-wiki because there is
no fallback yet from GLOBAL visibility to WIKI in case of translations.
In order to avoid this problem, I propose to restrict the installation of
the Menu Application to the main wiki.
I am +1 for this.
Thanks,
Costi
Hi devs,
We’re having a tough time maintaining the quality of XWiki Standard these days (not enough people).
We used to be able to contain bugs and have as many bugs closed as created over 1600 days. We've now slipped back to 330+ more bugs opened than closed over 1600 days. +116 over 500 days. +96 over 365 days. And +49 over 120 days. BFD days have little impact with that many bugs (since there are only about 1-2 devs participating in XS's BFDs).
In addition, I've measured our test coverage since the end of 2017, and we've lost coverage compared to before, which is really bad (see my other emails). In short, this means that we're hurrying to finish work from the roadmaps without taking enough time to write proper tests. This will have an important impact in the future if we don't react.
Thus I’d like to propose that we spend the XS 10.7 roadmap (1 month in August) on bug fixing and writing more tests. It’ll probably not be enough but it’ll help a lot.
Let me know if someone has a counter-proposal or an issue with this proposal.
Thanks
-Vincent
Hi devs,
During XWiki SAS's hackathon last week, Simon and I worked on implementing test coverage computation for velocity code, and more precisely on measuring the code coverage we get in XWiki XML pages when running our tests.
The rationale is that we know our java test coverage but have no clue about the velocity one. And we have a lot of code in velocity scripts in wiki pages. Thus we need a strategy for this too if we wish to increase our global code quality.
So we have currently developed 2 mojos (xar:instrument and xar:reportCoverage) in the XAR plugin code and created a JIRA issue, see XCOMMONS-1448.
Here’s the proposal I’d like your opinion on:
* Finish working on this to stabilize it and commit/push it
* Apply the same strategy we have with Jacoco for java test coverage, i.e. introduce a new xar:coverageCheck mojo that will fail the build if we get a global TPC under the threshold mentioned in the POM
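As a rough illustration of what such a check would do (names and numbers are hypothetical, not the actual mojo): compute the global TPC from covered vs. total executable lines, and fail the build when it falls below the threshold declared in the POM.

```java
// Hypothetical sketch in the spirit of the proposed xar:coverageCheck
// mojo; the real mojo would read instrumentation data, this only shows
// the TPC computation and the pass/fail decision.
class CoverageCheckSketch {

    // TPC as a percentage of covered executable lines.
    static double tpc(int coveredLines, int totalLines) {
        return totalLines == 0 ? 100.0 : 100.0 * coveredLines / totalLines;
    }

    // Jacoco-style check: fail the build when TPC is under the threshold.
    static boolean passes(int coveredLines, int totalLines, double threshold) {
        return tpc(coveredLines, totalLines) >= threshold;
    }

    public static void main(String[] args) {
        System.out.println(tpc(80, 100));          // 80.0
        System.out.println(passes(80, 100, 75.0)); // true: build passes
        System.out.println(passes(60, 100, 75.0)); // false: build fails
    }
}
```
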
Consequences:
* It will mean that whenever we add new velocity scripts (especially when there are branches such as #if) we will need to improve or add XAR page tests. This can be done in 2 ways:
** by writing/improving a functional UI test
** by writing/improving a XAR unit test
* We will find places that have 0% coverage and these will be good candidates to add tests for
My POV:
* We should have the minimum # of functional UI tests since they take very long to execute. We need them but we shouldn't test the various branches with them IMO. Only one path.
* Instead we should focus on having more XAR unit tests, since they execute fast and are better suited (with mocks) to testing the various branches.
* The XAR unit test framework we have is still pretty new and it's probably not that easy to write unit tests for wiki pages in some cases; we will need to work on that as we discover such cases. I'm happy to help with that.
WDYT?
Personally I’m ok to try it and see what happens.
Thanks
-Vincent