Hello Paul,
I have published the API on GitHub. Here is the link.
I have come up with a basic set of APIs. For now I'm implementing these and
planning to refine them as I progress.
Thanking you,
Savitha.
On Tue, Jun 5, 2012 at 11:55 AM, Paul Libbrecht <paul(a)hoplahup.net> wrote:
Savitha,
this was a few days ago.
Have you published?
paul
On 29 May 2012 at 23:46, savitha sundaramurthy wrote:
Hello Paul,
Thanks for bringing this to my notice at a very early stage. It
helps me think from a broad perspective and consider all the aspects. I'm
going through the XWiki rights model, but I'm not sure if I have grasped
the whole essence of it. I have started to incorporate the changes
suggested by you and Sergiu. The API list is growing big, so I thought I
would move it to GitHub and generate Javadocs for it, as I got access
today. I'm also planning to start on the basic implementation of the
IndexProcess class today.
Thanks a lot,
Savitha.
On Sun, May 27, 2012 at 2:23 AM, Paul Libbrecht <paul(a)hoplahup.net>
wrote:
>
> Dear Savitha,
> Dear XWiki community,
>
> As far as I know, there are two major flaws in the current Lucene plugin:
> - It stores and indexes everything, which makes it a big memory eater.
> This will be fixed by Savitha using Solr's schema.xml and hopefully other
> admin-configured classes.
> - Each search results list has to be skimmed through so that the count
> only covers documents one has access to (this is done in
> SearchResults.java in getRelevantResults). This has the direct consequence
> that a search for all documents basically goes through all documents,
> which is quite annoying.
>
> In general, the practice of going through many documents, one could say
> the practice of pre-processing the search results list, is a catastrophe.
> There are very many times when a user inputs a query that matches far
> too many documents.
> That also means that Savitha should avoid this skimming in her Solr
> module, and this needs some skills and probably some help:
>
> - the skills to understand the rights model completely. As far as I know,
> it is based on XWikiRights objects in each document and can talk about
> users (a list of users) and groups, but this needs to be observed deeply
> and asked about many times.
>
> - the skills to map this model into something that is executable by
> Solr/Lucene queries. In Curriki or i2geo, and in many other specific
> applications, this is much easier because the rights model is simpler
> (an owner is defined, only three rights are possible). But this has to be
> done in a generic way and might include the requirement to reindex a
> large part of the documents if a user joins or leaves a group. I am
> thinking this can be implemented: include fields such as "prohibitedFor",
> "prohibitedForGroup", "allowedFor", "allowedForGroup" and use the current
> user's identity and groups when querying. I note that it is important to
> care for the user that requests the documents when indexing as well
> (which probably needs to be admin).
> Savitha, I think this is the hardest part of your project. Are you up to it?
> paul
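Paul's proposed field-based mapping could be sketched as a filter-query builder. The four field names (allowedFor, allowedForGroup, prohibitedFor, prohibitedForGroup) come from the mail above; the class and method names here are hypothetical, and a real implementation would likely go through SolrJ rather than raw strings:

```java
import java.util.List;
import java.util.stream.Collectors;

public class RightsFilter {

    // Quote a field value for Solr query syntax.
    static String quote(String value) {
        return "\"" + value.replace("\"", "\\\"") + "\"";
    }

    // Build a Solr filter query (fq) restricting results to documents the
    // given user may view: allowed for the user or one of their groups,
    // and not explicitly prohibited for either.
    static String buildRightsFilter(String user, List<String> groups) {
        String allowed = "allowedFor:" + quote(user);
        String groupAllowed = groups.stream()
                .map(g -> "allowedForGroup:" + quote(g))
                .collect(Collectors.joining(" OR "));
        String positive = groupAllowed.isEmpty()
                ? allowed
                : "(" + allowed + " OR " + groupAllowed + ")";
        StringBuilder fq = new StringBuilder(positive);
        fq.append(" AND -prohibitedFor:").append(quote(user));
        for (String g : groups) {
            fq.append(" AND -prohibitedForGroup:").append(quote(g));
        }
        return fq.toString();
    }

    public static void main(String[] args) {
        System.out.println(buildRightsFilter("XWiki.Savitha",
                List.of("XWiki.XWikiAllGroup")));
    }
}
```

Attached as an fq parameter, such a clause would let Solr drop inaccessible documents before counting, instead of skimming the result list afterwards; the reindexing concern remains, since a user joining or leaving a group changes which cached group fields apply.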
_______________________________________________
devs mailing list
devs(a)xwiki.org
http://lists.xwiki.org/mailman/listinfo/devs
--
best regards,
Savitha.s