Hi devs,
I've started an experiment to have colocated functional tests (CFT), which means having the functional tests located where the functional domain sources are located instead of in XE.
For example for the linkchecker module we have the following directories:
xwiki-platform-linkchecker/
|_ xwiki-platform-linkchecker-refresher (JAR)
|_ xwiki-platform-linkchecker-ui (XAR)
|_ xwiki-platform-linkchecker-tests (functional tests)
The rationale for this was:
* Have everything about a functional domain self-contained (source and all tests)
* Making it easy to run only tests for a given functional domain
* Move page objects to the functional domain too
Here are some findings about this experiment:
A - It takes about 30 seconds to generate the ad hoc packaging and start XWiki. This would be done for each module having functional tests, compared to only once if all tests were executed in XE
B - The package mojo created to generate a full packaging is quite nice, and I plan to reuse it in lots of other places in our build (distributions, database, places where we need XWiki configuration files)
C - We will not be able to run platform builds in Maven multithreaded mode, since it would mean that several XWiki instances could be started at the same time on the same port
D- The colocated functional test module
Solutions/ideas:
* One idea to overcome A and C would be to have the following setup:
** Keep functional test modules colocated but have them generate a test JAR
** Still allow running functional tests from the colocated module (this makes it easy to verify no regression was introduced when making changes to a given domain)
** Have functional tests in XE depend on the colocated functional test module JARs and configure Jenkins to run all functional tests from XE only
* Another solution to overcome C is to auto-discover the port to use in our XWiki startup script (and save it in a file so that the stop script can use it).
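The port auto-discovery idea could be sketched as follows in the start script (a rough sketch only; the `xwiki.port` file name and the 8080 starting point are assumptions, not existing script behavior):

```shell
#!/bin/sh
# Sketch of port auto-discovery for an XWiki start script.
# Walks upward from a starting port until one is not reported as listening.
find_free_port() {
  port=$1
  while netstat -an 2>/dev/null | grep -q "[.:]${port}[[:space:]].*LISTEN"; do
    port=$((port + 1))
  done
  echo "$port"
}

XWIKI_PORT=$(find_free_port 8080)
# Persist the chosen port so the stop script can read it back.
echo "$XWIKI_PORT" > xwiki.port
echo "Starting XWiki on port $XWIKI_PORT"
```

The stop script would then read `xwiki.port` to know which instance to shut down, allowing several builds to run concurrently on one machine.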
I think the first proposal is the best one and brings the best of both worlds.
WDYT?
Thanks
-Vincent
On 9 March 2012 at 16:59, "Vincent Massol" <vincent(a)massol.net> wrote:
>
>
> On Mar 2, 2012, at 10:06 AM, Denis Gervalle wrote:
>
> > On Wed, Feb 29, 2012 at 08:19, Vincent Massol <vincent(a)massol.net>
wrote:
> >
> >> Hi,
> >>
> >> On Feb 28, 2012, at 12:17 PM, Thomas Mortagne wrote:
> >>
> >>> Hi devs,
> >>>
> >>> Since I plan to move some stuff from platform to commons I would like
> >>> to know what you think of the history in this case.
> >>>
> >>> Pros including history:
> >>> * can access easily the whole history of a moved file.
> >>
> >
> > This is really an important matter, especially for those joining the
> > project. When you follow XWiki from "outside", and not in a continuous
> > manner, the history is of great value to understand why things are the
> > way they are, and what you may or may not do when moving forward.
>
> The history is not lost. If you do a join (all active repos) you still
> have it.
I do not know what you mean by joining all repos, but I would be surprised
to see an IDE find its way between them. I even wonder how that could be
possible.
>
> >> But sometimes
> >>> changing packages etc. makes too much difference for git to see it's
> >>> actually the same file, so you lose it anyway.
> >>
> >
> > If you simply change the package name, and nothing else, it is really
> > unlikely to happen.
> >
> >
> >>>
> >>> Cons including history:
> >>> * doubles the history, which makes tools like Ohloh report wrong
> >>> information
> >>
> >
> > Sure, the stats will be broken, but does that matter? This is not
> > cheating, just a misfeature in Ohloh, since the commits are identical,
> > something it could notice. IMO, it is up to the statistical tools to
> > improve on that.
>
> Can you tell me how to implement this? Because right now my GitHub tool
> doesn't do that and I don't know how to do it.
If I had to implement it, I would probably use some hashing method to
recognize similar commits, since there is effectively no link between
them. But my main remark is that the statistics are broken, not the way we
use git.
>
> >>> * it's a lot easier to move without history
> >>
> >
> > There should be some tools to improve that point, or we may write one,
> > once and for all. So this is not a real con either.
>
> It's really hard to copy history in Git. It's almost impossible to do it
> right. You have to remember the full history and it's just too hard.
I would be really disappointed to have to conclude that. There are probably
some edge cases, but most of the time there is a clever workaround. You
should talk to Sergiu :-)
>
> >>> WDYT ?
> >>>
> >>> Even if it looked a bit weird to me at first, I'm actually +1 to
> >>> not moving the history in this case.
> >>
> >> +1. FTR I'd be -0, close to -1, to move it. If/when the source
> >> repository is removed for one reason or another, then we might want to
> >> import its history somewhere.
> >>
> >
> > Seems we are really on opposite sides on this one, since I am close to
> > -1 on not moving it.
>
> Sorry but that's the current practice :) It's also the easiest one.
Until we got Git, there was no better way. This does not mean that we
should not improve our practice. By the way, it was not my thread; if
Thomas asked, it means that the current practice is not so settled.
>
> > Statistics are really less valuable IMO; they are of small interest
> > compared to code history, which I have used a lot, especially when I
> > joined the project and followed it sparingly.
>
> I can say exactly the same thing as you said above. It's just a question
> of tools, since the history is not lost. It's still there in our active
> repos.
There is absolutely no link between these histories. It is not only a
question of tools. Moreover, requiring querying all active repositories to
have a proper history completely defeats the purpose of having separate
repositories.
I do not see the comparison with my remark above. Git was made for
versioning, not for statistics; that is not my fault.
>
> >> So the general rule for me is: Copy history when the source repository
> >> is removed/deleted/not used anymore.
>
> How many times have you done this? I believe 0 times, since I don't think
> you'd be so much in favor if you had tried it. I suggest you try it a few
> times on your own projects first :) It's really hard to do it right and
> very time consuming.
When I copied the security component from contrib, I did so. I hope that I
am not alone. And, frankly, it was not so hard compared to the advantage
you get.
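For readers following along, one possible shape of such a history-preserving move is sketched below (editorial illustration only, not the exact procedure used for the security component; it relies on `git filter-branch` and on `git merge --allow-unrelated-histories`, the latter needing git >= 2.9, and all repo and directory names are made up for the demo):

```shell
#!/bin/sh
# Sketch: extract the history of one directory ("module/") from an old repo
# and merge it into a new repo, keeping the original commits.
set -e
export FILTER_BRANCH_SQUELCH_WARNING=1
demo=$(mktemp -d)

# Stand-in for the old repository, with one commit touching module/.
git init -q "$demo/old-repo"
cd "$demo/old-repo"
mkdir module && echo hello > module/a.txt
git add .
git -c user.email=demo@example.org -c user.name=demo commit -qm "add module"

# Keep only the commits touching module/ (its contents move to the repo root).
git filter-branch -f --prune-empty --subdirectory-filter module -- --all >/dev/null 2>&1

# Stand-in for the new repository; fetch and merge the filtered history.
git init -q "$demo/new-repo"
cd "$demo/new-repo"
git -c user.email=demo@example.org -c user.name=demo commit -q --allow-empty -m "init"
git fetch -q "$demo/old-repo" HEAD
git -c user.email=demo@example.org -c user.name=demo \
    merge -q --allow-unrelated-histories -m "import module history" FETCH_HEAD
git log --oneline
```

After the merge, the "add module" commit from the old repository appears in the new repository's log, so IDE history tools can still reach it.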
>
> > You never know what will happen to a repository in the future, so this
> > rule is somewhat a hope about the future, no more. And remembering that
> > we may lose history if we make some change in the old repository is for
> > me like hoping you will remember my birthday ;)
>
> I don't agree with this at all. Again, we're not losing history. If a
> repo is removed then its history is copied; I agree about that.
I would like to know how you do that after the fact.
>
> >>> Eduard was proposing to include in the first commit of the new
> >>> repository the id of the last commit containing the files (basically
> >>> the id of the parent of the commit deleting the files) in the old
> >>> repository so that it's easier to find it. I'm +1 for this.
> >>
> >
> > But you lose all the benefits of IDE tools that bring up the history of
> > a selection automatically and that are really useful.
>
> A huge majority of xwiki's history is already lost to IDEs (when we moved
> from SVN), even though the SVN history was moved. Even Git itself doesn't
> follow the history when you move stuff around. Said differently, it's
> always possible to find the history, but IDEs and "standard" tools don't
> follow it.
They do it far better since we moved to Git, and it is a really valuable
tool. Do you mean that because the history may be broken in a few cases,
we should not try to have it as complete as possible?
>
> > Moreover, if the history is rewritten due to a change in structure
> > later, the hash may be broken.
>
> Not sure I understand this one.
In Git, nothing is fully permanent; that is all I am saying.
>
> You should really measure the cost of what you propose, Denis. It's
> really hard to do.
Prove to me that it costs more than what newcomers pay to enter the
project. Maybe you do not value history so much because, from your own
experience of the project, you have a good knowledge of what happened in
the past. When I dig into some code, I always find the history valuable to
understand why that piece of code is not written the way I would have
expected, and why I should not go that way.
If Thomas concludes it is too hard to be done, and not just some
developer's laziness, I would understand; but I do not agree that it should
not be done just because it breaks statistics or we think it is too hard.
This is why I suggest a tool that does it once and for all. I would be
really disappointed in Git if we had to conclude this.
Thanks,
Denis
>
> Thanks
> -Vincent
>
> > So having a broken history makes the task harder for those who want to
> > participate. A great value compared to the statistics, IMO.
> >
> > --
> > Denis Gervalle
> > SOFTEC sa - CEO
> > eGuilde sarl - CTO
> > _______________________________________________
> > devs mailing list
> > devs(a)xwiki.org
> > http://lists.xwiki.org/mailman/listinfo/devs
>
Hello all,
I would like to use XWiki WebDAV for editing MS Office documents online.
To do this in read-write mode, we must configure the XWiki WebDAV servlet on the ROOT context.
Instead of opening the document from:
http://myserver.test/xwiki/webdav/spaces/test/msoffice/Bonjour.docx
we must open it from:
http://myserver.test/spaces/test/msoffice/Bonjour.docx
This is what I did:
- I am running Tomcat 7 / XWiki Enterprise V2.6
- I renamed the xwiki context to ROOT
- I changed the servlet-mapping like this:
<servlet-mapping>
  <servlet-name>webdav</servlet-name>
  <url-pattern>/*</url-pattern>
</servlet-mapping>
Now I can't even request the document
http://myserver.test/spaces/test/msoffice/Bonjour.docx
I get this response:
HTTP Status 400 - Bad Request
Is there any solution to open MS Office documents in read-write mode through XWiki WebDAV?
Thanks in advance
Could be useful:
http://ocpsoft.com/prettytime/
Idea of usage: for example, we could use it to show the last-modified
dates of documents modified in the past week, e.g.:
"Document created 2 days ago"
It's in the Maven Central repo and it's under the LGPL.
-Vincent
Is there a version of the Groovy Console (
http://extensions.xwiki.org/xwiki/bin/view/Extension/Groovy+Console+Applica…
) that is compatible with XWiki 3.5?
When I try to run it, I see missing icons (referencing old skins, which I
fixed), but now get the following JavaScript errors after attempting to
execute my Groovy code:
Uncaught ReferenceError: CodeMirror is not defined
var editor = CodeMirror.fromTextArea('script', {
And "Uncaught TypeError: Cannot call method 'getCode' of undefined"
var ajx = new Ajax.Request("$scriptDoc.getURL('save')", {
    parameters: {"XWiki.ConsoleScriptClass_${scriptObj.number}_code": editor.getCode(),
                 "ajax": "1"},
    onComplete: function(transport) {
        $('saveLoading').addClassName("hidden");
        $('saveStatus').removeClassName("hidden");
        $('saveStatus').innerHTML = "Saved !";
        var foo = new Effect.Highlight('saveStatus');
        setTimeout(function() {
            $('saveStatus').innerHTML = "";
            $('saveStatus').addClassName("hidden");
        }, 5000);
    }
});
I guess I could keep plugging away and trying to fix this, but was
wondering whether there's a newer version of the Groovy Console
compatible w/ XWiki 3.5 and beyond, or is there some simple fix to the
problems outlined above?
Seems like this has been noted as a problem previously:
http://lists.xwiki.org/pipermail/users/2011-October/020745.html
I attempted installing 'SyntaxHighlighting' (it says it's "CodeMirror
based syntax highlighting ...") and noticed that it now adds a new error
at startup in the Groovy Console:
Uncaught TypeError: Object function
a(f,g){if(g.dumbTabs){g.tabMode="spaces"}else{if(g.normalTab)
This suggests problems w/ this extension as well. I do note that my
existing scripts now have pretty syntax highlighting... so it is
working. Unfortunately, it also forces the wiki-code editor to appear
below the WYSIWYG editor and makes the WYSIWYG area too small. So I
uninstalled it and my WYSIWYG now works as before....
Any way of getting the Groovy Console functionality working? It seems like
a great way to work with cloud-based XWiki installs, such as mine...
Thanks,
-- Niels
http://www.nielsmayer.com
Hello developers,
I seem to see a regression whereby a POST to an /xwiki/bin/upload/ URL no longer uses the xredirect parameter and simply redirects to the page.
Is this a known change?
Was it supposed to work while being undocumented, and can I try to find a fix?
Or should I simply try to adjust the client (tremble tremble)?
thanks in advance
Paul
Hi,
I'd like to add staging to our official release process.
For milestone releases, I propose the staging cycle be for "0 time" (this may be revisited later).
For RCs or finals, we place the release in staging and immediately call a VOTE to publish it. This gives our testing team (everybody!) 72 hours to raise a potential issue.
Why:
#1. After some chat on IRC I decided that it is advantageous to move toward a faster release cycle and begin moving away from milestone releases in favor of staging. This will set the stage for the release method we will need.
#2. Staging is easy: I've modified the release script to include staging, and with the script it is a simple matter of about 5 clicks on Nexus to "login", "close repository", "release repository".
#3. Staging is safe, the RM need not worry about fat fingers breaking the release, all it costs is time.
#4. The release process should be as close to the same as possible for milestone and RC/final releases. This simplifies scripting of the process, decreases the amount the RM must remember and makes every milestone release a rehearsal.
#5. Everybody else is doing it (is that even a reason?!)
Mandatory?
I would rather impress the RM with how easy and helpful staging can be than bind him with rules.
If I had followed the existing process to the letter, I would not have had any experience with staging to begin with.
In the interest of continuous improvement I would like to make this a strong recommendation, not a strict rule.
Here's my +1
Caleb
In order to try to meet a release date, I would like to use the process as documented here:
http://dev.xwiki.org/xwiki/bin/view/Community/ReleasePlans#H4.1RC1
Synopsis:
June 4th:
* send 2 day warning mail, ask if anyone needs to block the release.
* branch stable-4.1.x to allow 4.2 development to continue.
June 6th:
* build staged release.
* send out call for testing.
* do release on jira with release post-dated to the 11th.
June 11th:
* release from staging.
* publish release.
I know this is a tight schedule, but realistically we need 2 days for
warning/stabilization and 2 (working) days for testing.
WDYT?
Caleb
Hi Jerome & Community,
Here is the design page for the Responsive Skin [1].
* I'd like us to start with a phase of "paper" design (I mean with Gimp
or Photoshop or whatever tool, to produce images).
This is available on the design page, or, alternatively: phone [2], tablet
[3], desktop [4].
* I think we should limit the feature set of the skin; not trying to
do everything right away (there are potentially a lot of features to
work on, from livetables to data editors, to applications, etc.)
* For a start, focus should be given to content and navigation, with a
mobile-first approach, expanding up to large-screen desktops.
* I think it's OK to have semantic breakpoints (like "phone",
"tablet", etc.) as long as the skin is actually responsive and adapts
to whatever real estate is available. We should be able to "drag the
corner" of a browser window and have the skin display well at all
sizes.
Agreed.
Also, some current questions (copied and pasted) from another e-mail regarding GSoC:
1. Specific support for non-JavaScript-capable browsers? I feel it is
not necessary, since browsers which cannot support JavaScript will fall
back by themselves. Moreover, such a browser would not be capable of
carrying out the media queries required for responsive design and some
XWiki features (such as live tables) anyway.
2. Is the community OK with trying to use "true (HTML)" drop-downs / forms
in order to fully utilize functions built into phones/tablets [5]?
3. Pressable links: should they be bigger on mobile to help facilitate
touching words, or would it be better to use a "background" to create a
"touch area"? Both are in the phone mock-up [6]: the former is
demonstrated in the quick links, the latter in the "Spaces" section.
[1] http://dev.xwiki.org/xwiki/bin/view/Design/ResponsiveSkin
[2] http://jssolichin.com/public/mobile.jpg
[3] http://jssolichin.com/public/tablet.jpg
[4] http://jssolichin.com/public/desktop.jpg
[5] http://css-tricks.com/convert-menu-to-dropdown/
[6] http://jssolichin.com/public/mobile.jpg