Hi devs,
While debugging the failing Selenium tests, most of which are
flickers (flaky tests), I "discovered" a behaviour that I was not aware of.
It looks like when you ask Selenium to click on an element of the page it
does the following:
* locate the element in the DOM
* compute the bounding rectangle (and thus determine if the element is visible)
* scroll the element into view if needed
* move the mouse in the center of the bounding rectangle
* fire a click event with the mouse coordinates
There are two important things to note:
(1) The click event is not bound to the element. The browser behaves
as if the user clicked at that position (x, y) on the page, on
whatever is displayed on top.
(2) The click command doesn't seem to be atomic (i.e. the element
position can change between the moment its bounding rectangle is
computed and the moment the click event is fired).
This allows for the following to happen:
(i) The floating content/edit menu can silently prevent elements from
being clicked (if the page is scrolled, the element is at the top of the
window and the middle of the element is right beneath the floating
menu).
(ii) Clicking on buttons before the page layout is stable can fail
silently. For instance, clicking on Save & View before the WYSIWYG
edit mode is fully loaded can fail because the position of the button
is not stable (a possible workaround is sketched right after this list).
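One possible mitigation, sketched below, is to wait explicitly until the
element is both clickable and no longer moving before issuing the click.
This is only an illustration of the idea, not what our tests do today; the
class name, timeout and polling values are made up.

// Sketch only: wait until the element is clickable and its position has settled
// before clicking, to reduce the chance of clicking through a moving layout.
import org.openqa.selenium.By;
import org.openqa.selenium.Point;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public final class StableClick
{
    public static void clickWhenStable(WebDriver driver, By locator)
    {
        WebElement element = new WebDriverWait(driver, 10)
            .until(ExpectedConditions.elementToBeClickable(locator));
        // Poll the element's location until it stops changing (layout has settled).
        Point previous = element.getLocation();
        for (int i = 0; i < 20; i++) {
            try {
                Thread.sleep(100);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                break;
            }
            Point current = element.getLocation();
            if (current.equals(previous)) {
                break;
            }
            previous = current;
        }
        element.click();
    }
}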
Moreover, it seems that the mouse position is "remembered" between
page loads and the browser reacts to it. For instance, if you click on
"Back to edit" in preview mode (and don't move the mouse), the mouse
can end up above the edit menu, thus opening it, which in turn prevents
you from clicking on the Link menu from the WYSIWYG editor tool bar
(you end up in Object or Class edit mode...).
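Related to the remembered mouse position, one way to guard against it would
be to explicitly park the mouse over a neutral element right after each page
load, before interacting with the menus. A rough sketch, where the element id
is just a placeholder:

// Sketch only: move the mouse to a neutral element after a page load so that a
// leftover mouse position cannot open a menu by accident ("mainContentArea" is
// a placeholder id).
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.interactions.Actions;

public final class MouseParker
{
    public static void parkMouse(WebDriver driver)
    {
        new Actions(driver).moveToElement(driver.findElement(By.id("mainContentArea"))).perform();
    }
}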
I'm not sure if this is something introduced in a recent version of
Selenium or if it was triggered when I enabled native events.
Thanks,
Marius
Hi,
A couple weeks ago I proposed a technique for allowing servlets to be registered with the component
manager. There was some tentative agreement that the idea had merit so I began researching it.
In the process of trying to solve this problem I think I have stumbled on a workable Actions2.0 model.
The really short version:
@Component
@Named("sayhello")
public class HelloWorldAction implements Action
{
    @Action.Runner
    public void run(ClientPrintWriter cpw)
    {
        cpw.println("Hello World!");
    }
}
The Action router gets a request for localhost:8080/xwiki/sayhello/ and loads your Action from the
ComponentManager. It examines the class, finds your annotated method and looks at its arguments.
Since the method takes a ClientPrintWriter, the router gets the list of all ActionProviders from the
ComponentManager and examines them to find one which provides it. It then uses the same technique to
recursively resolve that provider's requirements. Once all requirements have been satisfied, the
router calls each provider and finally your Action, each time passing the required dependencies
using the Method#invoke() function.
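To make the resolution step a bit more concrete, here is a rough sketch of how the router could
inspect an @Action.Runner method with reflection and invoke it. The class and method names below
are mine, not part of the proposed API, and the dependency lookup is left as a placeholder.

// Rough sketch (my own naming): find the @Action.Runner method on an Action,
// resolve each parameter type to a value via the providers, then invoke it.
import java.lang.reflect.Method;

public class ActionRouterSketch
{
    public void run(Object action) throws Exception
    {
        for (Method method : action.getClass().getMethods()) {
            if (method.isAnnotationPresent(Action.Runner.class)) {
                Class<?>[] parameterTypes = method.getParameterTypes();
                Object[] arguments = new Object[parameterTypes.length];
                for (int i = 0; i < parameterTypes.length; i++) {
                    // Look up an ActionProvider for this type in the ComponentManager
                    // and recursively resolve its own requirements.
                    arguments[i] = resolveDependency(parameterTypes[i]);
                }
                method.invoke(action, arguments);
                return;
            }
        }
    }

    private Object resolveDependency(Class<?> type)
    {
        // Placeholder for the ComponentManager/ActionProvider lookup described above.
        throw new UnsupportedOperationException("sketch only");
    }
}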
Here's an example of an ActionProvider which provides a ClientPrintWriter backed by the servlet
response:
@Component
@Named("ServletPrintWriterProvider")
public class ServletPrintWriterProvider implements ActionProvider
{
    @Action.Runner
    public void run(ServletResponse resp, Callback<ClientPrintWriter> cpwCallback) throws IOException
    {
        // Assuming ClientPrintWriter can wrap the response's output stream.
        cpwCallback.call(new ClientPrintWriter(resp.getOutputStream()));
    }
}
Obviously a provider could be registered which works with Portlet requests, or even command-line
invocation. HelloWorldAction doesn't require much.
Benefits of this design:
1. You only get what you need. Why should I wait for the XWikiContext or the ExecutionContext to be
populated just so that I can serve the user a static piece of JS or a favicon?
2. Legacy code can coexist with modern code. If there is an ActionProvider which provides an
XWikiContext then all legacy code is about 8 lines away from compatibility (see the sketch after
this list). The same is true for code which absolutely needs a ServletRequest and ServletResponse;
it just needs to require them.
3. This API doesn't try to own you. You're not tied down to a context with specific values in it.
If you are requiring a PotatoeContext and it turns out not to be good enough, write a CarrotContext
and begin requiring that; the Action model is still the same.
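To illustrate point 2 above, a legacy-bridging provider could look roughly like this; how the
XWikiContext is actually obtained is a placeholder, not a worked-out implementation.

// Rough sketch of the "about 8 lines" legacy bridge: an ActionProvider that hands
// an XWikiContext to any Action which asks for one. getOrCreateXWikiContext() is a
// placeholder, not real API.
@Component
@Named("XWikiContextProvider")
public class XWikiContextProvider implements ActionProvider
{
    @Action.Runner
    public void run(ServletRequest request, Callback<XWikiContext> callback)
    {
        callback.call(getOrCreateXWikiContext(request));
    }

    private XWikiContext getOrCreateXWikiContext(ServletRequest request)
    {
        // Placeholder for whatever legacy initialization is needed.
        throw new UnsupportedOperationException("sketch only");
    }
}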
Devil's Advocacy:
1. Why does it map implementations by their classes? What if someone wants to register 2 providers
of OutputStream?
* Notice I used ClientPrintWriter rather than PrintWriter in the example. If you write a provider,
it is best practice to extend a class or interface and provide your extension so that anyone who
is using your provider will have to 'import' the class which you provide.
I want to avoid this: Object x = context.get("IHaveNoIdeaWhereThisIsDefined");
We can change this without breaking the API by adding optional "hints" to the @Action.Runner
annotation.
2. Good job cjd, you just rewrote the ComponentManager.
* I spent quite some time wrestling with whether this should be done in the CM. From a pragmatic
PoV, the problem is that you end up needing to create things on a per-request basis, which abuses
the ECM and would hold the initialization (global) lock for a long time. From a design PoV it's
wrong because dependency injection is meant to inject long-lived (often singleton) machinery
objects, whereas this is for request-scoped data objects.
3. Why the silly Callbacks?
* An obvious thought is that an ActionProvider should just have a get() method, like any other
provider, which returns the object in question. The pragmatic issue with that is: suppose you need
to do something expensive like hitting the database to get an object, and you get another object
for free while doing it. Do you throw the other one away, so that when the caller needs it they
call another provider and do the expensive operation all over again? This solution allows a
provider to pull in multiple callbacks and call each one, thus providing multiple objects.
A second, more subtle reason is that ActionProviders are allowed to return their provisions
asynchronously, which means we could implement some very exciting optimizations at the storage
level. We don't have to go that route but I don't want to close the door on it (see the Callback
sketch after this list).
4. Nobody has ever done this before
* That's why it's going to work.
Actually this design draws heavily on Asynchronous Module Definition which is explained here:
http://requirejs.org/docs/whyamd.html
5. Magic! Arrest this sorcerer!
* It's valid to call this magic. It's also valid to call any kind of dependency injection magic.
@Inject private InterfaceWhichDoesNotExplainMuch youWillNeverFigureOutWhereTheImplementationIs;
is as bad as this, or worse. One way we can minimize the magic and keep the benefits is to make
our ActionProviders provide custom classes which are defined close to the ActionProvider.
ViewingUser or CurrentDocumentAuthorUser are much more self-explanatory than injecting "User",
even if both classes extend User.
6. ...
* Help me try to break this design.
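As a point of reference for the Callback discussion in point 3, the Callback type used in the
examples above could be as small as the sketch below; its exact shape is of course still up for
debate.

// Sketch of the Callback type used in the examples: one generic method, so a
// provider can call it whenever the value is ready, possibly asynchronously.
public interface Callback<T>
{
    void call(T provision);
}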
Where's the code:
Coming soon, I have most of a PoC hacked together but it currently has a slightly different API
which I scrapped in favor of the one defined here. I just want to get discussion going as early
as possible.
Prove me wrong
Caleb
Hi devs,
I've started an experiment to have colocated functional tests (CFT), which means having the functional tests live alongside the functional domain sources instead of in XE.
For example for the linkchecker module we have the following directories:
xwiki-platform-linkchecker/
|_ xwiki-platform-linkchecker-refresher (JAR)
|_ xwiki-platform-linkchecker-ui (XAR)
|_ xwiki-platform-linkchecker-tests (functional tests)
The rationale for this was:
* Have everything about a functional domain self-contained (source and all tests)
* Making it easy to run only tests for a given functional domain
* Move page objects to the functional domain too
Here are some findings about this experiment:
A - It takes about 30 seconds to generate the ad hoc packaging and start XWiki. This would be done for each module having functional tests, compared to only once if all tests were executed in XE
B - The package mojo created to generate a full packaging is quite nice and I plan to reuse it in lots of other places in our build (distributions, database, places where we need XWiki configuration files)
C - We will not be able to run platform builds in Maven multithreaded mode since it would mean that several XWiki instances could be started at the same time on the same port
D - The colocated functional test module
Solutions/ideas:
* One idea to overcome A and C would be to have the following setup:
** Keep functional test modules colocated but have them generate a test JAR
** Still allow running functional tests from the colocated module (this makes it easy to verify no regression was introduced when making changes to a given domain)
** Have functional tests in XE depend on the colocated functional test module JARs and configure Jenkins to run all functional tests from XE only
* Another solution to overcome C is to auto-discover the port to use in our XWiki startup script and save it in a file so that the stop script can use it (a sketch of the port discovery idea follows below).
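To illustrate the port auto-discovery idea (in practice this would most likely be done by the startup shell script, so the Java below is only a sketch of the principle): binding a server socket to port 0 lets the OS pick a free port.

// Sketch only: ask the OS for a free port by binding to port 0, then release it so
// the XWiki instance can be started on that port and the number written to a file.
import java.io.IOException;
import java.net.ServerSocket;

public final class FreePortFinder
{
    public static int findFreePort() throws IOException
    {
        ServerSocket socket = new ServerSocket(0);
        try {
            return socket.getLocalPort();
        } finally {
            socket.close();
        }
    }
}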
I think the first proposal is the best one and brings the best of both worlds.
WDYT?
Thanks
-Vincent
Hi devs,
Following the discussion on http://markmail.org/message/uck6w56gqus2mxsw I
would like to extract the query plugin from xwiki-platform-legacy-oldcore
and move it to the retired repository.
The good thing is that we will get rid of 3 JARs in standard XE by doing
this.
I plan to do it in 4.4M1.
WDYT ?
Here is my +1
--
Thomas Mortagne
Hello,
I'm working on the JIRA issue "Automatically register translations
for the Bulletin Board application", and so I would like to be able to
access the Bulletin Board application. Could you let me access it from my
GitHub account? My GitHub username is "tDelafosse".
Cheers,
Thomas
Hi devs,
I had to modify the Selenium 1 tests to run on WebDriver because otherwise
the WYSIWYG tests were not running on Firefox versions newer than 11, and
this was holding back the ui-tests (Selenium 2 / WebDriver). Right now all
CI agents are using Firefox 11, which is a bit old. We could configure CI
to run Selenium 1 and 2 tests on different browsers, but I'd rather use
the same browser.
This is the first step of migrating all Selenium 1 tests to WebDriver.
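For reference, one common way to run Selenium 1 style tests on top of WebDriver is through
WebDriverBackedSelenium, roughly as sketched below. I'm not claiming this is exactly what the
commits do (see the links below for the actual changes); the base URL and locators here are
placeholders.

// Sketch of the general WebDriverBackedSelenium approach: the old Selenium 1 API
// keeps working while WebDriver drives the browser underneath. URL and locator are
// placeholders.
import com.thoughtworks.selenium.Selenium;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebDriverBackedSelenium;
import org.openqa.selenium.firefox.FirefoxDriver;

public class Selenium1OnWebDriverSketch
{
    public static void main(String[] args)
    {
        WebDriver driver = new FirefoxDriver();
        Selenium selenium = new WebDriverBackedSelenium(driver, "http://localhost:8080/xwiki/");
        selenium.open("/xwiki/bin/view/Main/WebHome");
        selenium.click("link=Wiki");
        driver.quit();
    }
}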
I'd like to merge my commits into the stable-4.3.x branch because it
doesn't affect the stability of the end product (XE) and it eases the
configuration of the CI.
See http://jira.xwiki.org/browse/XE-1252 and
https://github.com/xwiki/xwiki-enterprise/commit/811cd70797141d93f062b73907…
Here's my +1
Thanks,
Marius
I want to write a Comparator so that I can sort the objects retrieved by
$xwiki.getObjects("Class")
If I had the Comparator, I'd be able to pass it to a collection sort method
and sort the objects.
I'm planning to write that Comparator in a Groovy snippet.
I may even be able to make a Component for that.
Is it possible?
If it is, could you give me some hints?
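For what it's worth, a minimal sketch of such a Comparator, assuming the retrieved objects are
com.xpn.xwiki.api.Object instances and that comparing them by the string value of a given property
is what is wanted (the property name is a placeholder, and missing/null properties are not handled
for brevity):

// Sketch only: compare XWiki objects by the string value of one of their properties.
import java.util.Comparator;

public class ObjectPropertyComparator implements Comparator<com.xpn.xwiki.api.Object>
{
    private final String propertyName;

    public ObjectPropertyComparator(String propertyName)
    {
        this.propertyName = propertyName;
    }

    public int compare(com.xpn.xwiki.api.Object left, com.xpn.xwiki.api.Object right)
    {
        String leftValue = String.valueOf(left.getProperty(propertyName).getValue());
        String rightValue = String.valueOf(right.getProperty(propertyName).getValue());
        return leftValue.compareTo(rightValue);
    }
}

It could then be passed to Collections.sort() (from Java or the Groovy snippet) together with the
list returned by $xwiki.getObjects("Class").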