Hi Sorin,
On Jul 3, 2012, at 10:46 AM, Sorin Burjan wrote:
Hello XWiki Community,
I would like to propose a change in the visibility of manual testing.
The Problem:
As you know, we have a manual test report found here:
http://www.xwiki.org/xwiki/bin/view/TestReports/ManualTestReportTemplate
This report is being copied for each major, minor and intermediary
version. Performing a full manual test takes approximately 4-5 days
on one browser. Since we support 4 browsers and 4 databases, it would
take a very large amount of time to test everything before every
release, especially since I'm the only one following this manual test
report. This is why, most of the time, I perform only partial testing
on some browsers.
Please note that ATM we stage the artefacts for only one day before
we release them. Currently our Manual Test Report is a huge page with
a lot of tables. This makes it a little bit hard to test because for
each test case I have to manually add Passed, Failed or Not Tested.
At the moment there is not enough visibility because we only list the
browsers we support, but we don't give exact information on which
test case has been run on which browser. I'd like to encourage more
users from the community to participate in testing XWiki, and ATM
this might be hard because everything is on a single monolithic page.
Yes, the current MTR is not that useful because:
* It says passed, but I've realized that just because it says passed
doesn't mean it has actually been tested
* There's no way to know which tests have been run on which environment
* The MTR is not updated regularly based on new features/improvements added to XE
* Several tests in the MTR are covered by automated tests and do not need to be manual
* Several domains not covered by automated tests are not covered by
manual tests either
In order to make testing as visible as possible, I have the
following proposal.
The Proposal:
Create a dedicated subwiki for QA (maybe qa.xwiki.org), which in the
future could grow to host other QA-related things. For the moment, I
would like to break the current manual test report into several pages
and several spaces.
Each individual test would be a wiki page, which will have objects
attached to it. The space names will be the features, like WYSIWYG,
Administration Application, etc.
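To make this concrete (the names below are made up), an individual
test page could be referenced like this from Java code, assuming the
subwiki is called "qa":

import org.xwiki.model.reference.DocumentReference;

public class TestPageLocationExample
{
    public static void main(String[] args)
    {
        // Hypothetical example: one wiki page per manual test, stored
        // in the "qa" subwiki, in a space named after the feature it
        // belongs to.
        DocumentReference testPage =
            new DocumentReference("qa", "WYSIWYG", "ImageUploadAfterPreview");
        System.out.println(testPage);
    }
}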
Also, if this proposal gets voted, I'd like to add a new field in the
future so we can mark tests that are already automated. This would
again greatly ease my work, since I would know what is already tested
automatically, thus reducing the work needed for manual testing.
After brainstorming this idea with Marius and Silvia, we have agreed
on the following model:
* the MTR will be organized in 2 levels:
- spaces that will contain feature/testcase suites with pages that
hold the individual tests
- pages that will contain individual testcases. Each individual test
will also have one or more tags attached with the features it
belongs to
* the following classes to model this:
QA.TestClass (attached to each page containing a test)
description (TextArea) - tells what the test is about, e.g. "Tests
that images are uploaded fine after a preview."
Note that the test name will be the wiki page name and the steps to
perform the test will be stored in the document content
QA.ExecutionClass (this will contain all information about a test execution)
platform (DBList) - linked to QA.PlatformClass
browser (DBList) - linked to QA.BrowserClass
passed (Boolean) - value to indicate if the test has passed or
failed. Absence of an object for a specific platform/browser means not
tested
jira (String) - link to JIRA issue (if any)
tester (User) - user that performed the test
date (Date) - date when test was executed
QA.PlatformClass (this will contain the XE versions that have been
tested or are currently being tested)
name (String) XE
version (String) 4.1.2
QA.BrowserClass (this will contain the browser versions that we support)
name (String) Mozilla Firefox
version (String) 13
In the same manner we can have this for databases in the future.
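To help picture the model, here is a rough Java sketch (using the old
core XWiki API) of what recording one execution could look like. Only
the proposed class and field names (QA.ExecutionClass, platform,
browser, passed, jira, tester, date) come from the model above; the
surrounding API usage is an assumption, and the platform/browser
DBList fields are simplified to plain strings here:

import java.util.Date;

import org.xwiki.model.reference.DocumentReference;

import com.xpn.xwiki.XWikiContext;
import com.xpn.xwiki.doc.XWikiDocument;
import com.xpn.xwiki.objects.BaseObject;

public class RecordExecutionExample
{
    // Hypothetical sketch: record one execution of a manual test by
    // adding a QA.ExecutionClass object to the test's wiki page.
    public void recordExecution(XWikiContext context, DocumentReference testPage,
        String platform, String browser, boolean passed, String jiraIssue)
        throws Exception
    {
        XWikiDocument doc = context.getWiki().getDocument(testPage, context);

        // One QA.ExecutionClass object per (platform, browser) execution;
        // no object at all means "not tested" on that combination.
        DocumentReference executionClass = new DocumentReference(
            testPage.getWikiReference().getName(), "QA", "ExecutionClass");
        BaseObject execution = doc.newXObject(executionClass, context);

        execution.setStringValue("platform", platform);         // e.g. "XE 4.1.2"
        execution.setStringValue("browser", browser);           // e.g. "Mozilla Firefox 13"
        execution.setIntValue("passed", passed ? 1 : 0);        // Boolean stored as 0/1
        execution.setStringValue("jira", jiraIssue);            // JIRA issue link, if any
        execution.setStringValue("tester", context.getUser());  // who ran the test
        execution.setDateValue("date", new Date());             // when it was run

        context.getWiki().saveDocument(doc, "Recorded a manual test execution", context);
    }
}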
Final result:
- If this is implemented there will be a page with a (custom)
livetable that contains all existing tests. There will be filtering
options per feature, for example.
- There will also be a tag cloud above the livetable (similar to
what we have on extensions.xwiki.org)
- Marking a test as executed would require 3 clicks maximum (AJAX)
right from the LiveTable (Similar to Document/Space Index with
Copy/Delete/Rename/Rights), thus improving the speed of manual testing
(the "fill in what you tested" part).
- Going to the page of an individual test will show all XE versions
and browsers on which it has been tested
- Easier for everybody to see what has been tested and what has not
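As a rough illustration of the "see what has been tested" part, the
custom livetable (or a status page) could be fed by a query along
these lines. The QA.ExecutionClass field names are the ones proposed
above; everything else (including treating platform/browser as plain
strings) is an assumption:

import java.util.List;

import org.xwiki.query.Query;
import org.xwiki.query.QueryManager;

public class ListTestedPagesExample
{
    // Hypothetical sketch: list the pages of all tests that have at
    // least one recorded execution for a given platform and browser.
    public List<String> getTestedPages(QueryManager queryManager,
        String platform, String browser) throws Exception
    {
        Query query = queryManager.createQuery(
            "select doc.fullName from Document doc, "
                + "doc.object(QA.ExecutionClass) as execution "
                + "where execution.platform = :platform "
                + "and execution.browser = :browser",
            Query.XWQL);
        query.bindValue("platform", platform);
        query.bindValue("browser", browser);
        return query.execute();
    }
}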
In the future, there will also be a proposal/brainstorming about how
to link this with the automated tests. I think we could be a little
bit more visible about this too. Currently, there is no visible link
between the manual tests and the automated tests. It would be nice if
everybody could see for which manual tests we have automated ones.
I am waiting for your feedback since I am eager to start working on this.
This is a very good idea.
+1 in general for having a testing application (which we should make as generic as
possible and publish on e.x.o).
+1 too to let contributors create tests and/or test results in that testing application.
Thanks
-Vincent