(12:44:45 PM) *Vincent Massol:* seems like have some css issue on
myxwiki.org on this page when not logged in:
http://myxwiki.org/xwiki/bin/view/Main/WebHome
(12:44:57 PM) *Vincent Massol:* (the vertical scrollbar)
(12:45:28 PM) *Vincent Massol:* same thing when logged in
(12:45:37 PM) *Vincent Massol:* that looks new
(12:46:06 PM) *Vincent Massol:* any css/skin guru here who could have a look
(whenever you have time)?
(12:46:21 PM) *Vincent Massol:* it might be a pb in our skin
(12:46:30 PM) *Vincent Massol:* in which case we'd need to fix it
In my own app, this problem appeared when upgrading to XE 2.1's colibri
skin.
I override the behavior in a local CSS file:
#xwikicontent {
    /* Override the new XWiki 2.1 setting from
       /xwiki/bin/skin/skins/colibri/colibri.css which causes "double scrollbars" */
    overflow: hidden;
}
The problematic statement begins on line 363 of
/xwiki/bin/skin/skins/colibri/colibri.css
<http://svn.xwiki.org/svnroot/xwiki/platform/skins/trunk/colibri/src/main/re…>
#xwikicontent {
    overflow: auto;
    width: 100%;
}
Niels
http://nielsmayer.com
Hi,
Just to let you know, I'm breaking an existing behavior which I consider incorrect:
* Right now in XWikiDocument, if you call one of the get*Object*() methods and you pass a class name without a space specified, it'll use the hardcoded "XWiki" space.
I'm changing this to use the current wiki instead, thus following our current strategy.
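A minimal sketch of the difference (doc is an XWikiDocument; the class name is made up):

// Before: an unqualified class name gets the hardcoded "XWiki" space
BaseObject obj = doc.getObject("MyClass"); // treated as "XWiki.MyClass"
// After: "MyClass" is resolved against the current wiki instead,
// following our current resolving strategy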
Please shout if you think this is a problem.
Thanks
-Vincent
http://lucene.apache.org/mahout/
Mahout's goal is to build scalable machine learning libraries. By scalable we
mean:
- Scalable to reasonably large data sets. Our core algorithms for clustering,
classification and batch-based collaborative filtering are implemented on top
of Apache Hadoop using the map/reduce paradigm. However, we do not restrict
contributions to Hadoop-based implementations: contributions that run on a
single node or on a non-Hadoop cluster are welcome as well. The core
libraries are highly optimized to allow for good performance for
non-distributed algorithms as well.
http://www.manning.com/owen/
Mahout is a machine learning library. The algorithms it implements fall under
the broad umbrella of “machine learning,” or “collective intelligence.” This
can mean many things, but at the moment for Mahout it means primarily
recommender engines, clustering, and classification.

It is scalable. It attempts to provide implementations that use modern
frameworks for splitting huge computations efficiently across many machines.
Mahout aims to be the machine learning tool of choice when the data to be
processed is far too big for a single machine. In its current incarnation,
these scalable implementations are written in Java and built upon Apache's
Hadoop project.

It is a Java library. It does not provide a user interface, a pre-packaged
server, or an installer. It is a framework of tools intended to be used and
adapted by developers. Mahout can be deployed to solve problems if you are
developing modern, intelligent applications or if you are leading a product
team or startup that will leverage machine learning to create a competitive
advantage.

If you are a researcher in artificial intelligence, machine learning and
related areas, your biggest obstacle is probably translating new algorithms
into practice. Mahout provides a fertile framework for testing and deploying
new large-scale algorithms.
...
some example usage:
...
Recommender Engines
Recommender engines are perhaps the most immediately recognizable machine
learning technique in use today. We've all seen services or sites that
attempt to recommend books or movies or articles based on our past actions.
They try to infer tastes and preferences and identify unknown items that are
of interest:
• Amazon.com is perhaps the most famous commerce site to deploy
recommendations. Based on purchases and site activity, Amazon recommends
books and other items likely to be of interest. See figure 1.1.
• Netflix similarly recommends DVDs that may be of interest, and famously
offered a $1,000,000 prize to researchers that could improve the quality of
their recommendations.
• Social networking sites like Facebook use variants on recommender
techniques to identify people most likely to be an as-yet-unconnected friend.
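To give a feel for what this looks like in code, here is a minimal user-based
recommender on top of Mahout's Taste API (a sketch: the CSV file name is made
up, and the file is assumed to hold "userID,itemID,preference" lines):

import java.io.File;
import java.util.List;

import org.apache.mahout.cf.taste.impl.model.file.FileDataModel;
import org.apache.mahout.cf.taste.impl.neighborhood.NearestNUserNeighborhood;
import org.apache.mahout.cf.taste.impl.recommender.GenericUserBasedRecommender;
import org.apache.mahout.cf.taste.impl.similarity.PearsonCorrelationSimilarity;
import org.apache.mahout.cf.taste.model.DataModel;
import org.apache.mahout.cf.taste.neighborhood.UserNeighborhood;
import org.apache.mahout.cf.taste.recommender.RecommendedItem;
import org.apache.mahout.cf.taste.recommender.Recommender;
import org.apache.mahout.cf.taste.similarity.UserSimilarity;

public class RecommenderIntro
{
    public static void main(String[] args) throws Exception
    {
        // Load "userID,itemID,preference" triples (file name is an assumption)
        DataModel model = new FileDataModel(new File("intro.csv"));
        UserSimilarity similarity = new PearsonCorrelationSimilarity(model);
        // Consider the 2 users most similar to the target user
        UserNeighborhood neighborhood = new NearestNUserNeighborhood(2, similarity, model);
        Recommender recommender = new GenericUserBasedRecommender(model, neighborhood, similarity);
        // Ask for 1 recommendation for user 1
        List<RecommendedItem> recommendations = recommender.recommend(1, 1);
        for (RecommendedItem recommendation : recommendations) {
            System.out.println(recommendation);
        }
    }
}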
...
Clustering
Clustering turns up in less obvious but equally well-known contexts. As its
name implies, clustering techniques attempt to group a large number of things
together into clusters that share some similarity. It is a way to discover
hierarchy and order in a large or hard-to-understand data set, and in that
way reveal interesting patterns or make the data set easier to comprehend.
• Google News groups news articles according to their topic using clustering
techniques in order to present news grouped by logical story, rather than a
raw listing of all articles. Figure 1.2 below illustrates this.
• Search engines like Clusty group search results for similar reasons.
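To make the clustering idea concrete, here is a tiny self-contained k-means
sketch in plain Java (this is not Mahout's implementation, just an
illustration of grouping points around centroids):

import java.util.Arrays;

public class TinyKMeans
{
    public static void main(String[] args)
    {
        double[] points = {1.0, 1.2, 0.8, 9.0, 9.5, 8.7};
        double[] centroids = {points[0], points[3]}; // naive initialization
        int[] assignment = new int[points.length];

        // Alternate between assigning each point to its nearest centroid
        // and moving each centroid to the mean of its assigned points
        for (int iter = 0; iter < 10; iter++) {
            for (int i = 0; i < points.length; i++) {
                assignment[i] = Math.abs(points[i] - centroids[0])
                    <= Math.abs(points[i] - centroids[1]) ? 0 : 1;
            }
            for (int c = 0; c < 2; c++) {
                double sum = 0;
                int count = 0;
                for (int i = 0; i < points.length; i++) {
                    if (assignment[i] == c) {
                        sum += points[i];
                        count++;
                    }
                }
                if (count > 0) {
                    centroids[c] = sum / count;
                }
            }
        }
        System.out.println("centroids: " + Arrays.toString(centroids));
        System.out.println("assignments: " + Arrays.toString(assignment));
    }
}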
...
Classification
Classification techniques decide how much a thing is or isn't part of some
type or category, or does or doesn't have some attribute. Classification is
likewise ubiquitous, though even more behind-the-scenes. Often these systems
“learn” by reviewing many instances of items of the categories in question
in order to deduce classification rules. This general idea finds many
applications:
• Yahoo! Mail decides whether incoming messages are spam or not, based on
prior emails and spam reports from users, as well as characteristics of the
e-mail itself. A few messages classified as spam are shown in figure 1.3.
• Picasa (http://picasa.google.com/) and other photo management applications
can decide when a region of an image contains a human face.
• Optical character recognition software classifies small regions of scanned
text as individual characters.
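In the same spirit, a toy classifier that "learns" word counts from labeled
messages and then labels a new one (nothing like a production spam filter,
just the learn-rules-then-label shape described above; all data is made up):

import java.util.HashMap;
import java.util.Map;

public class TinyClassifier
{
    public static void main(String[] args)
    {
        // Training data: messages labeled "spam" or "ham"
        String[][] training = {
            {"cheap pills buy now", "spam"},
            {"meeting agenda for tomorrow", "ham"},
            {"buy cheap watches", "spam"},
            {"lunch tomorrow?", "ham"},
        };

        // Count how often each word appears under each label
        Map<String, int[]> counts = new HashMap<String, int[]>(); // word -> {spam, ham}
        for (String[] example : training) {
            int label = example[1].equals("spam") ? 0 : 1;
            for (String word : example[0].split("\\s+")) {
                int[] c = counts.get(word);
                if (c == null) {
                    c = new int[2];
                    counts.put(word, c);
                }
                c[label]++;
            }
        }

        // Label a new message by summing the per-word evidence
        String message = "buy cheap pills";
        int spam = 0, ham = 0;
        for (String word : message.split("\\s+")) {
            int[] c = counts.get(word);
            if (c != null) {
                spam += c[0];
                ham += c[1];
            }
        }
        System.out.println(spam > ham ? "spam" : "ham");
    }
}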
Niels
http://nielsmayer.com
Hello developers,
In the course of rewriting the import UI (XWIKI-4692), I've also
integrated a modified version of Ludovic's patch for XWIKI-831 "Import
wizard should conserve version history".
It's not a huge refactoring, but since I'm deprecating APIs and
introducing new ones, I'd like to be sure it's ok with you before I
commit.
The goal was not to rewrite the whole plugin (which would be needed,
though), but rather to fix major bugs and to make the "add a version to
existing document" option possible upon import.
As I wrote on JIRA, here's basically what the patch does:
* It deprecates the notions of backupPack, preserveVersion and
isWithVersions in the installer (or "PackageAPI")
* It introduces a notion of HistoryStrategy (an enum with 3 options: ADD,
REPLACE and RESET; sketched below) that replaces the former preserveVersion
and isWithVersions, whose meaning was defined nowhere and was not very clear
(preserveVersion came from the package.xml file, and did not mean much
except "there are history revisions included in this package")
* It introduces a notion of importAsBackup to replace isBackupPack. This
means that it's no longer the package.xml file that decides if the package
should be imported as a backup or not; it's up to "the one that imports"
(that is, to the importer UI, for example, or to a consumer of the
packaging plugin API)
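For reference, here is a sketch of what the enum could look like (constant
names are taken from the compatibility code below; the javadoc is my reading
of the intended semantics, not final):

public enum HistoryStrategy
{
    /** Add the imported document as a new version on top of the existing history. */
    ADD_VERSION,
    /** Replace the existing history with the revisions shipped in the package. */
    REPLACE,
    /** Drop any shipped revisions and restart the document history from scratch. */
    RESET
}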
I keep compatibility with the "old" notions of preserveVersion and
isWithVersions. Actually, the compatibility code is a good starting point
for understanding the change:
if (this.historyStrategy == null) {
    // Compatibility code to handle consumers of this plugin that do not yet
    // use the historyStrategy (introduced in 2.2M1).
    // We set the historyStrategy here based on the former parameters (now
    // deprecated).
    if (this.withVersions && this.preserveVersion) {
        this.historyStrategy = HistoryStrategy.REPLACE;
    } else if (this.preserveVersion) {
        this.historyStrategy = HistoryStrategy.RESET;
    } else {
        this.historyStrategy = HistoryStrategy.ADD_VERSION;
    }
}
I made the whole patch available on JIRA.
Here is my +1 to commit it.
Please let me know what you think. If you can give me feedback tonight or
tomorrow, that would be great, as I'd like to commit the importer for
2.2M1. Sorry for the short notice, I should have sent this a bit earlier.
Thanks,
Jerome.
Hi devs,
I'd like to continue working in this area, but I want to be sure we all agree about it. For example, this means adding a lot of new APIs in XWikiStoreInterface to return List<DocumentReference> instead of List<String>, etc.
I believe it's fine with everyone but since this is going to cause a lot of changes I thought I should ask to ensure we are all aware of it.
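Concretely, this is the kind of signature pair it implies in
XWikiStoreInterface (the new method name here is illustrative, not a final
proposal):

// Existing method, to be deprecated
@Deprecated
List<String> searchDocumentsNames(String wheresql, XWikiContext context) throws XWikiException;

// New typed variant returning references instead of raw strings
List<DocumentReference> searchDocumentReferences(String wheresql, XWikiContext context) throws XWikiException;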
+1 from me.
Thanks
-vincent
PS: I say non velocity API since we haven't decided yet what we do on the velocity side. That'll be another email.
Hi,
This is the current interface:
public interface EntityReferenceFactory<T>
{
    /**
     * @param entityReferenceRepresentation the representation of an entity reference (e.g. as a String)
     * @param type the type of the Entity (Document, Space, Attachment, Wiki, etc.) to extract from the source
     * @return the resolved reference as an Object
     */
    EntityReference createEntityReference(T entityReferenceRepresentation, EntityType type);
}
Now we have 2 different implementations:
- one for which T = String
- one for which T = EntityReference (our normalizer)
In terms of usage this means:
EntityReference ref = factory.createEntityReference("wiki:space.page", EntityType.DOCUMENT);
EntityReference ref = factory.createEntityReference(documentReference, EntityType.DOCUMENT);
The last example is used to normalize the passed reference, converting it into the type specified by the second parameter and filling in the blanks.
I feel that Factory is no longer an appropriate name, especially for the second use case. WDYT?
IMO a better name would be Resolver, Normalizer, or Converter. Any other, better name? (I haven't suggested Parser since I don't believe it's correct.)
Examples:
EntityReference ref = resolver.resolve("wiki:space.page", EntityType.DOCUMENT);
EntityReference ref = resolver.resolve(documentReference, EntityType.DOCUMENT);
EntityReference ref = normalizer.normalize("wiki:space.page", EntityType.DOCUMENT);
EntityReference ref = normalizer.normalize(documentReference, EntityType.DOCUMENT);
EntityReference ref = converter.convert("wiki:space.page", EntityType.DOCUMENT);
EntityReference ref = converter.convert(documentReference, EntityType.DOCUMENT);
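With the Resolver naming, for example, the interface itself would become
something like this (a sketch, not a final signature):

public interface EntityReferenceResolver<T>
{
    /**
     * @param entityReferenceRepresentation the representation of an entity reference (e.g. as a String)
     * @param type the type of the Entity (Document, Space, Attachment, Wiki, etc.) to resolve from the source
     * @return the resolved reference
     */
    EntityReference resolve(T entityReferenceRepresentation, EntityType type);
}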
It's quite a lot of work to change what I have put in place, but since this is an important API we need to be sure of what we want; we won't be able to change it later on.
I'm +1 for Resolver.
WDYT?
Thanks
-Vincent
Hi,
Is it possible to write XWiki components that create new pages without
having as a dependency the old xwiki core?
Let's say the component currently has the following code snippet:
XWiki xwiki = xwikiContext.getWiki();
XWikiDocument newDocument = new XWikiDocument();
newDocument.setFullName(documentName, xwikiContext);
// templateDoc is assumed to have been loaded earlier
newDocument.setContent(templateDoc.getContent());
newDocument.setSyntaxId("xwiki/2.0");
newDocument.setCreator(xwikiContext.getLocalUser());
newDocument.setAuthor(xwikiContext.getLocalUser());
newDocument.setTitle(someString);
xwiki.saveDocument(newDocument, xwikiContext);
How should the code be written so it will be easier to change when the code
refactoring comes to an end?
Thanks,
Anamaria
Hi Everyone,
I would need a time tracking application. The objective is to have an
application to log how much time is spent on project X by person Y.
Does that kind of application already exist, or even a draft that could be
completed?
Thanks
--
Thibaut Camberlin
Hi,
I"ve just committed a huge change to introduce the notion of Entity References in xwiki-model.
Please all have a look and ensure you like the API in there since it's going to be very hard to change that after 2.2 is released. Basically every single class in XWiki will use xwiki-model in some not too long future so making a bug change there will be very hard in the future.
Also, I'm pretty sure my changes have introduced regression in document name handling. Please help me test the application.
Thanks
-Vincent
Hi devs,
Taking advantage of the less noisy holiday period, Flavius,
Marta and I have developed a new wizard for editing color themes,
one that is much more "WYSIWYG". It still needs work, but the current
version is already very usable, so it should go through a UI review.
To test it, just try to edit any color theme on the incubator:
http://incubator.myxwiki.org/xwiki/bin/view/ColorThemes/
Comments and suggestions welcome.
--
Sergiu Dumitriu
http://purl.org/net/sergiu/