Because the storage of large attachments is limited by database constraints and by the fact that
JDBC does not allow us to stream content out of the database, I propose we add a new database table,
binarychunk.
The mapping will read as follows:
<class name="com.xpn.xwiki.store.hibernate.HibernateBinaryStore$BinaryChunk" table="binarychunk">
<composite-id unsaved-value="undefined">
<key-property name="id" column="id" type="integer" />
<key-property name="chunkNumber" column="chunknumber" type="integer" />
</composite-id>
<property name="content" type="binary">
<column name="content" length="983040" not-null="true"/>
</property>
</class>
Notice the maximum length (983040 bytes) is a number which is divisible by many common buffer sizes
and is slightly less than the default max_allowed_packet in MySQL, which means that using the
binarychunk table, we could store attachments of arbitrary size without hitting MySQL's default limits.
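For illustration, here is roughly how a save could split the incoming stream into binarychunk rows (a sketch only; BinaryChunk, the Hibernate Session usage and the helper method are illustrative, not part of this proposal):

// Sketch: split an InputStream into binarychunk rows of at most CHUNK_SIZE
// bytes (java.util.Arrays and org.hibernate.Session assumed imported).
private static final int CHUNK_SIZE = 983040;

private void storeChunks(int id, InputStream content, Session session) throws IOException
{
    byte[] buffer = new byte[CHUNK_SIZE];
    int chunkNumber = 0;
    int read;
    while ((read = fill(content, buffer)) > 0) {
        // Copy only the bytes actually read so the last chunk may be shorter.
        session.save(new BinaryChunk(id, chunkNumber++, Arrays.copyOf(buffer, read)));
    }
}

/** Read until the buffer is full or the stream ends, returning the number of bytes read. */
private static int fill(InputStream in, byte[] buffer) throws IOException
{
    int total = 0;
    int n;
    while (total < buffer.length && (n = in.read(buffer, total, buffer.length - total)) > 0) {
        total += n;
    }
    return total;
}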
com.xpn.xwiki.store.BinaryStore will contain:
@param toLoad a binary object with an id number set, will be loaded.
void loadObject(BinaryObject toLoad)
@param toStore a binary object, if no id is present then it will be given one upon successful
store, if id is present then that id number will be used.
void storeObject(BinaryObject toStore)
This will be implemented by: com.xpn.xwiki.store.hibernate.HibernateBinaryStore
com.xpn.xwiki.doc.BinaryObject will contain:
void setContent(InputStream content)
OutputStream setContent()
InputStream getContent()
void getContent(OutputStream writeTo)
Note: The get function and set functions will be duplicated with input or output streams to maximize
ease of use.
This will be implemented by com.xpn.xwiki.doc.TempFileBinaryObject which will store the binary
content in a temporary FileItem (see Apache commons fileupload).
+ This will be able to provide a back end for not only attachment content, but for attachment
archive and document archive if it is so desired.
+ I have no intent of exposing it as public API at the moment.
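To make the shape concrete, here is roughly what the two interfaces could look like (a sketch only; imports are omitted and the checked exceptions are my assumption):

public interface BinaryStore
{
    /** @param toLoad a binary object with an id number set, will be loaded. */
    void loadObject(BinaryObject toLoad) throws XWikiException;

    /**
     * @param toStore a binary object, if no id is present then it will be given one upon
     *        successful store, if an id is present then that id number will be used.
     */
    void storeObject(BinaryObject toStore) throws XWikiException;
}

public interface BinaryObject
{
    void setContent(InputStream content) throws IOException;
    OutputStream setContent() throws IOException;
    InputStream getContent() throws IOException;
    void getContent(OutputStream writeTo) throws IOException;
}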
WDYT?
Caleb
Hi,
In order to implement support for icons/symbols in the rendering (see ), we need to add an API to return an icon URL based on the icon name.
public interface SkinAccessBridge
{
...
/**
 * @param iconName the standard name of an icon (it's not the name of the file on the filesystem, it's a generic
 * name, for example "success" for a success icon)
* @return the URL to the icon resource
* @since 2.6M1
*/
String getIconURL(String iconName);
}
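For example, a renderer needing the URL of the success icon would simply do (hypothetical caller code):

// Resolve the generic icon name to a URL pointing to the current skin's resource.
String successIconURL = skinAccessBridge.getIconURL("success");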
+1 from me.
Thanks
-Vincent
Hi,
We currently have 2 methods introduced in 2.5 timeframe in the WikiModel class:
String getAttachmentURL(ResourceReference attachmentReference);
String getImageURL(ResourceReference attachmentReference, Map<String, String> parameters);
I think we should merge them into a single method in charge of returning the URL of any resource reference:
String getResourceURL(ResourceReference reference, Map<String, String> parameters);
Note that this would assume that all references have a URL associated to them. It's not always true (it's true for documents, attachments, url, interwiki and the future icon/symbol but false for path and mailto). We could return null for resource types that have no associated URLs.
The reason I'm proposing this is that in order to implement support for symbol/icon I'd need to add a new method to WikiModel, getIconURL(ResourceReference iconReference), but I feel it's better to have a single getResourceURL().
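Concretely the merged method would look like this (a sketch; the javadoc captures the null contract described above):

/**
 * @param reference the reference of the resource for which to compute the URL
 * @param parameters optional parameters (used for example for image references)
 * @return the URL of the resource, or null for resource types that have no
 *         associated URL (e.g. path, mailto)
 */
String getResourceURL(ResourceReference reference, Map<String, String> parameters);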
WDYT?
Thanks
-Vincent
On 20/10/10 12:03, Caleb James DeLisle wrote:
>
> On 10/20/2010 05:33 AM, Ludovic Dubost wrote:
>> Hi,
>>
>> We do want the availability of file attachment storage (Sergiu has done an implementation during the
>> summer hackathon), but as Guillaume said it should be the choice of the administrator.
>>
>> Now concerning database storage, about Hibernate, does it mean streams are not available at all in
>> Hibernate or does it mean they don't always work?
>> If streams are available for the databases that support it, which ones support it?
> They are available, they require use of the blob type so we would have to add a column. I was warned
> about incompatibility issues. I understand that mysql and mssql stream the content back onto the
> heap before saving which ruins any memory savings. Postgres seems to support blobs but I was warned
> about strange issues, specifically this: http://in.relation.to/Bloggers/PostgreSQLAndBLOBs
> I was told that oracle has the best streaming support but I also read that oracle blob support
> requires the use of proprietary api.
>
> This is why I had opted for chunking the data instead of using blobs.
>
Indeed if we are positive that mysql will use the heap to store the full
BLOB then there is no point to this solution since it is our main database.
>> Concerning your proposal, it's interesting as indeed if we use streams for everything else, we do
>> get rid of the memory consumption issue for attachments.
>> Now I have a few concerns:
>>
>> - complexity and management of the data. What happens if we have a corrupted DB and one of the
>> chunks fails to save. We might end up with invalid content.
> I had planned on committing the transaction only after all chunks are saved, if the database has
> memory issues with large commits, another possibility would be to verify after saving and throw an
> exception if that fails.
>
That might indeed help if everything is in one transaction except that
MyISAM is not transactional so we can end up with incomplete data.
We do need a way to verify the coherency. We could consider that if the
size is incorrect we don't accept the result.
>> - we also have to solve other large items (like attachment history or recycle bin of attachments)
> This is why I favor a generic BinaryStore rather than a change to XWikiHibernateAttachmentStore.
> Another issue which will have to be addressed is the memory consumption of JRCS for AttachmentArchive.
At the same time we should avoid mixing apples and oranges. We should
not have data with different meanings in different tables.
For Attachment Archive, I'm not against a solution which stops doing
RCS. It has never been efficient anyway.
>> On a side note concerning the max_allowed_packet issue in MySQL, I was able to change that value at
>> runtime (from the mysql console). If this also works using a remote connection, maybe we could hack
>> and force a big value at runtime.
>> This would be really great because the max_allowed_packet is killing us. XWiki does not report it
>> well in many cases and almost no customer reads the documentation and sets the value properly. We
>> have also seen many cases where the database is shared with other applications and there is
>> little access to the database configuration or the ability to restart. To make it short, the
>> max_allowed_packet issue is a major issue when operating XWiki.
> ``little access to the database configuration'' This may also mean the xwiki user does not have
> permission to change the setting at runtime.
What I meant is not being allowed to restart it.
>> Before we go into large fixes for that problem, could we maybe at least check that we report errors
>> properly (on a 2.0.5 we were not for sure at least for attachment saving failure).
> The fix to http://jira.xwiki.org/jira/browse/XWIKI-5405 has changed attachments so that the content
> and meta data are all saved in a single transaction and http://jira.xwiki.org/jira/browse/XWIKI-5474
> prevents documents from being cached on save so we should have no more attachments which disappear
> when the cache is purged.
Great. This will at least make the problem show up right away.
Does 5405 protect us from having the attachment in the attachment list
with no content?
>> We should also
>> make sure we can always delete even when we cannot read the data in memory. This is also not the
>> case when we cannot read the data because it's too big or because one of the tables does not have
>> any data.
> Sounds like a test ;)
>
You mean a test for you? A test in the code? Or an XWiki test suite?
It's a bit of a complex test which requires screwing up attachment data in
every way possible and proving that you can still delete everything that is
left.
Ludovic
> Caleb
>
>> Ludovic
>>
>> On 18/10/10 19:55, Caleb James DeLisle wrote:
>>> I talked with the Hibernate people about using streams and was told that it is not supported by all
>>> databases.
>>>
>>> As an alternative to the proposal below I would like to propose a filesystem based storage mechanism.
>>> The main advantage of using the database to store everything is that administrators need only use
>>> mysqldump and they have their entire wiki backed up.
>>>
>>> If we are to abandon that requirement, we can have much faster attachment storage by using the
>>> filesystem. For this, I propose BinaryStore interface remains the same but
>>> com.xpn.xwiki.doc.BinaryObject would contain:
>>>
>>> void addContent(InputStream content)
>>>
>>> OutputStream addContent()
>>>
>>> void clear()
>>>
>>> InputStream getContent()
>>>
>>> void getContent(OutputStream writeTo)
>>>
>>> clear() would clear the underlying file whereas addContent would always append to it.
>>>
>>>
>>> The added mapping would look like this:
>>>
>>> <class name="com.xpn.xwiki.store.doc.FilesystemBinaryObject" table="filesystembinaryobject">
>>> <id name="id" column="id">
>>> <generator class="native" />
>>> </id>
>>>
>>> <property name="fileURI" type="string">
>>> <column name="fileuri" length="255" not-null="true"/>
>>> </property>
>>> </class>
>>>
>>>
>>> This would as with the original proposal be useful for not only storing attachments but attachment
>>> history, deleted attachments and even document history or deleted documents.
>>>
>>>
>>> WDYT?
>>>
>>> Caleb
>>>
>>>
>>> On 10/15/2010 04:21 PM, Caleb James DeLisle wrote:
>>>> Because the storage of large attachments is limited by database constraints and by the fact that
>>>> JDBC does not allow us to stream content out of the database, I propose we add a new database table,
>>>> binarychunk.
>>>>
>>>> The mapping will read as follows:
>>>>
>>>> <class name="com.xpn.xwiki.store.hibernate.HibernateBinaryStore$BinaryChunk" table="binarychunk">
>>>> <composite-id unsaved-value="undefined">
>>>> <key-property name="id" column="id" type="integer" />
>>>> <key-property name="chunkNumber" column="chunknumber" type="integer" />
>>>> </composite-id>
>>>>
>>>> <property name="content" type="binary">
>>>> <column name="content" length="983040" not-null="true"/>
>>>> </property>
>>>> </class>
>>>>
>>>> Notice the maximum length (983040 bytes) is a number which is divisible by many common buffer sizes
>>>> and is slightly less than the default max_allowed_packet in mysql which means that using the
>>>> binarychunk table, we could store attachments of arbitrary size without hitting mysql default
>>>> limits.
>>>>
>>>>
>>>> com.xpn.xwiki.store.BinaryStore will contain:
>>>>
>>>> @param toLoad a binary object with an id number set, will be loaded.
>>>> void loadObject(BinaryObject toLoad)
>>>>
>>>> @param toStore a binary object, if no id is present then it will be given one upon successful
>>>> store, if id is present then that id number will be used.
>>>> void storeObject(BinaryObject toStore)
>>>>
>>>> This will be implemented by: com.xpn.xwiki.store.hibernate.HibernateBinaryStore
>>>>
>>>>
>>>> com.xpn.xwiki.doc.BinaryObject will contain:
>>>>
>>>> void setContent(InputStream content)
>>>>
>>>> OutputStream setContent()
>>>>
>>>> InputStream getContent()
>>>>
>>>> void getContent(OutputStream writeTo)
>>>>
>>>> Note: The get function and set functions will be duplicated with input or output streams to maximize
>>>> ease of use.
>>>>
>>>> This will be implemented by com.xpn.xwiki.doc.TempFileBinaryObject which will store the binary
>>>> content in a temporary FileItem (see Apache commons fileupload).
>>>>
>>>>
>>>>
>>>> + This will be able to provide a back end for not only attachment content, but for attachment
>>>> archive and document archive if it is so desired.
>>>> + I have no intent of exposing it as public API at the moment.
>>>>
>>>>
>>>> WDYT?
>>>>
>>>> Caleb
>>>>
>>>
>>
>
--
Ludovic Dubost
Blog: http://blog.ludovic.org/
XWiki: http://www.xwiki.com
Skype: ldubost GTalk: ldubost
Hi devs,
I'd like to propose 3 things:
1) Add support for symbols/emoticons using our new system in XWiki Syntax 2.1, i.e.
image:symbol:<symbol name>
Another possibility is: image:icon:<icon name>
ex: image:symbol:success (for the success symbol)
2) Use the existing silk icon library and be able to reference all icons using their file names as the symbol name.
For ex, image:symbol:thumb_up
3) Modify the Box macro to accept a ResourceReference as its image parameter so that all valid image resource references can be specified (including the new symbol/icon scheme).
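For example, the box usage could then look like this (the exact parameter value syntax is only a guess at this point):
{{box image="symbol:success"}}The operation succeeded.{{/box}}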
WDYT?
This will solve the emoticon/symbol need + the generic way of adding a box with any symbol.
Open questions:
A) Do we need to handle icon size? If so, how? (answer: with an image parameter)
B) Do we want to expose all silk icons? What if we use another library later on, will we be able to support all existing silk symbols? Thus, do we need to pick a subset and settle on names independent from the underlying library?
For A) IMO we don't need it now and we can always use an image param later on if need be.
For B) I'm tempted to think that a set of well-known symbols independent from the underlying library is better (but more work since we need to define that set and the names).
Thanks
-Vincent
Something we might use?
-------- Original Message --------
Subject: [ANNOUNCE] Apache OpenWebBeans 1.0.0
Date: Tue, 19 Oct 2010 15:51:12 +0200
From: Mark Struberg <struberg(a)apache.org>
To: announce(a)apache.org
The Apache OpenWebBeans Team is proud to announce the final release of
Apache OpenWebBeans 1.0.0
Apache OpenWebBeans is an implementation of the Apache License v2
licensed JSR-299 "Context and Dependency Injection for Java" and JSR-330
"atinject". OpenWebBeans has a modular structure and provides Dependency
Injection scaling from Java SE environments up to EE6 servers with
complicated ClassLoader hierarchies.
1.0.0 implements the latest API, passes the JSR-330 TCK and the JSR-299
standalone TCK.
The release can be downloaded from
http://www.apache.org/dyn/closer.cgi/openwebbeans/1.0.0/
http://www.apache.org/dist/openwebbeans/1.0.0/
The Apache OpenWebBeans Team
Does it make sense for XWiki to automatically determine screen
resolution and apply a "small screen" skin for those browsing from
mobile devices? One that works, perhaps with
http://www.forum.nokia.com/Develop/Web/ and allows wiki pages to be
viewed and edited without a lot of extra columns, menus, etc.
Thanks,
Niels
http://nielsmayer.com
Hi!
Please, do we have methods to extract metadata from attached files in
formats like PDF, TIFF, DOC, ...? Availability of such data sometimes
relies on user input, but other times, like the size and channel
information in TIFF files, it is built-in data that could be really
useful for designing scripts to show pictures.
Thanks!
Ricardo
--
Ricardo Rodríguez
CTO
eBioTIC.
Life Sciences, Data Modeling and Information Management Systems
On 10/19/2010 12:15 PM, cjdelisle (SVN) wrote:
> Author: cjdelisle
> Date: 2010-10-19 12:15:41 +0200 (Tue, 19 Oct 2010)
> New Revision: 31959
>
> Modified:
> platform/xwiki-applications/trunk/invitation/src/main/resources/Invitation/WebHome.xml
> Log:
> XAINVITATION-14: Stop using deprecated com.xpn.xwiki.api.Context#getUtil()
>
> Modified: platform/xwiki-applications/trunk/invitation/src/main/resources/Invitation/WebHome.xml
> ===================================================================
> --- platform/xwiki-applications/trunk/invitation/src/main/resources/Invitation/WebHome.xml 2010-10-19 10:03:00 UTC (rev 31958)
> +++ platform/xwiki-applications/trunk/invitation/src/main/resources/Invitation/WebHome.xml 2010-10-19 10:15:41 UTC (rev 31959)
> @@ -965,10 +965,18 @@
> * $invalidAddresses (List<String>) this List will be populated with addresses from $allAddresses which are invalid.
> *###
> #macro(validateAddressFormat, $allAddresses, $emailRegex, $invalidAddresses)
> + ## Perl/javascript regexes look like /^.*/
> + ## java does not like the / at beginning and end.
> + #if($emailRegex.length()> 1)
> + #set($emailRegexInternal = $emailRegex.substring(1, $mathtool.add($emailRegex.length(), -1)))
I'd like to also deprecate the old syntax, so you should have support
for Java regexps as well. So, if it starts with and ends with /, remove
them from the regexp.
Also, isn't it possible to have some flags after the ending /? Maybe it
should be something like substringAfterLast('/').
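Something like this maybe (untested sketch, handling optional trailing flags):

// Convert a Perl/JS-style /regex/flags into a plain Java regexp by stripping
// the delimiters and anything after the last '/'.
String toJavaRegex(String regex)
{
    int lastSlash = regex.lastIndexOf('/');
    if (regex.startsWith("/") && lastSlash > 0) {
        return regex.substring(1, lastSlash);
    }
    // Already a plain Java regexp, keep it as is.
    return regex;
}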
> + #else
> + ## I don't expect this but want to maintain compatability.
> + #set($emailRegexInternal = $emailRegex)
> + #end
> #foreach($address in $allAddresses)
> #if("$!address" == '')
> ## Empty address, do nothing.
> - #elseif(!$xcontext.getUtil().match($emailRegex, $address))
> + #elseif($regextool.find($address, $emailRegexInternal).size() == 0)
> #set($discard = $invalidAddresses.add($address))
> #end
> #end
--
Sergiu Dumitriu
http://purl.org/net/sergiu/
Now that InvitationManager is replaced with the Invitation Application and XWorkspaces has been
retired, the only "living" dependency on InvitationManager or SpaceManager is Curriki. I propose we:
1. Move these projects to contrib/retired.
2. Stop building them in Hudson.
WDYT?
Caleb
Hi devs,
I would like to release 2.4.4 before 2.5 final is released, to close the
current stable branch before the new one starts, as usual.
Here is my +1.
Thanks,
--
Thomas Mortagne
I would like to propose establishing a documented rule for trailing whitespace which follows the
current de facto standard laid out by the IDEs.
Trailing whitespace in Java files is unacceptable, except on an empty line in a javadoc comment, in
which case a single space is required.
/**
* My Cool Method.
* <----- trailing whitespace goes here.
* @param something...
*/
WDYT?
Caleb
As a summary:
I'm +1 with alternate storage mechanisms as long as we can turn them on
optionally.
The storage systems should be:
* binary chunk
* file system
We also need a migration strategy, and we need to decide which one would be the
default.
In any case, short term, we should test it well before making a new
system the default.
On 20/10/10 12:55, Caleb James DeLisle wrote:
>
> On 10/20/2010 06:28 AM, Ludovic Dubost wrote:
>> On 20/10/10 12:03, Caleb James DeLisle wrote:
>>> On 10/20/2010 05:33 AM, Ludovic Dubost wrote:
>>>> Hi,
>>>>
>>>> We do want the availability of file attachment storage (Sergiu has done an implementation during the
>>>> summer hackathon), but as Guillaume said it should be the choice of the administrator.
>>>>
>>>> Now concerning database storage, about Hibernate, does it mean streams are not available at all in
>>>> Hibernate or does it mean they don't always work?
>>>> If streams are available for the databases that support it, which ones support it?
>>> They are available, they require use of the blob type so we would have to add a column. I was warned
>>> about incompatibility issues. I understand that mysql and mssql stream the content back onto the
>>> heap before saving which ruins any memory savings. Postgres seems to support blobs but I was warned
>>> about strange issues, specifically this: http://in.relation.to/Bloggers/PostgreSQLAndBLOBs
>>> I was told that oracle has the best streaming support but I also read that oracle blob support
>>> requires the use of proprietary api.
>>>
>>> This is why I had opted for chunking the data instead of using blobs.
>>>
>> Indeed if we are positive that mysql will use the heap to store the full BLOB then there is no point
>> to this solution since it is our main database.
>>>> Concerning your proposal, it's interesting as indeed if we use streams for everything else, we do
>>>> get rid of the memory consumption issue for attachments.
>>>> Now I have a few concerns:
>>>>
>>>> - complexity and management of the data. What happens if we have a corrupted DB and one of the
>>>> chunks fails to save. We might end up with invalid content.
>>> I had planned on committing the transaction only after all chunks are saved, if the database has
>>> memory issues with large commits, another possibility would be to verify after saving and throw an
>>> exception if that fails.
>>>
>> That might indeed help if everything is in one transaction except that MyISAM is not transactional so
>> we can end up with incomplete data.
>> We do need a way to verify the coherency. We could consider that if the size is incorrect we don't
>> accept the result.
> It sounds like there might be a need for coherency verification which runs only on MySQL with MyISAM.
>
>>>> - we also have to solve other large items (like attachment history or recycle bin of attachments)
>>> This is why I favor a generic BinaryStore rather than a change to XWikiHibernateAttachmentStore.
>>> Another issue which will have to be addressed is the memory consumption of JRCS for
>>> AttachmentArchive.
>> At the same time we should avoid mixing apples and oranges. We should not have data with different
>> meanings in different tables.
> Do you mean not have data with different meaning in the same table? If so, I'm not sure I'm sold on
> the idea since it's how XWikiStringProperty works (holds string content for many different types of
> objects). A BinaryChunk table would hold data which would not make sense to query so I think
> anything which needed to store binary content in the database should be able to use the same mechanism.
It's true, so we could go with one big table.
>> For Attachment Archive, I'm not against a solution which stops doing RCS. It has never been
>> efficient anyway.
> +1 Trying to imagine how that would be done.
>
>>>> On a side note concerning the max_allowed_packet issue in MySQL, I was able to change that value at
>>>> runtime (from the mysql console). If this also works using a remote connection, maybe we could hack
>>>> and force a big value at runtime.
>>>> This would be really great because the max_allowed_packet is killing us. XWiki does not report it
>>>> well in many cases and almost no customer reads the documentation and sets the value properly. We
>>>> have also seen many cases where the database is shared with other applications and there is
>>>> little access to the database configuration or the ability to restart. To make it short, the
>>>> max_allowed_packet issue is a major issue when operating XWiki.
>>> ``little access to the database configuration'' This may also mean the xwiki user does not have
>>> permission to change the setting at runtime.
>> What I meant is not being allowed to restart it.
>>>> Before we go into large fixes for that problem, could we maybe at least check that we report errors
>>>> properly (on a 2.0.5 we were not for sure at least for attachment saving failure).
>>> The fix to http://jira.xwiki.org/jira/browse/XWIKI-5405 has changed attachments so that the content
>>> and meta data are all saved in a single transaction and http://jira.xwiki.org/jira/browse/XWIKI-5474
>>> prevents documents from being cached on save so we should have no more attachments which disappear
>>> when the cache is purged.
>> Great. This will at least make the problem show up right away.
>> Does 5405 protect us from having the attachment in the attachment list with no content?
> If the content fails to save (in a transactional database) the attachment will not save either.
>
>>>> We should also
>>>> make sure we can always delete even when we cannot read the data in memory. This is also not the
>>>> case when we cannot read the data because it's too big or because one of the tables does not have
>>>> any data.
>>> Sounds like a test ;)
>>>
>> You mean a test for you? A test in the code? Or an XWiki test suite?
>> It's a bit of a complex test which requires screwing up attachment data in every way possible and proving
>> that you can still delete everything that is left.
> I have thus far been abusing ui-tests for the types of tests which require presence of the database.
> Adding a set of unit style tests which have a database present might be a good idea.
>
> Caleb
>
>
>> Ludovic
>>
>>> Caleb
>>>
>>>> Ludovic
>>>>
>>>> On 18/10/10 19:55, Caleb James DeLisle wrote:
>>>>> I talked with the Hibernate people about using streams and was told that it is not supported by all
>>>>> databases.
>>>>>
>>>>> As an alternative to the proposal below I would like to propose a filesystem based storage
>>>>> mechanism.
>>>>> The main advantage of using the database to store everything is that administrators need only use
>>>>> mysqldump and they have their entire wiki backed up.
>>>>>
>>>>> If we are to abandon that requirement, we can have much faster attachment storage by using the
>>>>> filesystem. For this, I propose BinaryStore interface remains the same but
>>>>> com.xpn.xwiki.doc.BinaryObject would contain:
>>>>>
>>>>> void addContent(InputStream content)
>>>>>
>>>>> OutputStream addContent()
>>>>>
>>>>> void clear()
>>>>>
>>>>> InputStream getContent()
>>>>>
>>>>> void getContent(OutputStream writeTo)
>>>>>
>>>>> clear() would clear the underlying file whereas addContent would always append to it.
>>>>>
>>>>>
>>>>> The added mapping would look like this:
>>>>>
>>>>> <class name="com.xpn.xwiki.store.doc.FilesystemBinaryObject" table="filesystembinaryobject">
>>>>> <id name="id" column="id">
>>>>> <generator class="native" />
>>>>> </id>
>>>>>
>>>>> <property name="fileURI" type="string">
>>>>> <column name="fileuri" length="255" not-null="true"/>
>>>>> </property>
>>>>> </class>
>>>>>
>>>>>
>>>>> This would as with the original proposal be useful for not only storing attachments but attachment
>>>>> history, deleted attachments and even document history or deleted documents.
>>>>>
>>>>>
>>>>> WDYT?
>>>>>
>>>>> Caleb
>>>>>
>>>>>
>>>>> On 10/15/2010 04:21 PM, Caleb James DeLisle wrote:
>>>>>> Because the storage of large attachments is limited by database constraints and by the fact that
>>>>>> JDBC does not allow us to stream content out of the database, I propose we add a new database
>>>>>> table,
>>>>>> binarychunk.
>>>>>>
>>>>>> The mapping will read as follows:
>>>>>>
>>>>>> <class name="com.xpn.xwiki.store.hibernate.HibernateBinaryStore$BinaryChunk" table="binarychunk">
>>>>>> <composite-id unsaved-value="undefined">
>>>>>> <key-property name="id" column="id" type="integer" />
>>>>>> <key-property name="chunkNumber" column="chunknumber" type="integer" />
>>>>>> </composite-id>
>>>>>>
>>>>>> <property name="content" type="binary">
>>>>>> <column name="content" length="983040" not-null="true"/>
>>>>>> </property>
>>>>>> </class>
>>>>>>
>>>>>> Notice the maximum length (983040 bytes) is a number which is divisible by many common buffer
>>>>>> sizes
>>>>>> and is slightly less than the default max_allowed_packet in mysql which means that using the
>>>>>> binarychunk table, we could store attachments of arbitrary size without hitting mysql default
>>>>>> limits.
>>>>>>
>>>>>>
>>>>>> com.xpn.xwiki.store.BinaryStore will contain:
>>>>>>
>>>>>> @param toLoad a binary object with an id number set, will be loaded.
>>>>>> void loadObject(BinaryObject toLoad)
>>>>>>
>>>>>> @param toStore a binary object, if no id is present then it will be given one upon successful
>>>>>> store, if id is present then that id number will be used.
>>>>>> void storeObject(BinaryObject toStore)
>>>>>>
>>>>>> This will be implemented by: com.xpn.xwiki.store.hibernate.HibernateBinaryStore
>>>>>>
>>>>>>
>>>>>> com.xpn.xwiki.doc.BinaryObject will contain:
>>>>>>
>>>>>> void setContent(InputStream content)
>>>>>>
>>>>>> OutputStream setContent()
>>>>>>
>>>>>> InputStream getContent()
>>>>>>
>>>>>> void getContent(OutputStream writeTo)
>>>>>>
>>>>>> Note: The get function and set functions will be duplicated with input or output streams to
>>>>>> maximize
>>>>>> ease of use.
>>>>>>
>>>>>> This will be implemented by com.xpn.xwiki.doc.TempFileBinaryObject which will store the binary
>>>>>> content in a temporary FileItem (see Apache commons fileupload).
>>>>>>
>>>>>>
>>>>>>
>>>>>> + This will be able to provide a back end for not only attachment content, but for attachment
>>>>>> archive and document archive if it is so desired.
>>>>>> + I have no intent of exposing it as public API at the moment.
>>>>>>
>>>>>>
>>>>>> WDYT?
>>>>>>
>>>>>> Caleb
>>>>>>
>>>>>
>>
>
--
Ludovic Dubost
Blog: http://blog.ludovic.org/
XWiki: http://www.xwiki.com
Skype: ldubost GTalk: ldubost
In order to decrease the load on the database and heap from large attachments, I would like to
propose a filesystem based storage mechanism. I propose the addition of 2 interfaces for storing
large binary content, a database table to track the files on the filesystem, a new configuration
parameter for the location of persistent filesystem storage, and an implementation of each interface.
The reason for writing an abstract binary storage interface rather than a new implementation of
AttachmentStore, AttachmentVersioningStore, and AttachmentRecycleBinStore is that the code would be
duplicated or should I say triplicated. BinaryStore will provide a means for not only storage of
attachments but storage of other large items which we may decide we want in the future.
I plan to keep the current implementations of AttachmentStore, AttachmentVersioningStore and
AttachmentRecycleBinStore intact so it will be the user's choice how they store attachments.
interface com.xpn.xwiki.store.BinaryStore will contain:
@param toLoad a binary object with an id number set, will be loaded.
void loadObject(BinaryObject toLoad)
@param toStore a binary object, if no id is present then it will be given one upon successful
store, if id is present then that id number will be used.
void storeObject(BinaryObject toStore)
This will be implemented by: com.xpn.xwiki.store.hibernate.HibernateBinaryStore
com.xpn.xwiki.doc.BinaryObject would contain:
void addContent(InputStream content)
OutputStream addContent()
void clear()
InputStream getContent()
void getContent(OutputStream writeTo)
clear() would clear the underlying file whereas addContent would always append to it.
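A rough sketch of those semantics on top of a plain File (illustrative only; imports and error handling omitted):

public class FilesystemBinaryObject // implements BinaryObject
{
    private final File file;

    public FilesystemBinaryObject(File file)
    {
        this.file = file;
    }

    /** Open a stream which appends to the underlying file. */
    public OutputStream addContent() throws IOException
    {
        return new FileOutputStream(this.file, true);
    }

    /** Truncate the underlying file. */
    public void clear() throws IOException
    {
        new FileOutputStream(this.file, false).close();
    }

    /** Stream the current content back out. */
    public InputStream getContent() throws IOException
    {
        return new FileInputStream(this.file);
    }
}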
The added mapping would look like this:
<class name="com.xpn.xwiki.store.doc.FilesystemBinaryObject" table="filesystembinaryobject">
<id name="id" column="id">
<generator class="native" />
</id>
<property name="fileURI" type="string">
<column name="fileuri" length="255" not-null="true"/>
</property>
</class>
WDYT?
Caleb
Hi!
Following http://platform.xwiki.org/xwiki/bin/view/DevGuide/QueryGuide
and a lot of help from people on this list, I got working this query:
#set($query = ", BaseObject as obj, StringProperty as prop where
doc.fullName = obj.name and obj.className='Users.PdrUserClass' and
obj.id=prop.id.id and prop.id.name='Type' and (prop.value like '" +
$usertypeOne + "' or prop.value like '" + $usertypeTwo + "') order by
doc.fullName asc")
I now need to add another criterion with an *AND* operator, querying a
different property (Withdrawal) of the same class (Users.PdrUserClass).
I'm miserably failing to construct this query; some of my attempts lead
MySQL to eat 99% of the processor time.
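What I am after would presumably be a second StringProperty join on the same object id, something like this (untested; $withdrawalValue is a placeholder for the value I need to match):
#set($query = ", BaseObject as obj, StringProperty as prop, StringProperty as prop2 where
doc.fullName = obj.name and obj.className='Users.PdrUserClass' and
obj.id=prop.id.id and prop.id.name='Type' and (prop.value like '" +
$usertypeOne + "' or prop.value like '" + $usertypeTwo + "') and
obj.id=prop2.id.id and prop2.id.name='Withdrawal' and prop2.value like '" +
$withdrawalValue + "' order by doc.fullName asc")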
Please, could you help me with this query? Thanks!
Ricardo
--
Ricardo Rodríguez
CTO
eBioTIC.
Life Sciences, Data Modeling and Information Management Systems
Hi,
From time to time, I see developers sending proposals to this list. Of
course I'm not a developer, but I would like to know the best
channel for proposing changes that I find useful for XE/XEM or other
applications.
Which is better and/or advisable: to create a Jira issue or to send a
message to this list or the users list? Thanks!
Best,
Ricardo
--
Ricardo Rodríguez
CTO
eBioTIC.
Life Sciences, Data Modeling and Information Management Systems
Hi devs,
When starting the standalone XE jetty+hsqldb distribution, the office
importer admin UI is unusable because of a rights issue. Now the
ExtensionManager admin UI has the same issue.
The real issue is that in the admin UI the programming right is checked on
XWiki.XWikiPreferences, which has XWiki.XWikiGuest as author.
* applications that provide their own admin UI should not be executed
in the context of XWiki.XWikiPreferences IMO, but in the document which
contains the ConfigurableClass object.
* is it really needed that XWiki.XWikiPreferences has
XWiki.XWikiGuest as author?
We really need to find a solution. It's a pain for users for no good
reason at all.
--
Thomas Mortagne
Hi,
I'm getting this error when running the tests for xwiki-observation-remote:
Caused by: java.lang.RuntimeException: the type of the stack (IPv6) and the user supplied addresses (IPv4) don't match: localhost/127.0.0.1.
Use system props java.net.preferIPv4Stack or java.net.preferIPv6Addresses to pick the correct stack
at org.jgroups.stack.Configurator.setupProtocolStack(Configurator.java:108)
at org.jgroups.stack.Configurator.setupProtocolStack(Configurator.java:54)
at org.jgroups.stack.ProtocolStack.setup(ProtocolStack.java:453)
at org.jgroups.JChannel.init(JChannel.java:1702)
This thread explains the problem:
http://old.nabble.com/Protocol-stack-issue-on-dual-stack-(IPv4-and-v6)-mach…
This issue has been created to track the problem:
https://jira.jboss.org/browse/JGRP-1152
I've tested with jgroups 2.10.0.Beta2 and it works fine.
I've seen on http://jira.xwiki.org/jira/browse/XWIKI-4917 that jgroups 2.9 and above require JDK 6+.
What do we do?
Should we set the JGroups property to force IPv4 or IPv6?
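If we go the property route it's a one-liner before JGroups initializes (the property name comes from the error message above; it can also be passed as a JVM argument):

// Force the IPv4 stack before the JChannel is created; equivalent to
// passing -Djava.net.preferIPv4Stack=true on the command line.
System.setProperty("java.net.preferIPv4Stack", "true");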
Thanks
-Vincent
Hi devs,
I'd like to move all XE modules to org.xwiki instead of com.xpn for XE 2.6.
Rationale:
* XE is an open source project and com is commercial
* Our rule was to keep com.xpn for old stuff but this doesn't make sense for XE modules since they're not old stuff
+1 from me
Thanks
-Vincent
I installed XEM and am getting the following error.
Any idea why I am getting this? I guess there
is no MySQL connectivity problem since the
database tables were created when I tried to
instantiate XEM.
type Exception report
message
description The server encountered an internal error () that prevented
it from fulfilling this request.
exception
javax.servlet.ServletException: com.xpn.xwiki.XWikiException: Error
number 3 in 0: Could not initialize main XWiki context
Wrapped Exception: Error number 3201 in 3: Exception while saving
document xwiki:XWiki.XWikiPreferences
Wrapped Exception: Failed to commit or rollback transaction. Root cause []
org.apache.struts.action.RequestProcessor.processException(RequestProcessor.java:535)
org.apache.struts.action.RequestProcessor.processActionPerform(RequestProcessor.java:433)
org.apache.struts.action.RequestProcessor.process(RequestProcessor.java:236)
org.apache.struts.action.ActionServlet.process(ActionServlet.java:1196)
org.apache.struts.action.ActionServlet.doGet(ActionServlet.java:414)
javax.servlet.http.HttpServlet.service(HttpServlet.java:621)
javax.servlet.http.HttpServlet.service(HttpServlet.java:722)
com.xpn.xwiki.web.ActionFilter.doFilter(ActionFilter.java:129)
com.xpn.xwiki.wysiwyg.server.filter.ConversionFilter.doFilter(ConversionFilter.java:152)
com.xpn.xwiki.plugin.webdav.XWikiDavFilter.doFilter(XWikiDavFilter.java:68)
com.xpn.xwiki.web.SavedRequestRestorerFilter.doFilter(SavedRequestRestorerFilter.java:304)
com.xpn.xwiki.web.SetCharacterEncodingFilter.doFilter(SetCharacterEncodingFilter.java:112)
root cause
com.xpn.xwiki.XWikiException: Error number 3 in 0: Could not
initialize main XWiki context
Wrapped Exception: Error number 3201 in 3: Exception while saving
document xwiki:XWiki.XWikiPreferences
Wrapped Exception: Failed to commit or rollback transaction. Root cause []
com.xpn.xwiki.XWiki.getMainXWiki(XWiki.java:402)
com.xpn.xwiki.XWiki.getXWiki(XWiki.java:471)
com.xpn.xwiki.web.XWikiAction.execute(XWikiAction.java:136)
com.xpn.xwiki.web.XWikiAction.execute(XWikiAction.java:116)
org.apache.struts.action.RequestProcessor.processActionPerform(RequestProcessor.java:431)
org.apache.struts.action.RequestProcessor.process(RequestProcessor.java:236)
org.apache.struts.action.ActionServlet.process(ActionServlet.java:1196)
org.apache.struts.action.ActionServlet.doGet(ActionServlet.java:414)
javax.servlet.http.HttpServlet.service(HttpServlet.java:621)
javax.servlet.http.HttpServlet.service(HttpServlet.java:722)
com.xpn.xwiki.web.ActionFilter.doFilter(ActionFilter.java:129)
com.xpn.xwiki.wysiwyg.server.filter.ConversionFilter.doFilter(ConversionFilter.java:152)
com.xpn.xwiki.plugin.webdav.XWikiDavFilter.doFilter(XWikiDavFilter.java:68)
com.xpn.xwiki.web.SavedRequestRestorerFilter.doFilter(SavedRequestRestorerFilter.java:304)
com.xpn.xwiki.web.SetCharacterEncodingFilter.doFilter(SetCharacterEncodingFilter.java:112)
In order to make direct SQL queries as similar to HQL as possible (and as easy as possible), I
propose that for any new database columns or tables, we begin using column names which match the
names of the Java object properties, and table names which match the class name.
I would like to put a note at the top of xwiki.hbm.xml which says:
<!--
All new table names should be all lower case versions of the name of the class which they represent
for example:
XWikiDocument should map to a table called xwikidocument
All new column names should be all lower case versions of the property which they represent for example:
fullName should map to a column called fullname
-->
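A hypothetical mapping following this rule (using the example names from the note above) would look like:

<class name="com.xpn.xwiki.doc.XWikiDocument" table="xwikidocument">
    <property name="fullName" type="string">
        <column name="fullname" not-null="true"/>
    </property>
</class>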
WDYT?
Caleb
Hi,
I'd like to rename the {{useravatar}} macro to {{avatar}} (i.e. deprecate the {{useravatar}} alias) + rename the current "username" parameter to "name":
{{avatar name="XWiki.VincentMassol" /}}
Rationale:
* Macros should be as easy to use as possible and avatar is easier and shorter than useravatar (for several reasons: it's shorter and there's no need to remember if there's an underscore, a dash, etc.)
* We'll want to use that macro not only for user avatars but also for other potential use cases (group avatars for example). Note that the macro can autodiscover if the avatar reference is a user or a group, but in case that isn't possible for other types it's always possible to add a type parameter later on.
Here's my +1
Thanks
-Vincent
Hi,
We put unit tests in the same package as the code they're testing. But for functional tests we need to define a best practice. Right now we have a mix of different strategies in enterprise tests for example and I'd like to homogenize this.
I'm proposing to use: org.xwiki.test.*
For example for clustering tests in distribution-tests/cluster-tests, I'm proposing:
org.xwiki.test.cluster
org.xwiki.test.cluster.framework
Here's my +1
Thanks
-Vincent