Because the storage of large attachments is limited by database constraints and by the fact
that JDBC does not allow us to stream content out of the database, I propose we add a new
database table, binarychunk.
The mapping will read as follows:
<class name="com.xpn.xwiki.store.hibernate.HibernateBinaryStore$BinaryChunk"
       table="binarychunk">
  <composite-id unsaved-value="undefined">
    <key-property name="id" column="id" type="integer"/>
    <key-property name="chunkNumber" column="chunknumber" type="integer"/>
  </composite-id>
  <property name="content" type="binary">
    <column name="content" length="983040" not-null="true"/>
  </property>
</class>
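For reference, a minimal sketch of what the mapped class could look like (not part of the
proposal; the field types follow the mapping, and equals()/hashCode() are required by
Hibernate on a class mapped with a composite-id):

public class HibernateBinaryStore
{
    // Skeleton only; the actual store logic is omitted.

    /** POJO mapped by the binarychunk mapping above. */
    public static class BinaryChunk implements java.io.Serializable
    {
        private int id;            // id of the binary object this chunk belongs to
        private int chunkNumber;   // position of this chunk within the binary object
        private byte[] content;    // at most 983040 bytes per chunk

        public int getId() { return this.id; }
        public void setId(int id) { this.id = id; }

        public int getChunkNumber() { return this.chunkNumber; }
        public void setChunkNumber(int chunkNumber) { this.chunkNumber = chunkNumber; }

        public byte[] getContent() { return this.content; }
        public void setContent(byte[] content) { this.content = content; }

        // Hibernate requires equals()/hashCode() over the composite key.
        public boolean equals(Object o)
        {
            if (!(o instanceof BinaryChunk)) {
                return false;
            }
            BinaryChunk other = (BinaryChunk) o;
            return this.id == other.id && this.chunkNumber == other.chunkNumber;
        }

        public int hashCode()
        {
            return 31 * this.id + this.chunkNumber;
        }
    }
}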
Notice the maximum length (983040 bytes) is divisible by many common buffer sizes
(for example 4096, 8192 and 65536) and is slightly less than the default max_allowed_packet
in MySQL, which means that using the binarychunk table we could store attachments of
arbitrary size without hitting MySQL's default limits.
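To illustrate the chunking (purely a sketch with made-up names, not proposed API), splitting
an input stream into rows for the binarychunk table could look like this:

import java.io.IOException;
import java.io.InputStream;
import java.util.ArrayList;
import java.util.List;

public class ChunkSplitter
{
    /** Maximum number of bytes stored in one binarychunk row. */
    private static final int CHUNK_SIZE = 983040;

    /**
     * Read the stream into byte arrays of at most CHUNK_SIZE bytes each,
     * in the order they would be stored as chunkNumber 0, 1, 2, ...
     */
    public static List<byte[]> split(InputStream in) throws IOException
    {
        List<byte[]> chunks = new ArrayList<byte[]>();
        byte[] buffer = new byte[CHUNK_SIZE];
        int filled = 0;
        int read;
        while ((read = in.read(buffer, filled, CHUNK_SIZE - filled)) != -1) {
            filled += read;
            if (filled == CHUNK_SIZE) {
                // Buffer is full: this becomes one chunk, start a fresh one.
                chunks.add(buffer);
                buffer = new byte[CHUNK_SIZE];
                filled = 0;
            }
        }
        if (filled > 0) {
            // Last, partially filled chunk.
            byte[] last = new byte[filled];
            System.arraycopy(buffer, 0, last, 0, filled);
            chunks.add(last);
        }
        return chunks;
    }
}

In the real store each chunk would presumably be flushed to the database as soon as it is
filled, rather than collected in a list, so that only one chunk is ever held in memory.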
com.xpn.xwiki.store.BinaryStore will contain:

@param toLoad a binary object with an id number set, will be loaded.
void loadObject(BinaryObject toLoad)

@param toStore a binary object; if no id is present then it will be given one upon
successful store, if an id is present then that id number will be used.
void storeObject(BinaryObject toStore)
This will be implemented by: com.xpn.xwiki.store.hibernate.HibernateBinaryStore
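Expressed as a Java interface that would read roughly as follows (a sketch; the package
comes from the class name above and exception declarations are left out):

package com.xpn.xwiki.store;

import com.xpn.xwiki.doc.BinaryObject;

public interface BinaryStore
{
    /**
     * @param toLoad a binary object with an id number set, will be loaded.
     */
    void loadObject(BinaryObject toLoad);

    /**
     * @param toStore a binary object; if no id is present then it will be given one
     *            upon successful store, if an id is present then that id number will be used.
     */
    void storeObject(BinaryObject toStore);
}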
com.xpn.xwiki.doc.BinaryObject will contain:
void setContent(InputStream content)
OutputStream setContent()
InputStream getContent()
void getContent(OutputStream writeTo)
Note: the get and set functions are each offered in both an InputStream and an OutputStream
form to maximize ease of use.
This will be implemented by com.xpn.xwiki.doc.TempFileBinaryObject, which will store the
binary content in a temporary FileItem (see Apache Commons FileUpload).
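For completeness, the same methods sketched as an interface (the IOException declarations
and the javadoc wording are my reading of the stream variants, not part of the proposal):

package com.xpn.xwiki.doc;

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public interface BinaryObject
{
    /** Read the given stream and use it as the new content. */
    void setContent(InputStream content) throws IOException;

    /** @return an OutputStream; whatever the caller writes to it becomes the new content. */
    OutputStream setContent() throws IOException;

    /** @return an InputStream over the stored content. */
    InputStream getContent() throws IOException;

    /** Copy the stored content into the given stream. */
    void getContent(OutputStream writeTo) throws IOException;
}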
+ This will be able to provide a back end not only for attachment content, but also for the
attachment archive and the document archive, if so desired.
+ I have no intention of exposing it as a public API at the moment.
WDYT?
Caleb