Hi Sebastien,
I am really sorry for the late reply.
One problem remains: the script I wrote to transfer data from an internal
table into XWiki does not run to completion. I have about 200,000 entries,
but the program transfers only 52,000 of them and then stops without any
error message (the program finishes, but not all of the data ends up in the
XWiki database).
Sébastien Gaïde wrote:
- Using this program to create documents automatically works well for a
few documents:
XW = xwiki.XWiki;
XWContext = context.context;
XWikiDocument NewEntry = new XWikiDocument(request.getParameter("web"), SourceValue);
NewEntry.setContent(EntryContent);
XW.saveDocument(NewEntry, XWContext);
Could you please send us the real script you are using? (I mean, as
'real' as possible.)
Here it is :
<form action="$doc.name" enctype="multipart/form-data" method="get">
Space <input type="text" name="web" value="" size="20" />
<input type="submit" value="Transfer"
       onclick="form.datafile.value = form.filepath.value"/>
</form>
<%
import com.xpn.xwiki.doc.XWikiDocument
import java.sql.*

if (request.getParameter("web") != null) {
    // database credentials
    String user = "xwiki";
    String password = "xwiki";
    // connection: the table is eijiro, with columns eword (text type)
    // and jword (text type), encoded in UTF-8
    String url = "jdbc:mysql://localhost/xwiki" + "?user=" + user +
        "&password=" + password + "&autoReconnect=true" +
        "&characterEncoding=utf8";
    conn = DriverManager.getConnection(url, user, password)
    Statement stmt = conn.createStatement()
    // fetch the records
    SQLstat = "select * from eijiro";
    ResultSet st = stmt.executeQuery(SQLstat);
    recordnbre = 0;
    while (st.next()) {
        SourceLangue = "en"
        SourceValue = st.getString("eword")
        ExpressionKataPronunciation = "?"
        TargetLangue = "jp"
        ExpressionValue = st.getString("jword"); // several translations are possible!
        EntryContent = "* *Headword* (" + SourceLangue + ") \n{code} " + SourceValue +
            "{code}\n * *Translation* (" + TargetLangue + ") \n{code} " +
            ExpressionValue + "{code}"
        // document creation
        XW = xwiki.XWiki;
        XWContext = context.context;
        XWikiDocument NewEntry = new XWikiDocument(request.getParameter("web"), SourceValue);
        NewEntry.setContent(EntryContent);
        XW.saveDocument(NewEntry, XWContext);
        recordnbre++
    }
    stmt.close()
    conn.close()
}
%>
===============================END
At the end of the loop, XWiki displays an "out of memory!" error.
Have you got a stack trace?
No, sorry for that.
But when I take the same program and run it from the shell (javac, java)
to transfer these 10,000 entries into a table in the XWiki database, it
is done in less than one minute.
Could you please send us the source of this?
Here it is :
import java.io.*;
import java.sql.*;

/**
 * The table name is eijiro: eword (text type) and jword (text type).
 */
public class toDBs {
    public static void main(String[] args) {
        // dictionary data text file [eword,jwords[jw1<\t>jw2..]<\n>]
        String file = "./utf8-gdbm.txt";
        Connection conn = null;
        // database connection data
        String user = "xwiki";
        String password = "xwiki";
        String url = "jdbc:mysql://localhost/xwiki" + "?user=" + user +
            "&password=" + password + "&autoReconnect=true" +
            "&characterEncoding=utf8";
        try {
            Reader IR_LD = new InputStreamReader(new FileInputStream(new File(file)), "UTF-8");
            BufferedReader Data_LD = new BufferedReader(IR_LD);
            Class.forName("com.mysql.jdbc.Driver").newInstance();
            conn = DriverManager.getConnection(url, user, password);
            Statement stmt = conn.createStatement();
            String line;
            String eword;
            String jword;
            int compter = 0;
            // read the file line by line
            while ((line = Data_LD.readLine()) != null) {
                compter++;
                // the field separator is ","
                String[] split = line.split(",");
                // escape "'" as "\'"
                eword = split[0].replaceAll("'", "\\\\'");
                jword = split[1].replaceAll("'", "\\\\'");
                // insert into the database
                String MySQLStat = "insert into eijiro values ('" + eword + "', '" + jword + "','')";
                stmt.executeUpdate(MySQLStat);
                // alternative inserts into TriaxSEijiro / TriaxTEijiro (disabled):
                // stmt.executeUpdate("insert into TriaxSEijiro values ('" + compter + "', '" + eword + "', 'en','neant')");
                // stmt.executeUpdate("insert into TriaxTEijiro values ('" + compter + "', '" + compter + "', '1', '" + jword + "','jp')");
            }
            // close
            Data_LD.close();
            stmt.close();
            conn.close();
        } catch (Exception e) {
            System.out.println(e);
            if (e instanceof SQLException) {
                System.out.println("SQLException: " + e.getMessage());
                System.out.println("SQLState: " + ((SQLException) e).getSQLState());
                System.out.println("Code: " + ((SQLException) e).getErrorCode());
            }
        } finally {
            if (conn != null) {
                try {
                    conn.close();
                } catch (SQLException e) {
                    System.out.println("SQLException: " + e.getMessage());
                    System.out.println("SQLState: " + e.getSQLState());
                    System.out.println("Code: " + e.getErrorCode());
                }
            }
        }
    }
}
==============================================END
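One subtle spot in the program above is the quote escaping: in `replaceAll`, both the pattern and the replacement string interpret backslashes, so four backslashes in Java source produce a single literal backslash in the output. A small standalone check (no database needed):

```java
public class EscapeCheck {
    // escape single quotes the same way toDBs does before building the SQL string
    static String escape(String s) {
        // "\\\\'" is the 3-character replacement \\' ; the replacement engine
        // turns \\ into one literal backslash, so ' becomes \'
        return s.replaceAll("'", "\\\\'");
    }

    public static void main(String[] args) {
        System.out.println(escape("it's"));   // prints: it\'s
    }
}
```

That said, building SQL by string concatenation is fragile; a `PreparedStatement` with `?` placeholders would avoid the manual escaping entirely.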
I'm generating wikis automatically using the XWiki APIs; the biggest wiki
created holds 165,000 pages (each page containing between one and five
objects). The only problem I had was with the archive handling, which I
had to disable (not a problem, since the pages are generated, not edited).
This was with an old XWiki version (svn 1226); since then the archive
system relies on another framework, so this may no longer be an issue (we
are still using the same old XWiki version).
This kind of functionality interests me; could you explain how you do
that? Is the transfer done by an external program or by a script
integrated in the XWiki core?
In fact, I tried to transfer 300,000 entries into the doc table of the
XWiki database (I filled in all the data and also created a unique ID for
each document). The only problem was with the magic document ID that the
XWiki API creates :-)!
When I take any document that I put into the doc table and change its ID
to an existing ID (of another document) that was generated automatically
by XWiki (i.e. created through the environment), that document can then
be detected and managed by XWiki; otherwise, none of the 300,000 documents
in the doc table are detected by XWiki.
I read the source code, but I did not understand the method used to
generate the ID.
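For what it's worth, in XWiki versions of that era the document ID appears to be derived from the hash code of the document's full name. The sketch below is a hypothetical reconstruction from reading `XWikiDocument`, not a documented contract; the `:language` suffix handling in particular is my reading of the code, so please verify it against your own source tree:

```java
public class DocIdSketch {
    // hypothetical reconstruction of how XWikiDocument.getId() derives the id
    // from the space/page full name; verify against your XWiki source tree
    static long documentId(String space, String name, String language) {
        String fullName = space + "." + name;
        if (language == null || language.trim().equals("")) {
            return fullName.hashCode();
        }
        return (fullName + ":" + language).hashCode();
    }

    public static void main(String[] args) {
        System.out.println(documentId("Main", "WebHome", null));
    }
}
```

If this matches your version, rows inserted directly into the doc table would need their ID column set to exactly this hash of the full name before XWiki can find them.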
I'm using only 4 indexes (I will post a reply tomorrow with the indexed
columns; I don't remember exactly and I can't access this information
right now). These indexes have greatly improved response time: at first
the biggest page (2.5 MB!) took 25 minutes to render ;-) after creating
the indexes, and some other tweaking, it took only a few seconds.
Indexes will improve response time, but they will not protect you from an
out-of-memory exception.
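One common way to keep memory bounded during such a transfer is to page through the source table in fixed-size batches instead of fetching all rows in one query. Only the offset/size arithmetic is shown runnable below; the `limit offset,count` query string is an assumption about the MySQL setup used here:

```java
import java.util.ArrayList;
import java.util.List;

public class BatchPlan {
    // compute (offset, count) pairs covering `total` rows in chunks of `size`,
    // e.g. for "select * from eijiro limit <offset>,<count>"
    static List<int[]> batches(int total, int size) {
        List<int[]> plan = new ArrayList<int[]>();
        for (int offset = 0; offset < total; offset += size) {
            plan.add(new int[] { offset, Math.min(size, total - offset) });
        }
        return plan;
    }

    public static void main(String[] args) {
        for (int[] b : batches(200000, 50000)) {
            System.out.println("select * from eijiro limit " + b[0] + "," + b[1]);
        }
    }
}
```

Closing and reopening the statement between batches also gives the JDBC driver a chance to release the buffered result set, so each batch starts from a bounded amount of memory.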
Please let me know which indexes these are...
Many thanks in advance.
Cheers
--------------------------
Youcef Bey
------------------------------------------------------------------------
--
You receive this message as a subscriber of the xwiki-dev(a)objectweb.org mailing list.
To unsubscribe: mailto:xwiki-dev-unsubscribe@objectweb.org
For general help: mailto:sympa@objectweb.org?subject=help
ObjectWeb mailing lists service home page:
http://www.objectweb.org/wws