2010/10/1 Max <publicxwiki(a)itfor.it>:
Hi all,
I need to import upgraded XAR archives into XWiki on many Linux servers running an
integrated system we built with Liferay, Zimbra, Alfresco and, of course, XWiki. Importing
them through the web interface is quite time-consuming, and the web ports are not always
easily reachable.
I looked for a way to import the XARs from the command line, but I haven't figured out how
to do it. Could you suggest a way?
I don't think I can reach my goal with shell scripting, scraping the XWiki
preferences/import page and POSTing with curl, because authentication is centrally managed
by CAS and requires client-side JavaScript to succeed.
I could use a method like the one described at:
http://code.xwiki.org/xwiki/bin/view/Snippets/LargeXARImportScriptSnippet
by defining a local directory on the server where the XARs are deployed and then running a
macro that reads the file list and imports them, but doing everything from the shell would
be better.
Any help would be much appreciated, thanks in advance!
Ciao,
Max
_______________________________________________
users mailing list
users(a)xwiki.org
http://lists.xwiki.org/mailman/listinfo/users
Hello,
You can use curl.
Here is a script I made to export pages with the same names as those contained in a XAR file:
#!/bin/bash
XWIKI_BASE_URL=http://localhost:8080

if [[ $# -ne 2 ]]; then
    echo "usage: $0 login password"
    exit 1
fi

# Build the export query string from the page names contained in the XAR
EXPORT_XAR="?format=xar&name=Current"
for page in $(jar tvf xwiki-enterprise-wiki-2.2.3.xar | sed -ne "/\/.*xml/{s/.*\s\(\S\+\)\/\(\S\+\)\.xml/\1.\2/p}"); do
    EXPORT_XAR="${EXPORT_XAR}&pages=${page}"
done

# Log in and save the session cookies
curl -b cookies.txt -c cookies.txt \
    -d "j_username=$1&j_password=$2&j_rememberme=true&submit=Log-in" \
    "${XWIKI_BASE_URL}/xwiki/bin/loginsubmit/XWiki/XWikiLogin"

# Export the selected pages as a XAR, reusing the saved cookies
curl -b cookies.txt -c cookies.txt \
    "${XWIKI_BASE_URL}/xwiki/bin/export/Main/WebHome${EXPORT_XAR}" \
    -o Current.xar
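In case it helps to see what the sed expression is doing: it turns each Space/Page.xml
entry of the jar listing into a Space.Page reference for the pages= parameter, and skips
top-level files like package.xml. The sample lines below are made up to mimic `jar tvf`
output, so you can sanity-check the pipeline without a real archive:

```shell
#!/bin/bash
# Hypothetical sample of `jar tvf some.xar` output (sizes and dates invented):
sample="  1021 Wed Oct 01 10:00:00 CEST 2010 Main/WebHome.xml
   512 Wed Oct 01 10:00:00 CEST 2010 XWiki/XWikiPreferences.xml
     0 Wed Oct 01 10:00:00 CEST 2010 package.xml"

# Each Space/Page.xml entry becomes a Space.Page reference;
# package.xml contains no '/', so the address /\/.*xml/ never selects it.
echo "$sample" | sed -ne "/\/.*xml/{s/.*\s\(\S\+\)\/\(\S\+\)\.xml/\1.\2/p}"
# → Main.WebHome
#   XWiki.XWikiPreferences
```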
Regards,
Arnaud.