CrashPlan packages for Synology NAS

UPDATE – The instructions and notes on this page apply to all three versions of the package hosted on my repo: CrashPlan, CrashPlan PRO, and CrashPlan PROe.

CrashPlan is a popular online backup solution which supports continuous syncing. With it your NAS becomes even more resilient – it could be stolen or destroyed and you would still have your data. Whilst you can pay a small monthly charge for a storage allocation in the Cloud, one neat feature CrashPlan offers is for individuals to collaboratively back up their important data to each other – for free! You could install CrashPlan on your laptop and have it continuously protecting your documents to your NAS, even whilst away from home.


CrashPlan is a Java application, and one that’s typically difficult to install on a NAS – therefore an obvious candidate for me to simplify into a package, given that I’ve made a few others. I tried and failed a few months ago, getting stuck at compiling the Jtux library for ARM CPUs (the Oracle Java for Embedded doesn’t come with any headers).

I noticed a few CrashPlan setup guides linking to my Java package, and decided to try again based on these: Kenneth Larsen’s blog post, the Vincesoft blog article for installing on ARM processor Iomega NAS units, and this handy PDF document which is a digest of all of them, complete with download links for the additional compiled ARM libraries. I used the PowerPC binaries Christophe had compiled on his chreggy.fr blog, so thanks go to him. I wanted to make sure the package didn’t require the NAS to be bootstrapped, so I picked out the few generic binaries that were needed (bash, nice and cpio) directly from the Optware repo.

UPDATE – For version 3.2 I also had to identify and then figure out how to compile Tim Macinta’s fast MD5 library, to fix the supplied libmd5.so on ARM systems (CrashPlan only distributes libraries for x86). I’m documenting that process here in case more libs are required in future versions. I identified it from the error message in log/engine_error.log and by running objdump -x libmd5.so. I could see that the same Java_com_twmacinta_util_MD5_Transform_1native function mentioned in the error was present in the x86 lib but not in my compiled libmd5.so from W3C Libwww. I took the headers from an install of OpenJDK on a regular Ubuntu desktop. I then used the Linux x86 source from the download bundle on Tim’s website – the closest match – and compiled it directly on the syno using the command line from a comment in another version of that source:
gcc -O3 -shared -I/tmp/jdk_headers/include /tmp/fast-md5/src/lib/arch/linux_x86/MD5.c -o libmd5.so

Aside from the challenges of getting the library dependencies fixed for ARM and QorIQ PowerPC systems, there was also the matter of compliance – Code 42 Software’s EULA prohibits redistribution of their work. I had to make the syno package download CrashPlan for Linux (after the end user agrees their EULA), then I had to write my own script to extract this archive and mimic their installer, since their installer is interactive. It took a lot of slow testing, but I managed it!


My most recent package version introduces handling of the automatic updates which Code 42 sometimes publish to the clients. This proved to be quite a challenge to get working, as testing was very laborious. I can confirm that it worked with the update from CrashPlan PRO 3.2 to 3.2.1, and from CrashPlan 3.2.1 to 3.4.1.


Installation

  • This package is for Marvell Kirkwood, Marvell Armada 370/XP, Intel and Freescale QorIQ/PowerQUICC PowerPC CPUs only, so please check which CPU your NAS has. It will work on an unmodified NAS, no hacking or bootstrapping required. It will only work on older PowerQUICC PowerPC models that are running DSM 5.0. It is technically possible to run CrashPlan on older DSM versions, but it requires chroot-ing to a Debian install. Christophe from chreggy.fr has recently released packages to automate this.
  • In the User Control Panel in DSM, enable the User Homes service.
  • Install the package directly from Package Center in DSM. In Settings -> Package Sources, add my package repository URL: http://packages.pcloadletter.co.uk.
  • You will need to install either one of my Java SE Embedded packages first (Java 6 or 7). Read the instructions on that page carefully too.
  • If you previously installed CrashPlan manually using the Synology Wiki, you can find uninstall instructions here.
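
If you’re unsure which CPU family your NAS has, a quick check over SSH will tell you. This is a sketch mirroring the architecture mapping near the top of installer.sh further down this page – the map_arch helper name is mine, for illustration only:

```shell
# Map `uname -m` output to the package's architecture label, mirroring
# the logic in installer.sh (map_arch is a hypothetical helper).
map_arch() {
  case "$1" in
    armv5tel|armv7l) echo "armle" ;;    # Marvell Kirkwood / Armada 370/XP
    i686|x86_64)     echo "i686" ;;     # Intel
    ppc*)            echo "ppc" ;;      # Freescale QorIQ/PowerQUICC PowerPC
    *)               echo "unknown" ;;  # check /proc/cpuinfo before installing
  esac
}
map_arch "$(uname -m)"
```

Note that the installer additionally checks /proc/cpuinfo for the Mindspeed Comcerto 2000 (armhf) found in the DS414j.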
 

Notes

  • The package downloads the CrashPlan installer directly from Code 42 Software, following acceptance of their EULA. I am complying with their wish that no one redistributes it.
  • CrashPlan is installed in headless mode – backup engine only. This is configured by a desktop client, but operates independently of it.
  • The engine daemon script checks the amount of system RAM and scales the Java heap size appropriately (up to the default maximum of 512MB). This can be overridden in a persistent way if you are backing up very large backup sets by editing /volume1/@appstore/CrashPlan/syno_package.vars. If you’re considering buying a NAS purely to use CrashPlan and intend to back up more than a few hundred GB then I strongly advise buying one of the Intel models which come with 1GB RAM and can be upgraded to 3GB very cheaply. RAM is very limited on the ARM ones. 128MB RAM on the J series means CrashPlan is running with only one fifth of the recommended heap size, so I doubt it’s viable for backing up very much at all. My DS111 has 256MB of RAM and currently backs up around 60GB with no issues. I have found that a 512MB heap was insufficient to back up more than 2TB of files on a Windows server. It kept restarting the backup engine every few minutes until I increased the heap to 1024MB.
  • As with my other syno packages, the daemon user account password is randomized when it is created using the openssl binary. DSM Package Center runs as the root user so my script starts the package using an su command. This means that you can change the password yourself and CrashPlan will still work.
  • The default location for saving friends’ backups is set to /volume1/crashplan/backupArchives (where /volume1 is your primary storage volume) to eliminate the chance of them being destroyed accidentally by uninstalling the package.
  • The first time you run the server you will need to stop it and restart it before you can connect the client. This is because a config file that’s only created on first run needs to be edited by one of my scripts. The engine is then configured to listen on all interfaces on the default port 4243.
  • Once the engine is running, you can manage it by installing CrashPlan on another computer, and editing the file conf/ui.properties on that computer so that this line:
    #serviceHost=127.0.0.1
    is uncommented (by removing the hash symbol) and set to the IP address of your NAS, e.g.:
    serviceHost=192.168.1.210
    On Windows you can also disable the CrashPlan service if you will only use the client.
  • If you need to manage CrashPlan from a remote location, I suggest you do so using SSH tunnelling as per this support document.
  • The package supports upgrading to future versions while preserving the machine identity, logs, login details, and cache. Upgrades can now take place without requiring a login from the client afterwards.
  • If you remove the package completely and re-install it later, you can re-attach to previous backups. When you log in to the Desktop Client with your existing account after a re-install, you can select “adopt computer” to merge the records, and preserve your existing backups. I haven’t tested whether this also re-attaches links to friends’ CrashPlan computers and backup sets, though the latter does seem possible in the Friends section of the GUI. It’s probably a good idea to test that this survives a package reinstall before you start relying on it. Sometimes, particularly with CrashPlan PRO I think, the adopt option is not offered. In this case you can log into CrashPlan Central and retrieve your computer’s GUID. On the CrashPlan client, double-click on the logo in the top right and you’ll enter a command line mode. You can use the GUID command to change the system’s GUID to the one you just retrieved from your account.
  • The log which is displayed in the package’s Log tab is actually the activity history. If you’re trying to troubleshoot an issue you will need to use an SSH session to inspect the two engine log files which are:
    /volume1/@appstore/CrashPlan/log/engine_output.log
    /volume1/@appstore/CrashPlan/log/engine_error.log
  • When CrashPlan downloads and attempts to run an automatic update, the script will most likely fail and stop the package. This is typically caused by syntax differences in the Synology versions of certain Linux shell commands (like rm, mv, or ps). If this happens, wait several minutes before taking action, because the update script tries to restart CrashPlan 10 times at 10 second intervals. After that, simply start the package again in Package Center and my scripts will fix the update, then run it. One final package restart is required before you can connect with the CrashPlan Desktop client (remember to update that too).
  • After their backup is seeded some users may wish to schedule the CrashPlan engine using cron so that it only runs at certain times. This is particularly useful on ARM systems because CrashPlan currently prevents hibernation while it is running (unresolved issue, reported to Code 42). To schedule, edit /etc/crontab and add the following entries for starting and stopping CrashPlan:
    55 2 * * * root /var/packages/CrashPlan/scripts/start-stop-status start
    0  4 * * * root /var/packages/CrashPlan/scripts/start-stop-status stop

    This example would configure CrashPlan to run daily between 02:55 and 04:00am. CrashPlan by default will scan the whole backup selection for changes at 3:00am so this is ideal. The simplest way to edit crontab if you’re not really confident with Linux is to install Merty’s Config File Editor package, which requires the official Synology Perl package to be installed too (since DSM 4.2). After editing crontab you will need to restart the cron daemon for the changes to take effect:
    /usr/syno/etc.defaults/rc.d/S04crond.sh stop
    /usr/syno/etc.defaults/rc.d/S04crond.sh start

    It is vitally important that you do not improvise your own startup commands or use a different account because this will most likely break the permissions on the config files, causing additional problems. The package scripts are designed to be run as root, and they will in turn invoke the CrashPlan engine using its own dedicated user account.
  • If you update DSM later, you will need to re-install the Java package or else UTF-8 and locale support will be broken by the update.
  • If you decide to sign up for one of CrashPlan’s paid backup services as a result of my work on this, I would really appreciate it if you could use this affiliate link, or consider donating using the PayPal button on the right.
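
To illustrate the heap override mentioned above: syno_package.vars ships with a commented-out USR_MAX_HEAP line, and uncommenting it raises the cap. Here’s a sketch against a scratch copy (the 1536M value is just an example – on the NAS you would edit /volume1/@appstore/CrashPlan/syno_package.vars directly):

```shell
# Demo on a scratch copy; on the NAS, edit
# /volume1/@appstore/CrashPlan/syno_package.vars instead.
VARS="$(mktemp)"
printf '#USR_MAX_HEAP=1024M\n' > "$VARS"
# Uncomment the line and set the desired heap cap (example value)
sed -i 's/^#USR_MAX_HEAP=.*/USR_MAX_HEAP=1536M/' "$VARS"
grep '^USR_MAX_HEAP=' "$VARS"
rm -f "$VARS"
```

The daemon script only honours the override when it is larger than the RAM-scaled value it would otherwise calculate.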
 

Package scripts

For information, here are the package scripts so you can see what the package is going to do. You can get more information about how packages work by reading the Synology Package wiki.

installer.sh

#!/bin/sh

#--------CRASHPLAN installer script
#--------package maintained at pcloadletter.co.uk


DOWNLOAD_PATH="http://download.crashplan.com/installs/linux/install/${SYNOPKG_PKGNAME}"
[ "${SYNOPKG_PKGNAME}" == "CrashPlan" ] && DOWNLOAD_FILE="CrashPlan_3.6.4_Linux.tgz"
[ "${SYNOPKG_PKGNAME}" == "CrashPlanPRO" ] && DOWNLOAD_FILE="CrashPlanPRO_3.6.4_Linux.tgz"
if [ "${SYNOPKG_PKGNAME}" == "CrashPlanPROe" ]; then
  [ "${WIZARD_VER_364}" == "true" ] && CPPROE_VER="3.6.4"
  [ "${WIZARD_VER_363}" == "true" ] && CPPROE_VER="3.6.3"
  [ "${WIZARD_VER_3614}" == "true" ] && CPPROE_VER="3.6.1.4"
  [ "${WIZARD_VER_353}" == "true" ] && CPPROE_VER="3.5.3"
  [ "${WIZARD_VER_341}" == "true" ] && CPPROE_VER="3.4.1"
  [ "${WIZARD_VER_33}" == "true" ] && CPPROE_VER="3.3"
  DOWNLOAD_FILE="CrashPlanPROe_${CPPROE_VER}_Linux.tgz"
fi
DOWNLOAD_URL="${DOWNLOAD_PATH}/${DOWNLOAD_FILE}"
CPI_FILE="${SYNOPKG_PKGNAME}_*.cpi"
EXTRACTED_FOLDER="${SYNOPKG_PKGNAME}-install"
OPTDIR="${SYNOPKG_PKGDEST}"
VARS_FILE="${OPTDIR}/install.vars"
SYNO_CPU_ARCH="`uname -m`"
[ "${SYNO_CPU_ARCH}" == "armv5tel" ] && SYNO_CPU_ARCH="armle"
[ "${SYNO_CPU_ARCH}" == "armv7l" ] && SYNO_CPU_ARCH="armle"
[ "${SYNO_CPU_ARCH}" == "x86_64" ] && SYNO_CPU_ARCH="i686"
grep -q "Comcerto 2000" /proc/cpuinfo && SYNO_CPU_ARCH="armhf"
NATIVE_BINS_URL="http://packages.pcloadletter.co.uk/downloads/crashplan-native-${SYNO_CPU_ARCH}.tar.xz"   
NATIVE_BINS_FILE="`echo ${NATIVE_BINS_URL} | sed -r "s%^.*/(.*)%\1%"`"
INSTALL_FILES="${DOWNLOAD_URL} ${NATIVE_BINS_URL}"
TEMP_FOLDER="`find / -maxdepth 2 -name '@tmp' | head -n 1`"
#the Manifest folder is where friends' backup data is stored
#we set it outside the app folder so it persists after a package uninstall
MANIFEST_FOLDER="/`echo $TEMP_FOLDER | cut -f2 -d'/'`/crashplan"
LOG_FILE="${SYNOPKG_PKGDEST}/log/history.log.0"
UPGRADE_FILES="syno_package.vars conf/my.service.xml conf/service.login conf/service.model"
UPGRADE_FOLDERS="log cache"
PUBLIC_FOLDER="`synoshare --get public | sed -r "/Path/!d;s/^.*\[(.*)\].*$/\1/"`"
source /etc/profile


preinst ()
{
  if [ -z ${JAVA_HOME} ]; then
    echo "Java is not installed or not properly configured. JAVA_HOME is not defined. "
    echo "Download and install the Java Synology package from http://wp.me/pVshC-z5"
    exit 1
  fi

  if [ ! -f ${JAVA_HOME}/bin/java ]; then
    echo "Java is not installed or not properly configured. The Java binary could not be located. "
    echo "Download and install the Java Synology package from http://wp.me/pVshC-z5"
    exit 1
  fi

  cd ${TEMP_FOLDER}
  for WGET_URL in ${INSTALL_FILES}
  do
    WGET_FILENAME="`echo ${WGET_URL} | sed -r "s%^.*/(.*)%\1%"`"
    [ -f ${TEMP_FOLDER}/${WGET_FILENAME} ] && rm ${TEMP_FOLDER}/${WGET_FILENAME}
    wget ${WGET_URL}
    if [[ $? != 0 ]]; then
      if [ -d ${PUBLIC_FOLDER} ] && [ -f ${PUBLIC_FOLDER}/${WGET_FILENAME} ]; then
        cp ${PUBLIC_FOLDER}/${WGET_FILENAME} ${TEMP_FOLDER}
      else     
        echo "There was a problem downloading ${WGET_FILENAME} from the official download link, "
        echo "which was \"${WGET_URL}\" "
        echo "Alternatively, you may download this file manually and place it in the 'public' shared folder. "
        exit 1
      fi
    fi
  done
 
  exit 0
}


postinst ()
{
  #extract CPU-specific additional binaries
  mkdir ${SYNOPKG_PKGDEST}/bin
  cd ${SYNOPKG_PKGDEST}/bin
  tar xJf ${TEMP_FOLDER}/${NATIVE_BINS_FILE} && rm ${TEMP_FOLDER}/${NATIVE_BINS_FILE}

  #extract main archive
  cd ${TEMP_FOLDER}
  tar xzf ${TEMP_FOLDER}/${DOWNLOAD_FILE} && rm ${TEMP_FOLDER}/${DOWNLOAD_FILE} 
  
  #extract cpio archive
  cd ${SYNOPKG_PKGDEST}
  cat "${TEMP_FOLDER}/${EXTRACTED_FOLDER}"/${CPI_FILE} | gzip -d -c | ${SYNOPKG_PKGDEST}/bin/cpio -i --no-preserve-owner
  
  echo "#uncomment to expand Java max heap size beyond prescribed value (will survive upgrades)" > ${SYNOPKG_PKGDEST}/syno_package.vars
  echo "#you probably only want more than the recommended 1024M if you're backing up extremely large volumes of files" >> ${SYNOPKG_PKGDEST}/syno_package.vars
  echo "#USR_MAX_HEAP=1024M" >> ${SYNOPKG_PKGDEST}/syno_package.vars
  echo >> ${SYNOPKG_PKGDEST}/syno_package.vars

  cp ${TEMP_FOLDER}/${EXTRACTED_FOLDER}/scripts/CrashPlanEngine ${OPTDIR}/bin
  cp ${TEMP_FOLDER}/${EXTRACTED_FOLDER}/scripts/run.conf ${OPTDIR}/bin
  mkdir -p ${MANIFEST_FOLDER}/backupArchives    
  
  #save install variables which Crashplan expects its own installer script to create
  echo TARGETDIR=${SYNOPKG_PKGDEST} > ${VARS_FILE}
  echo BINSDIR=/bin >> ${VARS_FILE}
  echo MANIFESTDIR=${MANIFEST_FOLDER}/backupArchives >> ${VARS_FILE}
  #leave these ones out which should help upgrades from Code42 to work (based on examining an upgrade script)
  #echo INITDIR=/etc/init.d >> ${VARS_FILE}
  #echo RUNLVLDIR=/usr/syno/etc/rc.d >> ${VARS_FILE}
  echo INSTALLDATE=`date +%Y%m%d` >> ${VARS_FILE}
  echo JAVACOMMON=\${JAVA_HOME}/bin/java >> ${VARS_FILE}
  cat ${TEMP_FOLDER}/${EXTRACTED_FOLDER}/install.defaults >> ${VARS_FILE}
  
  #remove temp files
  rm -r ${TEMP_FOLDER}/${EXTRACTED_FOLDER}
  
  #add firewall config
  /usr/syno/bin/servicetool --install-configure-file --package /var/packages/${SYNOPKG_PKGNAME}/scripts/${SYNOPKG_PKGNAME}.sc > /dev/null
  
  #amend CrashPlanPROe client version
  [ "${SYNOPKG_PKGNAME}" == "CrashPlanPROe" ] && sed -i -r "s/^version=\".*(-.*$)/version=\"${CPPROE_VER}\1/" /var/packages/${SYNOPKG_PKGNAME}/INFO

  exit 0
}


preuninst ()
{
  #make sure engine is stopped
  /var/packages/${SYNOPKG_PKGNAME}/scripts/start-stop-status stop
  
  exit 0
}


postuninst ()
{
  if [ -f ${SYNOPKG_PKGDEST}/syno_package.vars ]; then
    source ${SYNOPKG_PKGDEST}/syno_package.vars
  fi

  if [ "${LIBFFI_SYMLINK}" == "YES" ]; then
    sed -i "/^LIBFFI_SYMLINK/d" ${SYNOPKG_PKGDEST}/syno_package.vars
  fi
  [ -e ${OPTDIR}/lib/libffi.so.* ] && rm ${OPTDIR}/lib/libffi.so.*
  
  #delete symlinks that no longer resolve
  if [ ! -e /lib/libffi.so.5 ]; then
    [ -L /lib/libffi.so.5 ] && rm /lib/libffi.so.5
  fi
  if [ ! -e /lib/libffi.so.6 ]; then
    [ -L /lib/libffi.so.6 ] && rm /lib/libffi.so.6
    #repair libffi.so.6 symlink on DSM 5.0 now that it's included by default
    ln -s libffi.so.6.0.1 /lib/libffi.so.6
  fi
    
  #remove firewall config
  if [ "${SYNOPKG_PKG_STATUS}" == "UNINSTALL" ]; then
    /usr/syno/bin/servicetool --remove-configure-file --package ${SYNOPKG_PKGNAME}.sc > /dev/null
  fi

  #remove legacy daemon user and homedir
  DAEMON_USER="`echo ${SYNOPKG_PKGNAME} | awk '{print tolower($0)}'`"
  synouser --del ${DAEMON_USER}
  [ -e /var/services/homes/${DAEMON_USER} ] && rm -r /var/services/homes/${DAEMON_USER}
  
 exit 0
}


preupgrade ()
{
  #make sure engine is stopped
  /var/packages/${SYNOPKG_PKGNAME}/scripts/start-stop-status stop
  CONFIG_MIGRATION=

  #if identity from legacy package version exists migrate it
  DAEMON_USER="`echo ${SYNOPKG_PKGNAME} | awk '{print tolower($0)}'`"
  DAEMON_HOME="/var/services/homes/${DAEMON_USER}"
  if [ -d ${DAEMON_HOME}/.crashplan ]; then
    mkdir -p ${SYNOPKG_PKGDEST}/../${SYNOPKG_PKGNAME}_data_mig/conf
    mkdir -p /var/lib/crashplan
    mv ${DAEMON_HOME}/.crashplan/.identity /var/lib/crashplan/
    chown -R root:root /var/lib/crashplan/
    CONFIG_MIGRATION="true"
  fi

  #if identity exists back up config
  if [ -f /var/lib/crashplan/.identity ]; then
    mkdir -p ${SYNOPKG_PKGDEST}/../${SYNOPKG_PKGNAME}_data_mig/conf
    CONFIG_MIGRATION="true"
  fi

  #if config data exists back it up  
  if [ "${CONFIG_MIGRATION}" == "true" ]; then
    for FILE_TO_MIGRATE in ${UPGRADE_FILES}; do
      if [ -f ${OPTDIR}/${FILE_TO_MIGRATE} ]; then
        cp ${OPTDIR}/${FILE_TO_MIGRATE} ${SYNOPKG_PKGDEST}/../${SYNOPKG_PKGNAME}_data_mig/${FILE_TO_MIGRATE}
      fi
    done
    for FOLDER_TO_MIGRATE in ${UPGRADE_FOLDERS}; do
      if [ -d ${OPTDIR}/${FOLDER_TO_MIGRATE} ]; then
        mv ${OPTDIR}/${FOLDER_TO_MIGRATE} ${SYNOPKG_PKGDEST}/../${SYNOPKG_PKGNAME}_data_mig
      fi
    done
  fi

  exit 0
}


postupgrade ()
{
  #use the migrated identity and config data from the previous version
  if [ -f ${SYNOPKG_PKGDEST}/../${SYNOPKG_PKGNAME}_data_mig/conf/my.service.xml ]; then
    for FILE_TO_MIGRATE in ${UPGRADE_FILES}; do
      if [ -f ${SYNOPKG_PKGDEST}/../${SYNOPKG_PKGNAME}_data_mig/${FILE_TO_MIGRATE} ]; then
        mv ${SYNOPKG_PKGDEST}/../${SYNOPKG_PKGNAME}_data_mig/${FILE_TO_MIGRATE} ${OPTDIR}/${FILE_TO_MIGRATE}
      fi
    done
    for FOLDER_TO_MIGRATE in ${UPGRADE_FOLDERS}; do
    if [ -d ${SYNOPKG_PKGDEST}/../${SYNOPKG_PKGNAME}_data_mig/${FOLDER_TO_MIGRATE} ]; then
      mv ${SYNOPKG_PKGDEST}/../${SYNOPKG_PKGNAME}_data_mig/${FOLDER_TO_MIGRATE} ${OPTDIR}
    fi
    done
    rmdir ${SYNOPKG_PKGDEST}/../${SYNOPKG_PKGNAME}_data_mig/conf
    rmdir ${SYNOPKG_PKGDEST}/../${SYNOPKG_PKGNAME}_data_mig
    
    #make CrashPlan log entry
    TIMESTAMP="`date "+%D %I:%M%p"`"
    echo "I ${TIMESTAMP} Synology Package Center updated ${SYNOPKG_PKGNAME} to version ${SYNOPKG_PKGVER}" >> ${LOG_FILE}

    #fix permissions on migrated settings and manifest from legacy package versions which used a daemon user account
    chown -R root:root ${SYNOPKG_PKGDEST}
    if [ -f ${SYNOPKG_PKGDEST}/conf/my.service.xml ]; then
      MANIFEST_FOLDER=`cat ${SYNOPKG_PKGDEST}/conf/my.service.xml | grep "<manifestPath>" | cut -f2 -d'>' | cut -f1 -d'<'`
      chown -R root:root ${MANIFEST_FOLDER}
    fi    

    #tidy up some legacy settings
    sed -i "/^CRON_SYNOPKG_PKGNAME/d" ${SYNOPKG_PKGDEST}/syno_package.vars
    sed -i "/^CRON_SYNOPKG_PKGDEST/d" ${SYNOPKG_PKGDEST}/syno_package.vars
  fi
  
  exit 0
}
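
If the preinst download from Code 42 fails, you can fetch the archive manually and drop it in the ‘public’ shared folder, which the script checks as a fallback. This sketch rebuilds the URL the same way installer.sh does (values shown are examples for the standard CrashPlan package – substitute your package name and the version you need):

```shell
# Rebuild the download URL installer.sh constructs, for a manual fetch.
PKG="CrashPlan"     # or CrashPlanPRO / CrashPlanPROe
VER="3.6.4"
URL="http://download.crashplan.com/installs/linux/install/${PKG}/${PKG}_${VER}_Linux.tgz"
echo "${URL}"
# then: wget "${URL}" && cp "${PKG}_${VER}_Linux.tgz" /volume1/public/
```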
 

start-stop-status.sh

#!/bin/sh

#--------CRASHPLAN start-stop-status script
#--------package maintained at pcloadletter.co.uk


TEMP_FOLDER="`find / -maxdepth 2 -name '@tmp' | head -n 1`"
MANIFEST_FOLDER="/`echo $TEMP_FOLDER | cut -f2 -d'/'`/crashplan" 
ENGINE_CFG="run.conf"
PKG_FOLDER="`dirname $0 | cut -f1-4 -d'/'`"
DNAME="`dirname $0 | cut -f4 -d'/'`"
OPTDIR="${PKG_FOLDER}/target"
PID_FILE="${OPTDIR}/${DNAME}.pid"
DLOG="${OPTDIR}/log/history.log.0"
CFG_PARAM="SRV_JAVA_OPTS"
JAVA_MIN_HEAP=`grep "^${CFG_PARAM}=" "${OPTDIR}/bin/${ENGINE_CFG}" | sed -r "s/^.*-Xms([0-9]+)[Mm] .*$/\1/"` 
SYNO_CPU_ARCH="`uname -m`"
TIMESTAMP="`date "+%D %I:%M%p"`"
FULL_CP="${OPTDIR}/lib/com.backup42.desktop.jar:${OPTDIR}/lang"
source ${OPTDIR}/install.vars
source /etc/profile
source /root/.profile


start_daemon ()
{
  #check persistent variables from syno_package.vars
  USR_MAX_HEAP=0
  if [ -f ${OPTDIR}/syno_package.vars ]; then
    source ${OPTDIR}/syno_package.vars
  fi
  USR_MAX_HEAP=`echo $USR_MAX_HEAP | sed -e "s/[mM]//"`

  #do we need to restore the identity file - has a DSM upgrade scrubbed /var/lib/crashplan?
  if [ ! -e /var/lib/crashplan ]; then
    mkdir /var/lib/crashplan
    [ -e ${OPTDIR}/conf/var-backup/.identity ] && cp ${OPTDIR}/conf/var-backup/.identity /var/lib/crashplan/
  fi

  #fix up some of the binary paths and fix some command syntax for busybox 
  #moved this to start-stop-status.sh from installer.sh because Code42 push updates and these
  #new scripts will need this treatment too
  find ${OPTDIR}/ -name "*.sh" | while IFS="" read -r FILE_TO_EDIT; do
    if [ -e ${FILE_TO_EDIT} ]; then
      #this list of substitutions will probably need expanding as new CrashPlan updates are released
      sed -i "s%^#!/bin/bash%#!/bin/sh%" "${FILE_TO_EDIT}"
      sed -i -r "s%(^\s*)(/bin/ps|ps) [^w][^\|]*\|%\1/bin/ps w \|%" "${FILE_TO_EDIT}"
      sed -i -r "s%\`ps [^w][^\|]*\|%\`ps w \|%" "${FILE_TO_EDIT}"
      sed -i -r "s%^ps [^w][^\|]*\|%ps w \|%" "${FILE_TO_EDIT}"
      sed -i "s/rm -fv/rm -f/" "${FILE_TO_EDIT}"
      sed -i "s/mv -fv/mv -f/" "${FILE_TO_EDIT}"
    fi
  done

  #use this daemon init script rather than the unreliable Code42 stock one which greps the ps output
  sed -i "s%^ENGINE_SCRIPT=.*$%ENGINE_SCRIPT=$0%" ${OPTDIR}/bin/restartLinux.sh

  #any downloaded upgrade script will usually have failed until the above changes are made so we need to
  #find it and start it, if it exists
  UPGRADE_SCRIPT=`find ${OPTDIR}/upgrade -name "upgrade.sh"`
  if [ -n "${UPGRADE_SCRIPT}" ]; then
    rm ${OPTDIR}/*.pid
    SCRIPT_HOME=`dirname $UPGRADE_SCRIPT`
 
    #make CrashPlan log entry
    echo "I ${TIMESTAMP} Synology repairing upgrade in ${SCRIPT_HOME}" >> ${DLOG}

    mv ${SCRIPT_HOME}/upgrade.log ${SCRIPT_HOME}/upgrade.log.old
    cd ${SCRIPT_HOME}
    source upgrade.sh
    mv ${SCRIPT_HOME}/upgrade.sh ${SCRIPT_HOME}/upgrade.sh.old
    exit 0
  fi

  #updates may also overwrite our native binaries
  [ -e ${OPTDIR}/bin/libjtux.so ] && cp -f ${OPTDIR}/bin/libjtux.so ${OPTDIR}/
  cp -f ${OPTDIR}/bin/jna-3.2.5.jar ${OPTDIR}/lib

  #set appropriate Java max heap size
  RAM=$((`free | grep Mem: | sed -e "s/^ *Mem: *\([0-9]*\).*$/\1/"`/1024))
  if [ $RAM -le 128 ]; then
    JAVA_MAX_HEAP=80
  elif [ $RAM -le 256 ]; then
    JAVA_MAX_HEAP=192
  elif [ $RAM -le 512 ]; then
    JAVA_MAX_HEAP=384
  elif [ $RAM -le 1024 ]; then
    JAVA_MAX_HEAP=512
  elif [ $RAM -gt 1024 ]; then
    JAVA_MAX_HEAP=1024
  fi
  if [ $USR_MAX_HEAP -gt $JAVA_MAX_HEAP ]; then
    JAVA_MAX_HEAP=${USR_MAX_HEAP}
  fi   
  if [ $JAVA_MAX_HEAP -lt $JAVA_MIN_HEAP ]; then
    #can't have a max heap lower than min heap (ARM low RAM systems)
    JAVA_MAX_HEAP=$JAVA_MIN_HEAP
  fi
  sed -i -r "s/(^${CFG_PARAM}=.*) -Xmx[0-9]+[mM] (.*$)/\1 -Xmx${JAVA_MAX_HEAP}m \2/" "${OPTDIR}/bin/${ENGINE_CFG}"
  
  #disable the use of the x86-optimized external Fast MD5 library if running on ARM and PPC CPUs
  #seems to be the default behaviour now but that may change again
  if [ "${SYNO_CPU_ARCH}" != "x86_64" ]; then
    grep "^${CFG_PARAM}=.*c42\.native\.md5\.enabled" "${OPTDIR}/bin/${ENGINE_CFG}" > /dev/null \
     || sed -i -r "s/(^${CFG_PARAM}=\".*)\"$/\1 -Dc42.native.md5.enabled=false\"/" "${OPTDIR}/bin/${ENGINE_CFG}"
  fi

  #move the Java temp directory from the default of /tmp
  grep "^${CFG_PARAM}=.*Djava\.io\.tmpdir" "${OPTDIR}/bin/${ENGINE_CFG}" > /dev/null \
   || sed -i -r "s%(^${CFG_PARAM}=\".*)\"$%\1 -Djava.io.tmpdir=${TEMP_FOLDER}\"%" "${OPTDIR}/bin/${ENGINE_CFG}"

  #now edit the XML config file, which only exists after first run
  if [ -f ${OPTDIR}/conf/my.service.xml ]; then

    #allow direct connections from CrashPlan Desktop client on remote systems
    #you must edit the value of serviceHost in conf/ui.properties on the client you connect with
    #users report that this value is sometimes reset so now it's set every service startup 
    sed -i "s/<serviceHost>127\.0\.0\.1<\/serviceHost>/<serviceHost>0\.0\.0\.0<\/serviceHost>/" "${OPTDIR}/conf/my.service.xml"
     
    #this change is made only once in case you want to customize the friends' backup location
    if [ "${MANIFEST_PATH_SET}" != "True" ]; then

      #keep friends' backup data outside the application folder to make accidental deletion less likely 
      sed -i "s%<manifestPath>.*</manifestPath>%<manifestPath>${MANIFEST_FOLDER}/backupArchives/</manifestPath>%" "${OPTDIR}/conf/my.service.xml"
      echo "MANIFEST_PATH_SET=True" >> ${OPTDIR}/syno_package.vars
    fi

    #since CrashPlan version 3.5.3 the value javaMemoryHeapMax also needs setting to match that used in bin/run.conf
    sed -i -r "s%(<javaMemoryHeapMax>)[0-9]+[mM](</javaMemoryHeapMax>)%\1${JAVA_MAX_HEAP}m\2%" "${OPTDIR}/conf/my.service.xml"
  else
    echo "Check the package log to ensure the package has started successfully, then stop and restart the package to allow desktop client connections." > "${SYNOPKG_TEMP_LOGFILE}"
  fi

  #increase the system-wide maximum number of open files from Synology default of 24466
  echo "65536" > /proc/sys/fs/file-max

  #raise the maximum open file count from the Synology default of 1024 - thanks Casper K. for figuring this out
  #http://support.code42.com/Administrator/3.6_And_4.0/Troubleshooting/Too_Many_Open_Files
  ulimit -n 65536

  source ${OPTDIR}/bin/run.conf
  source ${OPTDIR}/install.vars
  cd ${OPTDIR}
  $JAVACOMMON $SRV_JAVA_OPTS -classpath $FULL_CP com.backup42.service.CPService > ${OPTDIR}/log/engine_output.log 2> ${OPTDIR}/log/engine_error.log &
  if [ $! -gt 0 ]; then
    echo $! > $PID_FILE
    renice 19 $!
    if [ -z "${SYNOPKG_PKGDEST}" ]; then
      #script was manually invoked, need this to show status change in Package Center      
      [ -e ${PKG_FOLDER}/enabled ] || touch ${PKG_FOLDER}/enabled
    fi
  else
    echo "${DNAME} failed to start, check ${OPTDIR}/log/engine_error.log" > "${SYNOPKG_TEMP_LOGFILE}"
    echo "${DNAME} failed to start, check ${OPTDIR}/log/engine_error.log" >&2
    exit 1
  fi
}

stop_daemon ()
{
  echo "I ${TIMESTAMP} Stopping ${DNAME}" >> ${DLOG}
  kill `cat ${PID_FILE}`
  wait_for_status 1 20 || kill -9 `cat ${PID_FILE}`
  rm -f ${PID_FILE}
  if [ -z ${SYNOPKG_PKGDEST} ]; then
    #script was manually invoked, need this to show status change in Package Center
    [ -e ${PKG_FOLDER}/enabled ] && rm ${PKG_FOLDER}/enabled
  fi
  #backup identity file in case DSM upgrade removes it
  [ -e ${OPTDIR}/conf/var-backup ] || mkdir ${OPTDIR}/conf/var-backup 
  cp /var/lib/crashplan/.identity ${OPTDIR}/conf/var-backup/
}

daemon_status ()
{
  if [ -f ${PID_FILE} ] && kill -0 `cat ${PID_FILE}` > /dev/null 2>&1; then
    return
  fi
  rm -f ${PID_FILE}
  return 1
}

wait_for_status ()
{
  counter=$2
  while [ ${counter} -gt 0 ]; do
    daemon_status
    [ $? -eq $1 ] && return
    let counter=counter-1
    sleep 1
  done
  return 1
}


case $1 in
  start)
    if daemon_status; then
      echo ${DNAME} is already running with PID `cat ${PID_FILE}`
      exit 0
    else
      echo Starting ${DNAME} ...
      start_daemon
      exit $?
    fi
  ;;

  stop)
    if daemon_status; then
      echo Stopping ${DNAME} ...
      stop_daemon
      exit $?
    else
      echo ${DNAME} is not running
      exit 0
    fi
  ;;

  restart)
    stop_daemon
    start_daemon
    exit $?
  ;;

  status)
    if daemon_status; then
      echo ${DNAME} is running with PID `cat ${PID_FILE}`
      exit 0
    else
      echo ${DNAME} is not running
      exit 1
    fi
  ;;

  log)
    echo "${DLOG}"
    exit 0
  ;;

  *)
    echo "Usage: $0 {start|stop|status|restart|log}" >&2
    exit 1
  ;;

esac
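
The daemon_status check above reduces to a simple, reusable pattern: a PID file exists and the process it names is still alive (kill -0 sends no signal, it only tests reachability). A minimal standalone sketch of the same idea – the is_running helper is mine, not part of the package:

```shell
# PID-file liveness check, the same pattern daemon_status uses:
# the file must exist and kill -0 must reach the recorded process.
is_running() {
  [ -f "$1" ] && kill -0 "$(cat "$1" 2>/dev/null)" 2>/dev/null
}

PIDF="$(mktemp)"
echo $$ > "$PIDF"                 # record our own (certainly live) PID
is_running "$PIDF" && echo "running"
rm -f "$PIDF"
is_running "$PIDF" || echo "not running"
```

This is more reliable than grepping ps output, which is why the package replaces Code42’s stock engine script with it.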
 

Changelog:

  • 0028 Substantial re-write:
    Updated to CrashPlan version 3.6.4
    DSM 5.0 or newer is now required
    libjnidispatch.so taken from Debian JNA 3.2.7 package with dependency on newer libffi.so.6 (included in DSM 5.0)
    jna-3.2.5.jar emptied of irrelevant CPU architecture libs to reduce size
    Increased default max heap size from 512MB to 1GB on systems with more than 1GB RAM
    Intel CPUs no longer need the awkward glibc version-faking shim to enable inotify support (for real-time backup)
    Switched to using root account – no more adding account permissions for backup, package upgrades will no longer break this
    DSM Firewall application definition added
    Tested with DSM Task Scheduler to allow backups between certain times of day only, saving RAM when not in use
    Daemon init script now uses a proper PID file instead of Code42’s unreliable method of using grep on the output of ps
    Daemon init script can be run from the command line
    Removal of bash binary dependency now Code42’s CrashPlanEngine script is no longer used
    Removal of nice binary dependency, using BusyBox’s renice
    Unified ARMv5 and ARMv7 external binary package (armle)
    Added support for Mindspeed Comcerto 2000 CPU (comcerto2k – armhf) in DS414j
    Added support for Intel Atom C2538 (avoton) CPU in DS415+
    Added support to choose which version of CrashPlan PROe client to download, since some servers may still require legacy versions
  • 0027 Fixed open file handle limit for very large backup sets (ulimit fix)
  • 0026 Updated all CrashPlan clients to version 3.6.3, improved handling of Java temp files
  • 0025 glibc version shim no longer used on Intel Synology models running DSM 5.0
  • 0024 Updated to CrashPlan PROe 3.6.1.4 and added support for PowerPC 2010 Synology models running DSM 5.0
  • 0023 Added support for Intel Atom Evansport and Armada XP CPUs in new DSx14 products
  • 0022 Updated all CrashPlan client versions to 3.5.3, compiled native binary dependencies to add support for Armada 370 CPU (DS213j), start-stop-status.sh now updates the new javaMemoryHeapMax value in my.service.xml to the value defined in syno_package.vars
  • 0021 Updated CrashPlan to version 3.5.2
  • 0020 Fixes for DSM 4.2
  • 018 Updated CrashPlan PRO to version 3.4.1
  • 017 Updated CrashPlan and CrashPlan PROe to version 3.4.1, and improved in-app update handling
  • 016 Added support for Freescale QorIQ CPUs in some x13 series Synology models, and the installer script now downloads native binaries separately to reduce repo hosting bandwidth; PowerQUICC PowerPC processors in previous Synology generations with older glibc versions are not supported
  • 015 Added support for easy scheduling via cron – see updated Notes section
  • 014 DSM 4.1 user profile permissions fix
  • 013 implemented update handling for future automatic updates from Code 42, and incremented CrashPlanPRO client to release version 3.2.1
  • 012 incremented CrashPlanPROe client to release version 3.3
  • 011 minor fix to allow a wildcard on the cpio archive name inside the main installer package (to fix CP PROe client since Code 42 Software had amended the cpio file version to 3.2.1.2)
  • 010 minor bug fix relating to daemon home directory path
  • 009 rewrote the scripts to be even easier to maintain and unified as much as possible with my imminent CrashPlan PROe server package, fixed a timezone bug (tightened regex matching), moved the script-amending logic from installer.sh to start-stop-status.sh with it now applying to all .sh scripts each startup so perhaps updates from Code42 might work in future, if wget fails to fetch the installer from Code42 the installer will look for the file in the public shared folder
  • 008 merged the 14 package scripts each (7 for ARM, 7 for Intel) for CP, CP PRO, & CP PROe – 42 scripts in total – down to just two! ARM & Intel are now supported by the same package, Intel synos now have working inotify support (Real-Time Backup) thanks to rwojo’s shim to pass the glibc version check, upgrade process now retains login, cache and log data (no more re-scanning), users can specify a persistent larger max heap size for very large backup sets
  • 007 fixed a bug that broke CrashPlan if the Java folder moved (if you changed version)
  • 006 installation now fails without User Home service enabled, fixed Daylight Saving Time support, automated replacing the ARM libffi.so symlink which is destroyed by DSM upgrades, stopped assuming the primary storage volume is /volume1, reset ownership on /var/lib/crashplan and the Friends backup location after installs and upgrades
  • 005 added warning to restart daemon after 1st run, and improved upgrade process again
  • 004 updated to CrashPlan 3.2.1 and improved package upgrade process, forced binding to 0.0.0.0 each startup
  • 003 fixed ownership of /volume1/crashplan folder
  • 002 updated to CrashPlan 3.2
  • 001 initial public release
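As a footnote to the init-script item above: a PID-file pattern like the one the package adopted can be sketched as below. This is an illustrative sketch only – the file path and the stand-in daemon are hypothetical, not the package's actual start-stop-status.sh.

```shell
#!/bin/sh
# Illustrative PID-file daemon management (hypothetical paths; the
# package's real start-stop-status.sh differs).
PID_FILE="/tmp/demo-daemon.pid"

start_daemon() {
    sleep 300 &                # stand-in for the CrashPlan Java engine
    echo $! > "$PID_FILE"      # record the PID at launch time
}

daemon_status() {
    # Running only if the PID file exists AND that PID is still alive --
    # more reliable than grepping the output of ps for a process name
    [ -f "$PID_FILE" ] && kill -0 "$(cat "$PID_FILE")" 2>/dev/null
}

stop_daemon() {
    daemon_status && kill "$(cat "$PID_FILE")"
    rm -f "$PID_FILE"
}

start_daemon
daemon_status && echo "daemon running"
stop_daemon
daemon_status || echo "daemon stopped"
```

The `kill -0` check sends no signal; it only tests whether the recorded PID still refers to a live process.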
 
 

2,866 thoughts on “CrashPlan packages for Synology NAS”

  1. frillen

    Thank you – it seems to work perfectly.

    I already have 400GB uploaded to my CrashPlan account. All 400GB is located on my DS209 – but I used CrashPlan on a Mac Pro to upload it.

    That way I had to

    1. keep one computer switched on to upload files
    2. worry about the wifi connection on my MacBook. I don’t have the best connection all over the house, so in the kitchen I uploaded at 400kbps, and near my router at 850kbps.

    I say thanks.

    I just adopted my old profile. It seems to work.

    I’ll let you know!

    1. patters Post author

      Great, thanks for the feedback and glad it’s all worked out. I was going nuts yesterday trying to debug my scripts! Just couldn’t accept defeat :)

      1. frillen

        The adopting process seems to work great.

        Here you have a screenshot of the adopt process – it seems to take a day or so, but at the moment it works at a fair speed, even though I only have a 1Mb upload connection.

        The missing folders are the “old” folders. The ones at the top are the new ones.

        http://tinypic.com/r/142v68z/5

        I’ll report back when it is done…

      2. patters Post author

        I guess it must just be re-checksumming all the files, and then maybe re-encrypting them at the CrashPlan Central end with the new key the NAS is using. How’s the NAS CPU use?

  2. Adam

    Thanks for the package – it loaded up nice and easy, and your steps to connect via the desktop client worked perfectly.

    Hoping you may be able to help with something. I have the package running, obviously, and everything is peachy there – my NAS is backing up to the CrashPlan cloud.

    What I have then attempted to do is run CrashPlan on my desktop to back up files to my NAS, i.e. the backup process will work “folder A on desktop” > NAS > Cloud. The desktop client sees the NAS as an available “computer” and I can start the backup, but then the client and the NAS client report the following error:

    “Destination unavailable – backup location is not accessible”

    I have checked that the “crashplan” account has permissions to the backup directory, and I have also tried changing to an alternate backup location, without success.

    What is interesting is that the client says that the connection has been up for 45mins, so it seems to definitely be connecting to the NAS, which makes it really seem permission related.

    Any ideas?

    1. patters Post author

      Sorry, not really sure. The log which is displayed in the package’s Log tab is actually just the activity history. You could also try looking at the two engine logs, which you’ll need an SSH session to see. They are:
      /volume1/@appstore/CrashPlan/log/engine_output.log
      /volume1/@appstore/CrashPlan/log/engine_error.log

      I’m also not sure how the backup engine will cope with having a smaller than standard RAM allocation. I use a 256MB system (192MB for Java), and it could be that a 128MB syno (80MB for Java) is just not enough.

    2. vomesh

      I encountered the same error but was able to fix it. Once I created the “crashplan” share and restarted the CrashPlan app on the NAS, it auto-created the “backupArchives” folder as documented above.

      However, when I tried to use that as the path for sharing I got the error you describe. I simply used the CrashPlan GUI to configure the NAS to point to the root of “/volume1/crashplan”. Once I made this change I was able to connect to the NAS from other computers and begin backups with no errors.

      I will say my fix may not be the most graceful and can probably be done in some better way. I am just happy to have a working solution and hope it helps others. Patters, I greatly thank you for making this package.

  3. Aidan

    Really pleased that you have created a package! Thanks for all your efforts.

    It should make this so much easier to install in future.

  4. Jeppe

    Hi,

    Great to see a package!

    What if the Crashplan client gets an upgrade?

    Will it be enough to just remove and re-install the package?

    Best regards
    Jeppe

    1. patters Post author

      Yes, that should be all that’s required, then you’d use the Adopt Computer option in the client.

      Did any of you guys use the Intel package? I still have no idea if it works…

      1. Marc

        How often does CrashPlan push a client upgrade and how would we know that this has occurred so that we can go through the re-install + adopt process?

        PS – This is awesome! thank you for sharing.

      2. patters Post author

        It seems to have been on 3.0.3 for a while now. I think I read somewhere that they might not instigate auto-updates of headless engine installs since they had a lot of support requests to deal with last time they tried that. When a new one comes out, let me know on here and I’ll have to update the package.

      3. Kais

        Dear,
        The engine did not accept my login, so I reinstalled the CrashPlan package.
        It then updated to version 3.15.2012 and stopped.
        I cannot make it work again.
        Can you help?
        By the way thanks for the sharing and instructions, they are great!

  5. Chuck

    Wow, this may be exactly what is tipping me back to buying a Synology. Since you are running the DS111, I guess I can then assume that this will work on the DS411 (since it is supposed to be the same processor, just with more memory). Does anyone know if it will run on the DS411j, despite the memory being less? I thought about getting the DS411+II but then decided it was too expensive for my taste considering I am already going to buy a mac mini server to replace my hackintosh. I was going to buy an external RAID box, but then it becomes convoluted since the choices for RAID boxes aren’t great… and the mac now has a thunderbolt connector which has not yet seen wide adoption.
    Anyway… thanks for doing this! I personally think Synology and/or Crashplan should just partner on this… it makes for a killer home solution!

    1. patters Post author

      There’s a lot of cool stuff to run on these Synology products now, but RAM is the biggest limiting factor. You often have to choose which packages are active. I’d avoid the J series for that reason. I wish there was a DIMM slot in there, or at the very least some pads we could DIY solder more RAM onto…

  6. spiderv6

    Got it running on my 1511+

    Install instructions were perfect – no issues.

    14.9GB on its way to Crashplan right now!

    Thanks so much!

      1. Rob

        I don’t know if there have been any updates to break the critical links, or if I’m doing something very simple wrong.

        I have a DS1511+ with DSM up to date (4.2) and I added Java SE for Embedded 6 (1.6.0_38-0017) per your package without incident, but when it comes time to install CrashPlan 3.5.2-0021, it tells me that “a shared folder called ‘public’ could not be found…” Obviously I created the shared folder using the Control Panel, using lower-case letters, and set every privilege option to read/write (btw: Java worked with the download saved there).

        I appreciate any help, as this is very frustrating.

        — Rob

      2. Rob

        Thank you for the help and quick reply. I disabled and re-enabled the windows file service from Control Panel…Win/Mac/NFS. Tried it several times with many reboots and no change in CrashPlan install behavior.

        — Rob

      3. Rob

        Unfortunately, I still haven’t had any luck installing your package. I was wondering if you might have any other suggestions. I appreciate any help you could provide. Thank you.

        — Rob

      4. patters Post author

        That looks fine to me. Try running this (which is how my CrashPlan script locates the public folder):
        cat /usr/syno/etc/smb.conf | sed -r '/\/public$/!d;s/^.*path=(\/volume[0-9]{1,4}\/public).*$/\1/'
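To sanity-check that expression, here it is run against a fabricated smb.conf fragment (the real file lives at /usr/syno/etc/smb.conf on the NAS, so the share definition below is only an example):

```shell
# Run the package's public-folder detection sed against a fake smb.conf
# fragment, to show what it should print on a healthy system.
cat > /tmp/smb.conf.test <<'EOF'
[public]
    path=/volume1/public
    comment=System default shared folder
EOF
sed -r '/\/public$/!d;s/^.*path=(\/volume[0-9]{1,4}\/public).*$/\1/' /tmp/smb.conf.test
# On a typical single-volume setup this prints /volume1/public;
# empty output means the share definition is missing or non-standard.
```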

      5. Rob

        http://pastebin.com/dc28rwSS

        It doesn’t appear to do anything. No error report, no response.

        I’ve seen similar behavior trying to install other packages. I’ll get to a certain point in a tutorial, and some steps just don’t seem to have any effect.

        Thanks again for your patience. There’s probably something simple, but I’m too much of a Linux noob to really troubleshoot the problem. I guess I only know enough to get myself in trouble. So far, my failure record has included: 1) manual install of CrashPlan, 2) compile/install of TVHeadend 3.4, 3) compile/install of HDHomeRun drivers, and 4) trying to set up an SQL server to host a common XBMC library database for several clients.

        Take care.

      6. patters Post author

        Hmm. There’s definitely something different about your NAS – can you try that same command but swap sed for /bin/sed

      7. Rob

        Ok, So… I tried that, and upon executing the bootstrap script, it returned: “cd: line 5: can’t cd to bootstrap”

        http://pastebin.com/JhAyvyay

        BTW: My /root/.profile file has the “PATH=…” and “export PATH” lines commented out. Could that be a reason it can’t execute system commands?

        Thanks

      8. patters Post author

        Ok – uncomment the ‘PATH=’ and ‘export PATH’ lines at the top. Restart your NAS, then uninstall and reinstall the Java package. That should fix the missing timezone part at the bottom (‘TZ=’). Then you should be ok.

      9. Rob

        Success!

        Here’s what worked…
        – I un-commented the “PATH=” and “export PATH” lines in /root/.profile and /etc/profile.
        — That allowed your search command to work in ssh, but the install package still couldn’t find the public folder.
        — It also allowed me to uninstall the Bootstrap which I couldn’t previously do.
        – I then uninstalled Java SE, but when I went to reinstall, it told me that it was still installed at /volume1/@appstore/java6. Of course that folder didn’t exist, but I found that this path was being called out in the two profile files.
        — So, I commented out the Java lines in those files, and got Java to reinstall.
        – Lastly, your package worked perfectly, as designed, at last.

        Thanks again for all your help. You’ve been incredibly patient.

  7. Diaoul

    @patters: Do you think you can make your packages available in spksrc? https://github.com/SynoCommunity/spksrc

    I understand that what you’re doing is mostly packaging and not cross-compilation, but spksrc aims to provide a unified way to build SPKs, as well as versioning for SPK source code.

    Also, how have you managed all the Java licensing reading? Does the licence allow distributing Java SPKs?

    1. patters Post author

      Hi, I’ll take a look at that but it looks like a significant time investment to re-do everything so I don’t think it’s likely I’m afraid.
      For Java, there shouldn’t be any licensing issue because I’m not distributing a single byte of the JRE, only scripts to install it. The user has to provide it, and to do so they must independently agree to the Oracle licence themselves.

      1. Diaoul

        Ok, never mind. I think I’ll just look in your code and see what I can grab from there that would fit in spksrc (and give credit, of course).

        In case you make other SPKs, please consider using spksrc for that. It is a very easy to use framework.
        For example, to add a package: https://github.com/SynoCommunity/spksrc/commit/198120d9f433ebe9482056a5160caaadd5a4d099 (PLIST being generated automatically, but it still requires some small manual changes)
        To compile it, just "cd cross/lame && make ARCH=88f6281" (everything is handled automatically, from toolchain download to built binary)

        And a sample commit that adds an SPK: https://github.com/SynoCommunity/spksrc/commit/74febca4a171ea772f4823df518682fe768b7500
        "cd spk/mpd && make ARCH=88f6281" automatically builds an installable SPK

        Don’t hesitate to contact me (email or GitHub) in case you wish to contribute :)

      2. patters Post author

        Thanks. Can I ask that you please don’t make alternate versions of the packages I’ve already done, unless of course I give up maintaining them in future, as that would get pretty confusing for people. As you can see, I’ve tended to focus on Java apps. It makes sense to keep these on the same repo as the Java spks themselves.

      3. frillen

        Agreed – why change anything? This works. It can’t be more simple: you add patters’ URL to DSM and then the installation is just one click away.

        So nice :)

        And thanks

  8. eff_cee

    Thanks for the guide – I’ve followed it and CrashPlan is running OK on my DS110j. My problem is I cannot connect to it using the desktop client on another machine. From your notes you should simply be able to change the IP address in the ui.config file – is this all you did? Did you have to set up the ssh tunnel too?

    When I do that and start the desktop client, it never manages to connect to the service on the syno. I’ve verified that the service on the syno is listening on the default port.

    Anything I’ve missed?

      1. Fraser

        Hi patters, yes I did and tried rebooting too. I’m still confused re the ssh tunneling thing, do I need to do that to connect the desktop client to syno service to be able to configure it ? Or should I simply be able to change the IP address as per your guide ?

      2. patters Post author

        The tunnelling thing is unnecessarily complicated if you’re on the same network as the syno. You should simply be able to make the edit to that ui.properties file on the client. It is possible though, that the 80MB RAM allocation that is necessary on the J series NAS is just not enough (CrashPlan is designed to use 512MB). Perhaps someone else can comment if they have it running on a J series.

      3. Fraser

        Well, got it working finally, but I still had to use the ssh tunnelling method.

        Without the ssh tunnel, this is what happens when I test with telnet from a Win 7 64-bit PC (where the desktop client is installed):

        telnet ipadd-of-syno 4242 gives me “connection refused”

        With telnet ipadd-of-syno 4243 I do get a connection, but with loads of strange chars, of which I can make out only DHPublicKeyMessage

        Would be nice to get it working without the ssh tunnel :-)
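For anyone else stuck at this point, the SSH tunnel method discussed throughout this thread generally takes the following shape. The NAS address, account, and local port below are placeholders, so adjust them to your own setup; 4243 is the port the headless CrashPlan engine listens on.

```sh
# On the desktop, forward a local port to the engine's port on the NAS
# (replace root@192.168.1.10 with your own account and NAS address):
ssh -L 4200:localhost:4243 root@192.168.1.10

# Then point the desktop client at the tunnel by editing
# conf/ui.properties in its install folder:
#   serviceHost=127.0.0.1
#   servicePort=4200
# Start the desktop client while the SSH session remains open.
```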

    1. patters Post author

      Yep, that was all taken care of in my Java package right from the beginning. That’s why you have to fetch the syno toolchain for it.

  9. Guillou

    My Synology DS211j actually runs CrashPlan.
    About 60MB of RAM and 70% CPU load for the whole NAS.
    In CrashPlan, CPU usage is set to 70% when the NAS is idle, otherwise 10%.

  10. Fragglesnot

    Patters,

    If I wanted to uninstall the crashplan that I have on my DS (installed manually via wiki instructions), and use your package instead – do you know how I would do that? I followed the steps not really knowing Linux or what I was doing, and I’m not sure how to remove it cleanly.

    Thanks if you can provide any help.

    1. patters

      Sure. Here’s the recipe for that:

      cd /tmp
      #we need to download the installer bundle to get the uninstall script
      DL_FOLDER=http://download.crashplan.com/installs/linux/install/CrashPlan
      DL_FILE=CrashPlan_3.0.3_Linux.tgz
      wget "${DL_FOLDER}/${DL_FILE}"
      gunzip "${DL_FILE}"
      tar xvf CrashPlan_3.0.3_Linux.tar
      rm CrashPlan_3.0.3_Linux.tar
      cd CrashPlan-install
      #move any friends' backup data to where the package will find it
      [ -d /volume1/crashplan ] || mkdir /volume1/crashplan
      [ -d /opt/crashplan/backupArchives ] && mv /opt/crashplan/backupArchives /volume1/crashplan
      #fix up the uninstall script so it actually runs
      sed -i "s%^#!/bin/bash%#!/opt/bin/bash%" uninstall.sh
      ./uninstall.sh -i /opt/crashplan
      cd /tmp
      rm -r CrashPlan-install
      [ -e /usr/syno/etc/rc.d/S99crashplan.sh ] && rm /usr/syno/etc/rc.d/S99crashplan.sh
      [ -e /etc/init.d/crashplan ] && rm /etc/init.d/crashplan
      1. Fragglesnot

        Patters,

        That worked a treat. Thanks for walking me through that. The adoption process went seamless as well! All I had to do was chown my old friend archives.

        Thanks again for the hard work.

      2. Ray

        Thanks for all the hard work! When I try to run this script to start clean, I get to ./uninstall.sh -i /opt/crashplan and I get the following error:
        who: invalid option -- r
        BusyBox v1.16.1 (2012-03-07 15:47:21 CST) multi-call binary.

        Usage: who [-a]

        Show who is logged on

        Options:
        -a show all

        ERROR: cpio not found and is required for uninstall. Exiting

        Any ideas?

      3. patters Post author

        Run this first, then try again:
        export PATH=/opt/bin:/opt/sbin:$PATH

        There are some other steps required though. Search this page from the top for the word “recipe”. I wish the author of that Synology wiki would (a) allow people to contribute, (b) demonstrate how to undo the steps, and (c) mention my package so people can make an informed choice before they set about making complex changes to their syno!

      4. Flavio

        Hi patters,
        I have uninstalled my CrashPlan via your recipe, and it works perfectly!
        I just faced an issue moving the friends’ backup data.
        All my computers back up to the Synology, and it had been like that for more than 6 months already, so you can imagine the amount of data I had in /opt/crashplan/backupArchives.
        It was huge!
        And the mv command did not work, since it first copies all the data to the destination and only then deletes the source. In my case you can imagine what happened… I did not have enough space to duplicate all the data.
        So I started hunting for a more “clever” way to do it, and I found out that the rsync command is much better for that.
        So I used the following to move the data, and I suggest everyone do the same, since it not only moves and deletes, but also keeps all the file permissions and properties (including dates):

        rsync -av --remove-source-files --ignore-existing --stats --progress /opt/crashplan/backupArchives/ /volume1/crashplan/backupArchives

        Thanks,
        Flavio Endo

      5. Erik F.

        So I’ve just copied and pasted the exact commands here, minus the # comments, and I’m getting -ash: ./uninstall.sh: not found

        any ideas?

      6. patters Post author

        That was written a long time ago. I don’t really know how you may have installed CrashPlan manually, or even which version. I would ask the question on whatever guide you followed.

  11. Razvi

    On my DS209 the serviceHost keeps resetting to 127.0.0.1. Do you have any idea why?
    I use CrashPlan on an x86 Gentoo Linux server and it works. On the DS209 it seems like CrashPlan is overwriting my.service.xml when it starts.

    1. patters Post author

      Did you copy an existing config file over or anything like that? If you did you’d have to remember to reset ownership for the crashplan user:
      chown -R crashplan /volume1/@appstore/CrashPlan

      Take a look at the engine log files I mentioned in the last point of the Notes section of the post, they’re normally pretty explicit if there’s a problem.

  12. Richard

    Trying to install Crashplan on DS212. I have downloaded and installed the package and it tells me that the service is running. Also amended the ui.config file on the client to point to the IP address of the NAS.

    When I try to connect, it tells me “Unable to connect to the backup engine, retry?”

    I have stopped and restarted the service and even rebooted the NAS, but to no avail. Has anyone else managed to get it to work on this model?

    Any help or advice would be greatly appreciated.

    1. patters Post author

      Can you look at the file /volume1/@appstore/CrashPlan/conf/my.service.xml and check the value inside the serviceHost tag? It should be 0.0.0.0, but someone else on here reported that under certain circumstances it’s being reset to 127.0.0.1 (which would prevent you connecting from another computer). You’ll need to connect via SSH to do this – I recommend copying the file to your public share then viewing it on your computer if you’re unfamiliar with Linux:
      cp /volume1/@appstore/CrashPlan/conf/my.service.xml /volume1/public
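If you're comfortable in the terminal anyway, grep can pull the value out directly. A sketch against a fabricated fragment of my.service.xml; on the NAS you would point grep at the real /volume1/@appstore/CrashPlan/conf/my.service.xml instead:

```shell
# Extract the serviceHost element directly (the XML fragment below is
# fabricated for demonstration; use the real my.service.xml on the NAS).
cat > /tmp/my.service.xml.test <<'EOF'
<serviceUIConfig>
  <serviceHost>0.0.0.0</serviceHost>
  <servicePort>4243</servicePort>
</serviceUIConfig>
EOF
grep -o '<serviceHost>[^<]*</serviceHost>' /tmp/my.service.xml.test
# prints: <serviceHost>0.0.0.0</serviceHost>
```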

      1. Richard

        Thank you for the reply – you were correct in that serviceHost was set to 127.0.0.1. I set it to 0.0.0.0 and copied the file back, but still no joy I’m afraid. There are no messages in the service log, apart from the one that says the service was started.

        Any more ideas?

      2. patters Post author

        Take another look at the xml – CrashPlan may have set it back to how it was. Failing that, try connecting using the SSH tunnel method.

      3. Richard

        Rechecked the xml file and it was still OK, so tried the SSH tunnel method and bingo – connected perfectly!!!

        Music and pics on their way to CrashPlan as I write this!!!

        Thank you so much for your help. It is very much appreciated!!!!

      4. Richard

        New problem – after rebooting the NAS or stopping and restarting the service, the backup does not restart. It seems to lose all of the configuration.

        When connecting the client, I am presented with the setup screen to either enter my name and email or login with my Crashplan account ID. After doing so, all the settings have been reset and I have to “adopt” my NAS in order for the backup to start.

        Is there a way around this? Sorry, but I am a complete Linux novice, so no idea where to start!

      5. Richard

        Solved!!!

        After reading the later posts, I tried enabling the User Home service, as suggested. After that, I restarted the CrashPlan service and reconfigured the backup in the client program.

        Subsequently, restarting the service or NAS automatically starts the backup again!!

  13. Román

    Another one with problems with the connection from a remote client…

    I changed the my.service.xml manually to 0.0.0.0, and every time I start the service or restart the Synology it is changed back to 127.0.0.1 again. If I go to /volume1/@appstore/CrashPlan/bin and execute ./CrashPlanEngine start, it works!! I think it is a problem with the package, not with the binaries. I can see this in engine_output.log:

    [02.04.12 15:12:53.330 INFO main com.backup42.service.CPService.main ] *************************************************************
    [02.04.12 15:12:53.330 INFO main com.backup42.service.CPService.main ] *************************************************************
    [02.04.12 15:12:53.331 INFO main com.backup42.service.CPService.main ] STARTED CrashPlanService
    [02.04.12 15:12:53.348 INFO main com.backup42.service.CPService.main ] CPVERSION = 3.0.3 - 1300223300091 (2011-03-15T21:08:20:091+0000)
    [02.04.12 15:12:53.348 INFO main com.backup42.service.CPService.main ] LOCALE = English
    [02.04.12 15:12:53.351 INFO main com.backup42.service.CPService.main ] ARGS = [ ]
    [02.04.12 15:12:53.351 INFO main com.backup42.service.CPService.main ] *************************************************************
    [02.04.12 15:12:53.638 INFO main com.backup42.service.CPService.start ] Adding shutdown hook.
    [02.04.12 15:12:53.642 INFO main om.backup42.service.CPService.copyCustom] BEGIN Copy Custom, waitForCustom=false
    [02.04.12 15:12:53.642 INFO main om.backup42.service.CPService.copyCustom] NOT waiting for custom skin to appear in custom or .Custom
    [02.04.12 15:12:53.643 INFO main om.backup42.service.CPService.copyCustom] No custom skin to copy from null
    [02.04.12 15:12:53.643 INFO main om.backup42.service.CPService.copyCustom] END Copy Custom
    [02.04.12 15:12:53.671 INFO main om.backup42.service.CPService.loadConfig] BEGIN Loading Configuration
    [02.04.12 15:12:53.838 INFO main ackup42.common.config.ServiceConfig.load] Loading from default: /volume1/@appstore/CrashPlan/conf/default.service.xml
    [02.04.12 15:12:54.384 INFO main ackup42.common.config.ServiceConfig.load] Loading from my xml file=conf/my.service.xml
    [02.04.12 15:12:54.602 INFO main ackup42.common.config.ServiceConfig.load] Loading ServiceConfig, newInstall=false, version=2, configDateMs=1328361186324, installVersion=1300223300091
    [02.04.12 15:12:54.627 INFO main ackup42.common.config.ServiceConfig.load] OS = Linux
    [02.04.12 15:12:55.007 INFO main om.backup42.service.CPService.loadConfig] AuthorityLocation@28591825[ location=central.crashplan.com:443, hideAddress=false ]
    [02.04.12 15:12:55.008 INFO main om.backup42.service.CPService.loadConfig] END Loading Configuration
    DELETED file=conf/service.model
    DELETED file=conf/service.login
    FAILED to delete file=/volume1/@appstore/CrashPlan/confimport_key
    FAILED to delete file=conf/service.copier
    CACHE DELETED cacheDir=/volume1/@appstore/CrashPlan/cache
    [02.04.12 15:12:56.740 INFO main om.backup42.service.CPService.loadConfig] BEGIN Loading Configuration
    [02.04.12 15:12:56.741 INFO main ice.CpsFoldersDeprecated.moveConfigFiles] CpsFoldersMigrate is not necessary. /volume1/@appstore/CrashPlan/conf/my.service.xml file does not exists.
    [02.04.12 15:12:56.742 INFO main ackup42.common.config.ServiceConfig.load] Loading from default: /volume1/@appstore/CrashPlan/conf/default.service.xml
    [02.04.12 15:12:56.819 INFO main ackup42.common.config.ServiceConfig.load] Loading ServiceConfig, newInstall=true, version=2, configDateMs=null, installVersion=1300223300091
    [02.04.12 15:12:56.820 INFO main ackup42.common.config.ServiceConfig.load] OS = Linux
    [02.04.12 15:12:56.820 INFO main ackup42.common.config.ServiceConfig.load] Initializing backup paths last modified to now. lastModified=1
    [02.04.12 15:12:56.939 INFO main om.backup42.service.CPService.loadConfig] AuthorityLocation@19658898[ location=central.crashplan.com:443, hideAddress=false ]
    [02.04.12 15:12:57.062 INFO main om.backup42.service.CPService.loadConfig] END Loading Configuration
    jtux Loaded.

    It loads the files correctly, then it checks for the config files again and does a new install!!

    Someone knows how to fix it?

    Thank you for your work!

    1. patters Post author

      This was the same issue I was getting when I tried to get in place upgrading of the package working. Every time it would claim the my.service.xml file didn’t exist and overwrite it with a new default one.
      Can you try running these two commands and check the xml file to see if the changes happened as expected?

      sed -i "s/127\.0\.0\.1/0\.0\.0\.0/" /volume1/@appstore/CrashPlan/conf/my.service.xml
      sed -i "s%.*%/volume1/crashplan/backupArchives/%" /volume1/@appstore/CrashPlan/conf/my.service.xml

      Did the folder /volume1/crashplan/backupArchives get created ok? Maybe that’s the issue.
      Or maybe your system has a different version of sed that’s corrupting the XML somehow.

      1. Román

        The folder is created:

        ls -ld /volume1/crashplan/backupArchives
        drwxr-xr-x 2 root root 4096 Feb 4 12:58 /volume1/crashplan/backupArchives

        When I executed the sed commands you sent me, they corrupted my config file!! :(

        cat /volume1/@appstore/CrashPlan/conf/my.service.xml
        /volume1/crashplan/backupArchives/
        /volume1/crashplan/backupArchives/
        /volume1/crashplan/backupArchives/

        I’m on version 3.2 build 1955, which is the latest version available from the website. How can I correct this??

        Thank you very much for your help!

      2. Román

        Perfect!! I have the solution. I’ve been parsing the start and stop package scripts, and I have found an error. When you create the user crashplan in the /etc/passwd file, you use the csh shell and a home directory that does not exist, so when you execute CrashPlan.sh start and it runs su – crashplan -c “…”, it fails and the script doesn’t return 0.

        This is the solution I’ve found:
        1.- Edit the /etc/passwd file and change the crash plan user line to this:
        crashplan:x:1031:100:CrashPlan daemon user:/volume1/@appstore/CrashPlan:/bin/sh

        2.- Execute
        rm /volume1/@appstore/CrashPlan/syno-marker.txt
        in order to change the my.service.xml file on the next run.

        Stop and start the package from the web interface.

        Working flawlessly! Thank you very much for this package. Great work!

      3. patters Post author

        The script launches su with “-s /bin/sh” which overrides the shell setting in /etc/passwd anyway, so there’s clearly something quite different about your system (is it bootstrapped, and maybe you’ve got alternate versions of some of the core binaries on there?). Can you try running those sed commands again, but using /bin/sed? Is your su binary different, perhaps? Maybe you need to use /bin/su or whatever the default syno one is. I know you have a workaround, but I’d like to discover the cause so the next version won’t have this issue.

      4. Román

        Ok, I will give you more details about the entire process. I use a DS1010+ with Synology DSM 3.2-1955. It is not bootstrapped, and the only Synology package installed is VPN Server. I downloaded the DSM 3.2 toolchain needed for your Java package and renamed it in the public folder to the name that your script expects. Java installed correctly.
        Then I installed your CrashPlan package and it installed correctly.

        To make it work, I ran "cat /usr/local/etc/rc.d/CrashPlan.sh" and then executed the commands inside it one by one.

        sed -i "s/127\.0\.0\.1/0\.0\.0\.0/" ${SYNOPKG_PKGDEST}/conf/my.service.xml and sed -i "s%.*%/volume1/crashplan/backupArchives/%" ${SYNOPKG_PKGDEST}/conf/my.service.xml
        executed correctly and changed the file as expected, so it is not a problem with sed.
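        Those two edits can be rehearsed safely on a scratch file before touching the live config. A sketch — the XML element names here (serviceHost, manifestPath) are assumptions based on this thread, and the second sed is a guessed reconstruction of the one the comment form mangled above:

        ```shell
        # Build a scratch file resembling the relevant bits of my.service.xml
        CONF=$(mktemp)
        printf '%s\n' \
            '<serviceHost>127.0.0.1</serviceHost>' \
            '<manifestPath>/var/lib/crashplan</manifestPath>' > "$CONF"

        # 1) bind the management service to all interfaces, not just loopback
        sed -i 's/127\.0\.0\.1/0\.0\.0\.0/' "$CONF"

        # 2) point the backup archive manifest at the shared folder
        #    (reconstructed: the original sed's XML tags were eaten by WordPress)
        sed -i 's%<manifestPath>.*</manifestPath>%<manifestPath>/volume1/crashplan/backupArchives/</manifestPath>%' "$CONF"
        ```

        Once the output looks right, the same seds can be pointed at ${SYNOPKG_PKGDEST}/conf/my.service.xml.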

        The problem I found is that when I executed su - crashplan -s /bin/sh -c "${SYNOPKG_PKGDEST}/bin/CrashPlanEngine start", the command failed saying that the home directory of the user does not exist, and the service didn’t start.

        So I changed the crashplan user’s home directory in /etc/passwd to a directory that exists, and after that everything is working flawlessly. I restarted the service within DSM, rebooted the NAS, and I can connect to it every time without problems.

        I hope this helps you!

  14. Román

    Wait! I copied the text from the file, and the second sed gets modified when I submit the form. That is why it didn’t work when you sent it to me the first time! The manifestPath text doesn’t appear.

    Reply
    1. patters Post author

      Ok, well spotted – so WordPress is interpreting some parts of the command as a tag when we post them on here. On to the real problem…
      Do you have the ‘User Homes’ service enabled in DSM? Not sure what it’s called in other languages, but it’s the thing that gives each user their own home directory in /volume1/homes. It is required, but I forgot to mention that in the instructions. The error you posted mentions a home directory problem – not a shell problem. So perhaps that’s the issue.

      Reply
      1. Román

        I don’t think that this was the problem, because the original home directory of the user was /var/system/…

      2. patters Post author

        Could you try removing the package, enabling User Homes and re-installing, then try again? I’m not at home, and I have several other packages which depend on User Homes, so it’s difficult for me to test that.

  15. Marc

    I have it installed and the service is running on my DS212J. I installed the client on a computer and modified the servicehost address to match my DS212J. However, when I run the client I get “Unable to connect to the backup engine. Retry?”

    Any feedback as to what I should do?

    Thank you again!

    Reply
    1. patters Post author

      Can you remove the package, enable the User Homes service in the User Control Panel, then re-install the package – and see if that fixes it?

      Reply
      1. Román

        Tested and it works for me too!! Thank you very much for your support!

        I have another question. How can we give support to non US ASCII characters??

      2. patters Post author

        Great! Thanks, I’ve updated the installation instructions. Do non-US folders not get seen properly? They should do. My Java package installs the Linux locale support that Java requires (which is why you need to supply the toolchain). What does your syno display if you type locale?

      3. Marc

        If I go to Control Panel > User I see a button that says “User Home”. If I click it I see a window that says Enable User Home Service (not Homes). Is this what you want me to check? If so, it was already checked.

        Not sure if this is helpful but I do see a folder named homes and in it is a folder named crashplan.

      4. patters Post author

        Yes that was what I meant. Hmm. So can you connect the CrashPlan client ok if you use the SSH method? What does the CrashPlan log tab show in Package Center – can you see that the engine started?

      5. Marc

        I tried the SSH method and when I attempt to telnet localhost 4200 I get “telnet: can’t connect to remote host (127.0.0.1): Connection refused”.

        Am I wrong to guess that this is a firewall issue or something is wrong with my ui.properties file?

  16. Román

    The locale command does not exist after the installation. I’ve followed the instructions on the Synology wiki:

    cd x86_64-linux-gnu/x86_64-linux-gnu
    cp sys-root/usr/bin/locale /volume1/@appstore/java6/jre/bin
    cp sys-root/usr/bin/localedef /volume1/@appstore/java6/jre/bin
    cp -r sys-root/usr/share/i18n /usr/share
    mkdir /usr/lib/locale
    localedef -f UTF-8 -i en_US en_US.UTF-8
    echo "LANG=en_US.UTF-8" >> /etc/profile
    echo "LC_ALL=en_US.UTF-8" >> /etc/profile
    echo "export LANG LC_ALL" >> /etc/profile

    I changed the wiki’s /opt/bin directory to the Java binaries directory and copied the locale binaries there, because I don’t have my syno bootstrapped. Maybe this is the problem? Must /opt exist after running your package?

    I installed the Java 6 package because my Synology is x86_64, and you said version 7 does not work with Intel-based NAS units – is this correct?

    Reply
    1. patters Post author

      I think the problem is that you used the DSM 3.2 toolchain and renamed it, not the 3.1 one as requested. If you use the 3.1 toolchain, everything is done for you by the Java package. I would suggest you back out the manual changes you made as per the wiki before you do it though.

      Reply
      1. patters Post author

        Yes. It’s only extracting the two required locale binaries and the various locale definitions. Notice that the version numbers of glibc and gcc are the same for the 3.1 and 3.2 toolchains in any case. However, as you have demonstrated, the locations of those files inside the archives must have changed. I have updated the Java installation notes to make that clearer.

      2. Román

        Ok! Done it! Working well!

        Maybe the problem is that with the 3.1 toolchain, 32-bit and 64-bit are included in the same tarball, whereas with the 3.2 toolchain they are separated and I downloaded the 64-bit one. Are you always using the 32-bit one for the installation?

        Another question: when Synology releases a new firmware and I install it, will it be necessary to install everything again? Will the locale settings remain configured?

        Thank you again for your great help!

  17. Fragglesnot

    patters,

    I had an issue where the servicehost in the my.service.xml file got reset from 0.0.0.0 to 127.0.0.1 on its own – as previous users have mentioned. I had the User Homes service enabled all the while. Simply changing it back to 0.0.0.0 and restarting CrashPlan allowed remote management again – but do you have any idea why this is getting changed? I’ve read through this thread, and I know it has come up – but I’m not clear on what causes it and how to prevent it. (Not positive, but it seems like a DS reboot is what causes it?) I could do more testing if you need more data.

    Thanks

    Reply
  18. Chuck

    After nearly ordering a Synology, then changing my mind… this blog changed it back and I ordered the DS411 (Marvell CPU, 512MB RAM). I put two 1TB drives from my hackintosh in it and two more 2TB drives I bought the day the NAS arrived. Since I bought this new based on the blog, it is a fresh system. I updated to the -1955 version of the system before starting anything. Here are a few things I noticed along the way:
    Make sure, for ARM, to select this Java package (I grabbed the wrong one and the script scolded me):
    ejre-7u2-fcs-b13-linux-arm-sflt-headless-22_nov_2011.tar.gz

    I installed the Java package per patters’ blog, then installed CrashPlan. I started it using the checkbox, then stopped and restarted it. Before I had a chance to see it not work, I saw the update posted about turning on User Homes, so I removed CrashPlan, turned on User Homes, and reinstalled CrashPlan, but this time turned it on after installation… stopped the package… then restarted the package.

    I then used a MacBook to test CrashPlan using a free 30-day trial account and 600MB of test files. For those who don’t know, on a Mac the ui.properties file is in the conf folder when you right-click on the CrashPlan application and select “Show Package Contents”. The conf folder is within the subfolders under java, and you have to select “unlock” when you start to change the line as outlined by patters.
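    For reference, the edit in question is the serviceHost line in the client’s conf/ui.properties; it normally ships commented out, defaulting to loopback. The IP address below is an example placeholder – use your own NAS’s address:

    ```
    # conf/ui.properties on the client machine:
    # uncomment/add serviceHost, pointing at the NAS instead of the default 127.0.0.1
    serviceHost=192.168.1.10
    ```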

    Everything worked fine so I decided to use my real CrashPlan account. This actually turned out to be a royal pain. Every time I tried to log in using my paid account with the DS411 set up in ui.properties, the system would completely ignore that I had deleted CrashPlan and its application support from the computer, and default to my free trial account. It turns out that I needed to go into the CrashPlan desktop software, select Destinations -> Computers, then click on the DiskStation and tell it to remove the DiskStation from that account. Once I did that, I could get it to remember the DiskStation. It turns out that CrashPlan (online) remembers the UUID of each computer and will automatically see the DiskStation. I imagine that this may also be the case for people who are trying to use another computer in certain circumstances, though the adoption process should take care of this. In any case, it is something to consider when troubleshooting… just be careful, though, since doing this can delete your entire archive in the cloud if you delete the wrong thing!

    I am now watching it analyze the disk station and hoping it remaps the files I have transferred to it to the files in the cloud.

    I can also confirm that I had restarted the DS411 and it maintained the setup.

    Some of this may be obvious to people doing this, but if it helps someone else, then it is worth the write up!

    At this point, I think that Synology should send patters a thank you as well! This tipped me back to buying the Synology! Thanks for this site!

    I do have one small question… I am now trying to decide what to do with backing up computers. I thought about Time Machine, but that seems somewhat challenged when it comes to crashplan since it results in far too much uploading unnecessarily. Has anyone thought of alternatives? I also use carbon copy cloner so that would seem like a viable alternative to Time Machine.

    Reply
    1. Chuck

      PS: as patters noted… when downloading the files use Firefox on a Mac and avoid Safari. Safari auto expanded the files and I had to start over…

      Reply
  19. Chuck

    Okay, so it seems to be working I think. At first I thought it analyzed my 1tb of information and then decided to upload everything again, but after looking at the screendump above, I believe it is working. The reason I say this is that it would appear to be uploading files, but when I look at the speed it is 54 Mbps. I only have a 5Mbps connection and remember it took several days to upload what I had before… this says it will be complete in 1.3 days. Does this sound about right to everyone?
    If this is the case, should I just leave the old things selected once it is done or will everything be remapped and the “missings” will just go away? It would be nice if there was a way to clean it up once done since I can imagine I will wonder what the missings are a few years from now!

    Reply
    1. patters Post author

      I don’t know about this scenario in particular, but I’m guessing the CrashPlan engine will need to checksum all of your 1TB worth of local files, which I would assume will take a fair amount of time.

      Reply
  20. DS411j

    Hello,

    I am trying to install this on my Synology DS411j. I have successfully installed Java 7, but when I try to install CrashPlan I get the following error: “Java is not installed or not properly configured. The Java binary could not be located”. The Java package shows up in the “installed” packages (its status is “stopped”, but I understand that is normal).

    Any idea?

    Reply
      1. DS411j

        Thanks a lot! It seems to work now (although I had also downloaded Java with Firefox the first time).

        Adopting my previous profile now….

  21. schmeel

    I had a problem similar to others discussed here – every time I restarted CrashPlan on my Synology DS411, it came up as though it was a new computer. I ended up with 4 different computers on my CrashPlan Family account all with the same name. Only one (the original) had any data backed up.

    This was because User Homes was not enabled – CrashPlan could not save my GUID (computer identity) to /var/services/homes/crashplan/.crashplan/.identity. Each time it started, it thought it had to create a new GUID.

    Note – if you started off with User Homes disabled, you don’t need to reinstall CrashPlan; I was able to just do this:

    Enable User Homes (DSM Control Panel – User – “User Homes” button)
    Restart CrashPlan (restart the NAS, or use the CrashPlan commandline interface (http://support.crashplan.com/doku.php/client/manual_commands))
    Start the desktop client
    Log in and adopt your previous profile
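    A quick way to check for that file before restarting – a sketch, with the identity path taken from the comment above (the function takes an alternate home directory as an argument so it can be tried anywhere):

    ```shell
    # has_identity: succeed if CrashPlan's GUID file exists, i.e. the engine
    # will keep its identity across a restart instead of registering as a
    # brand new computer.
    has_identity() {
        [ -f "${1:-/var/services/homes/crashplan}/.crashplan/.identity" ]
    }

    # usage: has_identity && echo "GUID will persist across restarts"
    ```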

    MANY THANKS for the excellent package, and I commend you for following the comments here and providing support and updates. It’s more than many commercial software vendors can offer.

    Reply
    1. patters Post author

      Enabling user homes is in the instructions now so hopefully that’ll reduce the problem. Good spot on noticing that .identity file. I guess I will be able to make the package upgradable after all (for when CrashPlan release a new version).

      Reply
  22. Ed

    Hi (again!),

    Another fantastic piece of work patters – installed no problem and currently backing up 120GB+ to crashplan.

    Considering I’m running this on a DS211j it is really managing (the limited) resources very well, much better than I thought it would!

    p.s. see my message on your java package post.

    Cheers,

    //Ed

    Reply
    1. patters Post author

      Thanks for the feedback. Donate button is at the bottom of the right-hand column.
      Since I enabled logging on the package repo at the end of Jan, I’ve seen just under 4,000 synos with unique IP addresses connecting from all over the world!

      Reply
      1. Ed

        You’re welcome. That’s an amazing number of downloads in such a short time; I can certainly say you have saved many people many hours!

        I hope you keep the passion to share your creations. Enjoy the drink! :)

        p.s. My poor 211j is not truly man enough for the job; I think I’ll have to get a beefier syno sometime soon! (13 days for 55GB!) :)

      2. patters Post author

        Thanks frillen! I didn’t want to make the button too in-your-face, and it looks a bit strange at the top of the page. I’ll experiment…

      3. Ed

        Apologies for spamming your blog!

        I just realised the incredibly slow upload speeds were due to the CrashPlan client throttling upload traffic not the DS211j.

        I can’t believe I missed it, alas if anyone has the same problem, in the CrashPlan client, go to Settings -> Network and set your WAN limits to what you deem acceptable.

        I removed the throttle altogether and now get 4.3Mbps as opposed to 300Kbps upload.

  23. Jason

    I can’t thank you enough for this. It was so easy for a linux novice like me. Installed and working perfectly on my new ds212j.

    Reply
    1. Jason

      I spoke a little too soon. I had to restart the NAS and the CrashPlan service didn’t restart with it. I had User Homes enabled and the identity was stored within it. I also used the 3.1 toolchain, not 3.2. After the first service restart, I could connect and manage the service with no issues. However, after the second restart, I could no longer connect.

      I found that, like others, the servicehost in my.service.xml was 127.0.0.1 instead of 0.0.0.0. I corrected that and now it works and restarts with no problems. I followed the instructions to the letter although I found that the crashplan/backupArchive folder didn’t have the permissions set correctly and the engine could not write to it. I am guessing there is still a thing or two missing from the scripts.
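      A hedged sketch of a fix for that permissions gap – the default path and daemon username are the ones mentioned in this thread, and the function takes both as arguments so it can be tried without root:

      ```shell
      # fix_archive_perms: hand the backup destination to the crashplan daemon
      # user so the engine can write archives there.
      #   $1: owner (default: crashplan)   $2: archive dir (default: the syno path)
      fix_archive_perms() {
          user="${1:-crashplan}"
          dir="${2:-/volume1/crashplan/backupArchives}"
          mkdir -p "$dir"            # create the folder if it doesn't exist yet
          chown -R "$user" "$dir"    # daemon user must own it to write archives
          chmod -R u+rwX "$dir"      # ensure read/write, and traversal on dirs
      }
      ```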

      Overall, however, I am very grateful to you for putting this together. I wouldn’t have been able to get this going otherwise. All seems to be working perfectly now after those two tweaks.

      Reply
  24. DS411j

    Hello,

    I have restarted the NAS and like others I seem unable to reconnect from my computer (service is running on the NAS)… I am not sure how to use SSH to modify my.service.xml

    Can somebody explain? Thanks!

    Reply
    1. DS411j

      Ok I managed to copy, modify, and copy the file back. But Crashplan is still unable to connect (it was connecting fine before the reboot). Any idea or suggestion ?

      Reply
      1. Jason

        Once you copy the file back, you need to restart the crashplan package for the change to take effect.

      2. DS411j

        Even after stopping and restarting crashplan (or even rebooting the NAS), it still doesn’t work…

      3. DS411j

        Without changing anything, just trying several more times it finally connected… So all working now. Thanks

  25. Rod

    Thanks. Worked on my DS712+. Adopted from my old computer successfully.
    I have the same issue as others. When I reboot the DS712+ I can’t connect to Crashplan. I have to change service host to 0.0.0.0, stop/restart crashplan and then I can connect. Not a big deal because I don’t connect to Crashplan often so I’m not going to try ssh method.

    Reply
  26. David Mc Nally

    I’m getting the following error in engine_error.log:

    Exception in thread “W5092156_ScanWrkr” java.lang.NoClassDefFoundError: Could not initialize class com.code42.jna.inotify.InotifyManager

    Backups don’t seem to run, however CrashPlan connects to the online service according to my account on crashplan.com. Any suggestions as to what I could do?

    Reply
      1. David Mc Nally

        Thanks for your quick reply. I’m running DSM 3.2-1958 on a DS212+/2.0 GHz ARM-processor.

  27. Nils

    Thank you very much for putting your energy in this! I have one user-invoked problem:

    I accidentally removed my NAS from the list of computers that showed up in the GUI (yes, I know that they ask you if you are sure ;-) (and no, there were no backup files on my NAS … yet). Somehow I am now unable to reconnect to the headless client on my NAS. I have no idea what I should do or change to make it work again. I tried removing and re-installing the package on my Synology, but the GUI keeps telling me that it is unable to connect. Before my mistake, everything appeared to work as intended …

    Reply
    1. Nils

      Found it already … needed to uninstall the GUI, then re-install and point the GUI to the NAS, after which the NAS will be re-registered as a CrashPlan device …

      Now the only thing left is the ‘The Backup location is not accessible’ message. How to get rid of that?

      Reply
      1. Nils

        Also fixed this one. Somehow the actual backup folder (51234… etc.) that will hold the backup for a specific client could not be created by the CrashPlan client. I created it myself and now it works. Is this a known bug and (if so) is there a fix? (The root backup folder is /volume1/backup/crashplan. There all users have every permission possible … just to make sure)

      2. patters Post author

        That path should be /volume1/crashplan/backup. When you install, it’s created if it doesn’t already exist, and the owner is always set to be the crashplan daemon user.
        However, I haven’t tested that at all since I use the paid-for hosted option. Perhaps someone else can report their findings.

      3. Nils

        I tried the default (/volume1/crashplan/backup) first but that did not work. Therefore I switched to a folder of which I know that everyone had all access … And also there it did not work …

      4. Jason

        This may be related to the issue I found where the folder gets created but permissions to the folder are not set so that the engine can write to it. And just to be clear, the folder created on my system was /volume1/crashplan/backupArchives.

      5. patters Post author

        Well spotted Jason and Nils. Looks like I need to release an updated version of the package. I could have sworn that was in the postinst script, but I just checked and it’s not. Busy tonight, so it may take me a few days. I think the trouble is I get so many people commenting who have problems simply downloading the JRE file that I’m not noticing proper bug reports :)

        I’ll probably change it so it resets the bound interface from 127.0.0.1 to 0.0.0.0 on each start too (rather than just once).

      6. Jason

        I’m sure you saw my comments above regarding my original install Friday night, but in case you didn’t and since you will be releasing a new package, there still seems to be an issue with the servicehost configuration in my.service.xml. I was able to connect to the engine following the first restart as indicated in the instructions but was not able to following the second reboot. It appears something is causing the servicehost to change to 127.0.0.1 at that point. I manually changed it to 0.0.0.0 and it now survives reboot.

      7. patters Post author

        I’ll just have it force that setting every time. I’ve never hit that issue though, and my syno has restarted at least three times since I installed CrashPlan.

  28. Niek

    I would like to stop and start the CrashPlan service with crontab. Stopping and starting as such don’t seem to be the problem, but with my commands the my.service.xml config seems to get lost.

    /volume1/@appstore/CrashPlan/bin/CrashPlanEngine stop
    /volume1/@appstore/CrashPlan/bin/CrashPlanEngine start

    are the commands I am using in crontab. The start command seems to overwrite my existing config.

    Could you help me?

    Reply
    1. patters Post author

      There is user-specific data, and I guess you’re running the cron job as root. Try using su (like my start-stop-status script does):
      su - crashplan -s /bin/sh -c "/volume1/@appstore/CrashPlan/bin/CrashPlanEngine start"

      Reply
      1. Niek

        Thanks, that did the trick.
        Because CrashPlan prevents my disks from hibernating (even when I set a backup time span, I see a process scanning every 5 minutes) I use crontab to control the CrashPlan service.

        If people want to know what I did:
        ssh to your Synology (ssh root@ip/hostname)
        vi /etc/crontab
        press i to activate insert mode

        Add the following lines:

        0       19      *       *       *       root    su - crashplan -s /bin/sh -c "/volume1/@appstore/CrashPlan/bin/CrashPlanEngine start"
        0       23      *       *       *       root    /volume1/@appstore/CrashPlan/bin/CrashPlanEngine stop
        

        Modify times, days if you want. Search Google for crontab if you want to know all options.

        press ESC when done editing.
        type :w to save the file. (:quit! if you want to cancel and quit)
        type :q to quit vi.

        Do the following command in your ssh session:
        /usr/syno/etc/rc.d/S04crond.sh stop
        /usr/syno/etc/rc.d/S04crond.sh start

        Crontab is restarted now and loaded your new config.

        Disks should sleep now (except the period Crashplan is running).

        Make sure the “Verify backup selection” time is in the timespan your Crashplan is running. Else it never gets verified.

        @Patters: The only (small) problem I see right now is Crashplan status is always stopped in Package Center when started by crontab.
        When I do ps I see: /volume1/@appstore/java7/jre/bin/java -Dfile.encoding=UTF-8 -Dapp=CrashPlanService -DappBaseName=CrashPlan -Xms20m -Xmx80M -Djava.net.preferIPv4Stack=true -Dsun.net.inetaddr.ttl=300 -Dnetworkaddress.cache.ttl=3

        Service is running fine, just the status is wrong.

      2. patters Post author

        Good tip, thanks for sharing. As you no doubt discovered, the su remains necessary in /etc/crontab even though you might expect to be able to use the ‘crashplan’ user account directly, because new users created by script have a null shell by default. su lets us override that without needing to edit /etc/passwd.

        @Patters: The only (small) problem I see right now is Crashplan status is always stopped in Package Center when started by crontab.

        When a package is started in Package Center I think it records the state somewhere (I forget where) so that when you restart your syno the packages retain their states. When they’re started manually this doesn’t happen. I guess it ought to be possible to figure out what changes and add that to the cron job too.

      3. Niek

        Thanks for explaining that. It’s just cosmetic.
        I thought the status code grepped some info from ps but it’s not that simple :)

        Thanks again!

      4. patters Post author

        Well logically it should work how you were thinking (especially since I define the running state in the script start-stop-status) but it seems it doesn’t quite work like that in practice.

      5. patters Post author

        Got it! Package Center checks two separate things before a package’s status is reported as ‘Running’ in the UI. The status is first checked via the script start-stop-status, which would be ok for your cron job. But you also need to create a file in /var/packages/Crashplan called enabled. Delete this when your cron job stops the package.
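        Putting both halves of that together for the cron job discussed above – a sketch, using the engine path and package directory name as given in this thread (adjust to match your system; nothing runs until the functions are called):

        ```shell
        # Cron-friendly start/stop pair that also keeps Package Center's
        # status display in sync via the 'enabled' file described above.
        CP_ENGINE="/volume1/@appstore/CrashPlan/bin/CrashPlanEngine"
        CP_PKGDIR="/var/packages/Crashplan"

        cp_start() {
            # run the engine as the crashplan user, as start-stop-status does
            su - crashplan -s /bin/sh -c "$CP_ENGINE start" &&
                touch "$CP_PKGDIR/enabled"   # Package Center now shows 'Running'
        }

        cp_stop() {
            "$CP_ENGINE" stop
            rm -f "$CP_PKGDIR/enabled"       # back to 'Stopped'
        }
        ```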

  29. Richard

    Very happy with the easy install…
    I have one problem…

    The realtime scanning does not work.
    It only sees added or changed files when the “verify selection” scan runs on the disks.
    On my Mac this does not happen; it reacts every time a file is added or changed.

    I tried the command backup.scan 42 -> this forces the check… then it finds changed files, but only once.
    I tried rebooting the NAS – does not solve the problem.
    I tried reinstalling the package – does not solve the problem.
    I tried reinstalling the NAS and all packages – does not solve the problem.

    Anybody have an idea?

    Richard

    Reply
    1. Richard

      DiskStation 1812+ i386
      When checking the service.log I see the following error:

      code[03.05.12 16:59:03.221 WARNING W5369678_Authorizer  .BackupSetsManager.initFileWatcherDriver] BSM:: Exception initializing WatcherDriver - e=java.io.IOException: Platform is not supported, glibc: 2.3.6, kernel: 2.6.32.12^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@; BackupSetsManager[ scannerRunning = false, scanInProgress = false, fileQueueRunning = false, fileCheckInProgress = false, errorRescan=false ]                
      com.code42.exception.DebugException: BSM:: Exception initializing WatcherDriver - e=java.io.IOException: Platform is not supported, glibc: 2.3.6, kernel: 2.6.32.12^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@; BackupSetsManager[ scannerRunning = false, scanInProgress = false, fileQueueRunning = false, fileCheckInProgress = false, errorRescan=false ]                                                                         
              at com.code42.backup.path.BackupSetsManager.initFileWatcherDriver(BackupSetsManager.java:391)                                                                                                                                                                                                                                                                                                                                                                                                      
              at com.code42.backup.path.BackupSetsManager.setUp(BackupSetsManager.java:126)                                                                                                                                                                                                                                                                                                                                                                                                                      
              at com.code42.backup.BackupManager.setUp(BackupManager.java:138)                                                                                                                                                                                                                                                                                                                                                                                                                                   
              at com.backup42.service.backup.BackupController.setUp(BackupController.java:384)                                                                                                                                                                                                                                                                                                                                                                                                                   
              at com.backup42.service.CPService.changeLicense(CPService.java:1958)                                                                                                                                                                                                                                                                                                                                                                                                                               
              at com.backup42.service.CPService.authorize(CPService.java:1895)                                                                                                                                                                                                                                                                                                                                                                                                                                   
              at com.backup42.service.peer.Authorizer.doWork(Authorizer.java:728)                                                                                                                                                                                                                                                                                                                                                                                                                                
              at com.code42.utils.AWorker.run(AWorker.java:149)                                                                                                                                                                                                                                                                                                                                                                                                                                                  
              at java.lang.Thread.run(Thread.java:662)                                                                                                                                                                                                                                                                                                                                                                                                                                                           
      Caused by: java.io.IOException: Platform is not supported, glibc: 2.3.6, kernel: 2.6.32.12
              at com.code42.jna.inotify.JNAInotifyFileWatcherDriver.<init>(JNAInotifyFileWatcherDriver.java:40)
              at com.code42.backup.path.BackupSetsManager.initFileWatcherDriver(BackupSetsManager.java:376)                                                                                                                                                                                                                                                                                                                                                                                                      
              ... 8 more
      Reply
  30. Chuck

    I seemed to have no problem with the setup after restarting until today. I had a drive fail a week ago, added a new one, and let it rebuild. I have also shut down the station since then to move it around my desk. When I went to add another folder using the client on my MacBook, it could no longer connect to the engine. I noted, like many others, that my.service.xml no longer contained 0.0.0.0, so I followed the directions above to SSH in and copy that file to the public folder so I could grab it onto my computer and fix it. I could not, however, figure out how to put the file back. I tried just inverting the cp directions, but that didn’t work – I could not get it back from public to the crashplan folder. I eventually uninstalled and reinstalled the package. Now the client sees it as a new computer, so I guess I will have to adopt again (unless you have another idea). Does the install package fix this issue now so the XML does not revert (or is fixed when restarting)? I just recall the original adoption took days, since I have over a terabyte on CrashPlan now.
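    [For anyone else stuck at this step: the round trip through the public share can be avoided by editing the file in place over SSH. A minimal sketch – the config path and the serviceHost element name are assumptions based on the comments here, so check your own my.service.xml before running anything:]

```shell
#!/bin/sh
# Assumed paths and element name - verify against your own installation.
CONF=/volume1/@appstore/CrashPlan/conf/my.service.xml
PKG=/var/packages/CrashPlan/scripts/start-stop-status
if [ -f "$CONF" ]; then
  # Switch the engine back to listening on all interfaces
  sed -i 's|<serviceHost>127.0.0.1</serviceHost>|<serviceHost>0.0.0.0</serviceHost>|' "$CONF"
  # Restart the package so the engine re-reads its config
  "$PKG" stop && "$PKG" start
fi
```

    [The same sed line can be re-run whenever the engine resets the value, which is less error-prone than round-tripping the file through a desktop editor.]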

    Thanks for the package and help!

    Chuck

    Reply
  31. rgliese

    If you have problems connecting the client to your syno, you have to open port 4243 in your syno firewall.

    Reply
  32. Lars

    Hi patters,

    Will the script work with the new DSM 4.0, or should we stick with version 3? Also, are you planning to convert the script to the new App feature in 4.0?

    Great work, already donated!

    Thanks, Lars

    Reply
    1. patters Post author

      Hi Lars – thanks very much for the donation. These packages all work on DSM 4.0. According to commenters on here there were Package Center issues with the very first DSM 4.0 build, but build 2198 is fine. I would recommend that you remove and reinstall Java after the upgrade, otherwise you’ll lose locale support and therefore unicode support in Java.

      Reply
  33. Ben

    This post has basically convinced me a Synology would work for my needs, since I can back up the NAS to CrashPlan (my current system is an old PC with FreeNAS).

    A couple of performance questions:
    1) If the syno is set to deep hibernate during certain times, does the engine restart on wake and does the engine affect any scheduled hibernation? I don’t have a syno yet so I don’t exactly know how it deals with these things.
    2) If using WoL, should the engine be re-started manually?
    3) While the syno is uploading to Crashplan, are there any issues serving file shares (e.g., reading a mkv from a networked client)?

    Thanks.

    Reply
    1. patters Post author

      Hi, glad this is convincing people to buy Synology :)

      1) The Crashplan engine doesn’t seem to play nicely with the hibernation feature of the Synology – it just stays on all the time. Commenter Niek outlined a solution above to set a cron job to only run the engine in certain time windows to mitigate this.

      2) On bootup the Synology keeps packages in the same running states which were set in the Package Center UI.

      3) I haven’t tested this thoroughly, but Crashplan is invoked using nice so it should run with a very low priority. So in your scenario, the MKV should take precedence.
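      [The cron workaround mentioned in 1) can be sketched roughly as follows. The crontab field order (min hour mday month wday user command), the tab separators, and the package script path are assumptions about this era of DSM – double-check them on your own box, and note that older DSM builds need crond restarted (reportedly via /usr/syno/etc/rc.d/S04crond.sh) before changes take effect:]

```shell
#!/bin/sh
# Append start/stop windows to DSM's crontab so the engine only runs
# 01:00-07:00 and hibernation can kick in the rest of the day.
# Field order and tab separators are assumptions - check your DSM version.
CRON=/etc/crontab
PKG=/var/packages/CrashPlan/scripts/start-stop-status
if [ -d /var/packages/CrashPlan ] && [ -w "$CRON" ]; then
  printf '0\t1\t*\t*\t*\troot\t%s start\n' "$PKG" >> "$CRON"
  printf '0\t7\t*\t*\t*\troot\t%s stop\n' "$PKG" >> "$CRON"
fi
```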

      Reply
  34. DS411j

    All right… Everything was working fine until I updated to DSM 4.0. Now CrashPlan is running, I can connect from my desktop to my NAS, and I can select the files I want to back up… But CrashPlan is stuck saying:

    waiting for backup
    To do: 0 files
    Completed: 0 files
    Last backup: initial backup not complete.

    Any idea?

    Reply
    1. DS411j

      I tried to reinstall Java in case it was the cause, but the Java installation fails. Am I the only one having this issue with DSM 4.0 final?

      Reply
      1. Richard

        Same problem here!! Just updated to DSM 4.0. CrashPlan shown as started but nothing being backed up. I decided to re-install the package, but after un-installing, the CrashPlan package is not shown in the list of available packages, and the link to the source http://pcloadletter.comlu.com is coming up as invalid – HELP!!

    2. Marc

      I had the same problem. It appears that DSM 4.0 had a bug. It also stopped my email notifications from working. They issued an update, 4.0-2198, today that immediately fixed the problem. I could not update directly from DSM. Instead I downloaded it from the Synology website and the update worked.

      Reply
  35. nullreturned

    Looks like the Package Repository is down. Just as I was about to install some of your packages! Hope everything is okay and the repository can get back online.

    Reply
      1. nullreturned

        Marc,
        This is a new NAS setup, so I was already updated to the newest software. I was connected to the repository during the day, but then at night when I was going to install the software, it couldn’t connect. I think there was actually an issue with the hosting, but it was fixed by morning.

    1. patters Post author

      Hi – the repo is hosted on a free hosting service (http://www.000webhost.com) which does seem to have the occasional outage. Whenever I’ve noticed it’s only been the hosting control panel that has been affected. I’ll keep an eye out in case it’s frequent.

      Reply
  36. nullreturned

    Great package! I set up everything as directed, and it went smoothly on a 712+. I will admit I was a bit nervous when Java reported failure and wouldn’t start as a service, but I pressed on. Once everything was said and done, I tried to connect via your IP method to no avail. I then followed an online guide to setting up tunneling via PuTTY, and it worked instantly. My systems are now backing up via CrashPlan, and I’m really happy.

    Might be good to have the four steps outlined for the SSH tunneling on the site for people who can’t get to it reliably through the IP method. And with PuTTY, once you set up the connection, it only takes two seconds to get going!
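    [For anyone without PuTTY, the same tunnel can be made with plain OpenSSH. A sketch, assuming the engine listens on the default service port 4243 and that the desktop client is then pointed at the forwarded local port – "root@diskstation" is a placeholder for your own login:]

```shell
# Forward local port 4200 to the CrashPlan engine's service port on the
# NAS (4243 assumed as the default; replace root@diskstation with your
# own login). -N opens the tunnel without running a remote command.
ssh -N -L 4200:localhost:4243 root@diskstation
# Then set servicePort=4200 in the desktop client's conf/ui.properties
# and connect to localhost as usual.
```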

    Thanks again!

    Reply
  37. DS411j

    I finally managed to reinstall CrashPlan on 4.0-2197 (no idea why it failed the first 3 times and worked the 4th…), and made it work after adopting my previous computer.

    BUT I am also using my NAS for the inbound backup of a friend. And it has disappeared! For some reason it looks like CrashPlan is NOT backing up my friend’s computer to the default location (/volume1/crashplan/backupArchives) but to /volume1/@appstore/crashplan/backupArchives

    As a result I suspect it has deleted my friend’s backup when I reinstalled Crashplan. I therefore have 2 questions:

    1) Is it normal that we have to reinstall CrashPlan after a firmware update?
    2) Why is CrashPlan not backing up to the default folder, and (now that my friend has started to back up again) how can I move the backup to the default location?
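    [On 2), moving the archive is normally a case of stopping the engine, moving the directory, and repointing the archive path in the config. A sketch – both paths are taken from the comment above, and the config element that holds the archive location is an assumption, so check your own my.service.xml:]

```shell
#!/bin/sh
# Paths from the comment above; verify before running as root on the NAS.
SRC=/volume1/@appstore/crashplan/backupArchives
DST=/volume1/crashplan/backupArchives
PKG=/var/packages/CrashPlan/scripts/start-stop-status

if [ -d "$SRC" ]; then
  [ -x "$PKG" ] && "$PKG" stop   # stop the engine before moving data
  mv "$SRC" "$DST"               # relocate the inbound archive
  # Now repoint the archive location in conf/my.service.xml at $DST
  # (the exact element name is an assumption - check your own file),
  # then start the engine again:
  [ -x "$PKG" ] && "$PKG" start
fi
```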

    Reply
    1. patters Post author

      You guys that are getting the changes in conf/my.service.xml reset with each reboot (the listening IP changed back to 127.0.0.1, and the backupArchives location) – have you definitely installed the package after enabling the User Homes Service as per the instructions on this page?

      I have never had this happen, and I have installed/uninstalled/upgraded/rebooted the package around 50 times during testing, not to mention actually running it for its intended purpose.

      Reply
  38. patters Post author

    Apologies for the delay responding but I was on holiday and offline for the whole of last week. I have just noticed that on ARM systems, the symlink for libffi is removed once you upgrade DSM to a new version. To reinstate it run:
    ln -s /volume1/@appstore/CrashPlan/lib/libffi.so.5 /lib/libffi.so.5

    I also spotted in service.log.0 a permission denied error on a config file I didn’t know about. Try also running:
    chown crashplan /var/lib/crashplan/.identity

    I’m not sure whether it’s important since its contents appear to be identical to /var/services/homes/crashplan/.crashplan/.identity which seems to be the actual one that gets loaded.

    Reply
  39. Tony

    I’m having the 127.0.0.1 problem on my DS1511. I tried everything mentioned in the above comments but it keeps resetting whenever the Crashplan module starts. I even tried write-protecting the my.service.xml file, changing its ownership to root, etc. but obviously the process that’s overwriting it also has elevated privileges, because I see the file disappear and reappear with the 127.0.0.1.

    Using a SSH tunnel is inconvenient, so I hope a real fix for this is coming.

    Reply
