CrashPlan packages for Synology NAS

UPDATE – CrashPlan For Home (green branding) was retired by Code 42 Software on 22/08/2017. See migration notes below to find out how to transfer to CrashPlan for Small Business on Synology at the special discounted rate.

CrashPlan is a popular online backup solution which supports continuous backup. With it, your NAS can become even more resilient, particularly against the threat of ransomware.

There are now only two product versions:

  • Small Business: CrashPlan PRO (blue branding). Unlimited cloud backup subscription at $10 per device per month. Reporting via the Admin Console. No peer-to-peer backups.
  • Enterprise: CrashPlan PROe (black branding). Cloud backup subscription typically billed by storage usage, also available from third parties.

The instructions and notes on this page apply to both versions of the Synology package.


CrashPlan is a Java application which can be difficult to install on a NAS. Way back in January 2012 I decided to simplify it into a Synology package, since I had already created several others. It has been through many versions since that time, as the changelog below shows. Although it used to work on Synology products with ARM and PowerPC CPUs, it unfortunately became Intel-only in October 2016 due to Code 42 Software adding a reliance on some proprietary libraries.

Licence compliance is another challenge – Code 42’s EULA prohibits redistribution. I had to make the Synology package use the regular CrashPlan for Linux download (after the end user agrees to the Code 42 EULA). I then had to write my own script to extract this archive and mimic the Code 42 installer behaviour, but without the interactive prompts of the original.


Synology Package Installation

  • In Synology DSM’s Package Center, click Settings and add my package repository:
    Add Package Repository
  • The repository will push its certificate automatically to the NAS, which is used to validate package integrity. Set the Trust Level to Synology Inc. and trusted publishers:
    Trust Level
  • Now browse the Community section in Package Center to install CrashPlan:
    The repository only displays packages which are compatible with your specific model of NAS. If you don’t see CrashPlan in the list, then either your NAS model or your DSM version is not supported at this time. DSM 5.0 is the minimum supported version for this package, and an Intel CPU is required.
  • Since CrashPlan is a Java application, it needs a Java Runtime Environment (JRE) to function. It is recommended that you let the package install a dedicated Java 8 runtime. For licensing reasons I cannot include Java with this package, so you will need to agree to the licence terms and download it yourself from Oracle’s website. The package expects to find this .tar.gz file in a shared folder called ‘public’. If you go ahead and try to install the package without it, the error message will indicate precisely which Java file you need for your system type, and it will provide a TinyURL link to the appropriate Oracle download page.
  • To install CrashPlan PRO you will first need to log into the Admin Console and download the Linux App from the App Download section and also place this in the ‘public’ shared folder on your NAS.
  • If you have a multi-bay NAS, use the Shared Folder control panel to create the shared folder called public (it must be all lower case). On single bay models this is created by default. Assign it with Read/Write privileges for everyone.
  • If you have trouble getting the Java or CrashPlan PRO app files recognised by this package, try downloading them with Firefox. It seems to be the only web browser that doesn’t try to uncompress the files, or rename them without warning. I also suggest that you leave the Java file and the public folder present once you have installed the package, so that you won’t need to fetch this again to install future updates to the CrashPlan package.
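    A quick way to confirm that a browser has not mangled a download is to test it with tar before attempting the package install. This is only a sketch; the JRE filename shown in the usage comment is an example, not the exact file for your system:

```shell
# Helper sketch: report whether a file is still a valid gzipped tarball,
# i.e. the browser has not silently decompressed or renamed it.
check_tgz() {
  if tar -tzf "$1" > /dev/null 2>&1; then
    echo "archive OK"
  else
    echo "NOT a valid .tar.gz - re-download with Firefox"
  fi
}

# Usage (filename is an example):
# check_tgz /volume1/public/jre-8u181-linux-x64.tar.gz
```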
  • CrashPlan is installed in headless mode – backup engine only. It is configured by a desktop client, but operates independently of it.
  • The first time you start the CrashPlan package you will need to stop it and restart it before you can connect the client. This is because a config file that is only created on first run needs to be edited by one of my scripts. The engine is then configured to listen on all interfaces on the default port 4243.

CrashPlan Client Installation

  • Once the CrashPlan engine is running on the NAS, you can manage it by installing CrashPlan on another computer, and by configuring it to connect to the NAS instance of the CrashPlan Engine.
  • Make sure that you install the version of the CrashPlan client that matches the version running on the NAS. If the NAS version gets upgraded later, you will need to update your client computer too.
  • The Linux CrashPlan PRO client must be downloaded from the Admin Console and placed in the ‘public’ folder on your NAS in order to successfully install the Synology package.
  • By default the client is configured to connect to the CrashPlan engine running on the local computer. Run this command on your NAS from an SSH session:
    echo `cat /var/lib/crashplan/.ui_info`
    Note those are backticks, not quotes. This will give you a port number (4243), followed by an authentication token, followed by the IP binding (0.0.0.0 means the server is listening for connections on all interfaces), e.g.:

    Copy this token value and use this value to replace the token in the equivalent config file on the computer that you would like to run the CrashPlan client on – located here:
    C:\ProgramData\CrashPlan\.ui_info (Windows)
    “/Library/Application Support/CrashPlan/.ui_info” (Mac OS X installed for all users)
    “~/Library/Application Support/CrashPlan/.ui_info” (Mac OS X installed for single user)
    /var/lib/crashplan/.ui_info (Linux)
    You will not be able to connect the client unless the token on the client matches the token on the NAS. On the client you also need to amend the IP address value after the token to match the Synology NAS IP address. So, using the example above, your computer’s CrashPlan client config file would contain the same port and token, followed by the IP address of your Synology NAS.
    If it still won’t connect, check that the ServicePort value is set to 4243 in the following files:
    C:\ProgramData\CrashPlan\conf\ui_(username).properties (Windows)
    “/Library/Application Support/CrashPlan/” (Mac OS X installed for all users)
    “~/Library/Application Support/CrashPlan/” (Mac OS X installed for single user)
    /usr/local/crashplan/conf (Linux)
    /var/lib/crashplan/.ui_info (Synology) – this value does change spontaneously if there’s a port conflict e.g. you started two versions of the package concurrently (CrashPlan and CrashPlan PRO)
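    The token-copying steps above can be sketched as shell. The token and the NAS address below are made-up placeholders, not real values:

```shell
# Illustrative only: the token and IP are placeholders.
# On the NAS, `cat /var/lib/crashplan/.ui_info` prints: port,token,binding
NAS_UI_INFO="4243,7a1b2c3d-0000-0000-0000-4e5f6a7b8c9d,0.0.0.0"

# On the client, keep the port and token but swap the binding for the NAS address
PORT="$(echo "$NAS_UI_INFO" | cut -f1 -d',')"
TOKEN="$(echo "$NAS_UI_INFO" | cut -f2 -d',')"
NAS_IP="192.168.1.10"                       # substitute your NAS IP
CLIENT_UI_INFO="${PORT},${TOKEN},${NAS_IP}" # this line goes into the client's .ui_info
echo "$CLIENT_UI_INFO"
```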
  • As a result of the nightmarish complexity of recent product changes, Code42 has now published a support article with more detail on running headless systems, including config file locations on all supported operating systems and for ‘all users’ versus single-user installs.
  • You should disable the CrashPlan service on your computer if you intend only to use the client. In Windows, open the Services section in Computer Management and stop the CrashPlan Backup Service. In the service Properties set the Startup Type to Manual. You can also disable the CrashPlan System Tray notification application by removing it from Task Manager > More Details > Start-up Tab (Windows 8/Windows 10) or the All Users Startup Start Menu folder (Windows 7).
    To accomplish the same on Mac OS X, run the following commands one by one:

    sudo launchctl unload /Library/LaunchDaemons/com.crashplan.engine.plist
    sudo mv /Library/LaunchDaemons/com.crashplan.engine.plist /Library/LaunchDaemons/com.crashplan.engine.plist.bak

    The CrashPlan menu bar application can be disabled in System Preferences > Users & Groups > Current User > Login Items


Migration from CrashPlan For Home to CrashPlan For Small Business (CrashPlan PRO)

  • Leave the regular green branded CrashPlan 4.8.3 Synology package installed.
  • Go through the online migration using the link in the email notification you received from Code 42 on 22/08/2017. This seems to trigger the CrashPlan client to begin an update to 4.9, which will fail. It will also migrate your account onto a CrashPlan PRO server. The web page is likely to stall on the Migrating step, but that doesn’t matter. The process is meant to take you to the store, but it seems to be quite flaky. If you see the store page with a $0.00 amount in the basket, it has correctly referred you for the introductory offer. The $9.99 price shown on that screen for subsequent billing is apparently a mistake; the correct price of $2.50 appears on a later screen in the process. Enter your credit card details and check out if you can. If not, continue.
  • Log into the CrashPlan PRO Admin Console as per these instructions, and download the CrashPlan PRO 4.9 client for Linux, and the 4.9 client for your remote console computer. Ignore the red message in the bottom left of the Admin Console about registering, and do not sign up for the free trial. Preferably use Firefox for the Linux version download – most of the other web browsers will try to unpack the .tgz archive, which you do not want to happen.
  • Configure the CrashPlan PRO 4.9 client on your computer to connect to your Syno as per the usual instructions on this blog post.
  • Put the downloaded Linux CrashPlan PRO 4.9 client .tgz file in the ‘public’ shared folder on your NAS. The package will no longer download this automatically as it did in previous versions.
  • From the Community section of DSM Package Center, install the CrashPlan PRO 4.9 package concurrently with your existing CrashPlan 4.8.3 Syno package.
  • This will stop the CrashPlan package and automatically import its configuration. Notice that it will also back up your old CrashPlan .identity file and leave it in the ‘public’ shared folder, just in case something goes wrong.
  • Start the CrashPlan PRO Synology package, and connect your CrashPlan PRO console from your computer.
  • You should see your protected folders as usual. At first mine reported something like “insufficient device licences”, but the next time I started up it changed to “subscription expired”.
  • Uninstall the CrashPlan 4.8.3 Synology package; it is no longer required.
  • At this point if the store referral didn’t work in the second step, you need to sign into the Admin Console. While signed in, navigate to this link which I was given by Code 42 support. If it works, you should see a store page with some blue font text and a $0.00 basket value. If it didn’t work you will get bounced to the Consumer Next Steps webpage: “Important Changes to CrashPlan for Home” – the one with the video of the CEO explaining the situation. I had to do this a few times before it worked. Once the store referral link worked and I had confirmed my payment details my CrashPlan PRO client immediately started working. Enjoy!


  • The package uses the intact CrashPlan installer directly from Code 42 Software, following acceptance of its EULA. I am complying with the directive that no one redistributes it.
  • The engine daemon script checks the amount of system RAM and scales the Java heap size appropriately (up to the default maximum of 512MB). This can be overridden in a persistent way if you are backing up large backup sets by editing /var/packages/CrashPlan/target/syno_package.vars. If you are considering buying a NAS purely to use CrashPlan and intend to back up more than a few hundred GB then I strongly advise buying one of the models with upgradeable RAM. Memory is very limited on the cheaper models. I have found that a 512MB heap was insufficient to back up more than 2TB of files on a Windows server and that was the situation many years ago. It kept restarting the backup engine every few minutes until I increased the heap to 1024MB. Many users of the package have found that they have to increase the heap size or CrashPlan will halt its activity. This can be mitigated by dividing your backup into several smaller backup sets which are scheduled to be protected at different times. Note that from package version 0041, using the dedicated JRE on a 64bit Intel NAS will allow a heap size greater than 4GB since the JRE is 64bit (requires DSM 6.0 in most cases).
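    For example, the override in syno_package.vars can be set like this. The snippet demonstrates the edit on a temporary copy; on the NAS you would edit /var/packages/CrashPlan/target/syno_package.vars itself and then restart the package:

```shell
# Demonstrated on a temporary copy; on the NAS, edit the real file at
# /var/packages/CrashPlan/target/syno_package.vars and restart the package.
VARS_FILE="$(mktemp)"
printf '%s\n' '#USR_MAX_HEAP=1024M' > "$VARS_FILE"   # the shipped default is commented out

# Uncomment the override and raise the heap cap to 2048MB
# (heaps above 4GB need the dedicated 64bit JRE, package version 0041 onwards)
sed -i -r 's/^#?USR_MAX_HEAP=.*/USR_MAX_HEAP=2048M/' "$VARS_FILE"
cat "$VARS_FILE"
```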
  • If you need to manage CrashPlan from a remote location, I suggest you do so using SSH tunnelling as per this support document.
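    A minimal sketch of such a tunnel, assuming SSH access to the NAS (the username and hostname below are placeholders, and 4243 is the default service port used by this package):

```shell
# Placeholders: substitute your own DSM username and NAS address.
NAS_USER="admin"
NAS_HOST="nas.example.com"
# Forward local port 4243 to the CrashPlan engine on the NAS; while the
# tunnel is up, point the desktop client at localhost instead of the NAS IP.
TUNNEL_CMD="ssh -N -L 4243:localhost:4243 ${NAS_USER}@${NAS_HOST}"
echo "$TUNNEL_CMD"   # run the printed command to open the tunnel
```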
  • The package supports upgrading to future versions while preserving the machine identity, logs, login details, and cache. Upgrades can now take place without requiring a login from the client afterwards.
  • If you remove the package completely and re-install it later, you can re-attach to previous backups. When you log in to the Desktop Client with your existing account after a re-install, you can select “adopt computer” to merge the records, and preserve your existing backups. I haven’t tested whether this also re-attaches links to friends’ CrashPlan computers and backup sets, though the latter does seem possible in the Friends section of the GUI. It’s probably a good idea to test that this survives a package reinstall before you start relying on it. Sometimes, particularly with CrashPlan PRO I think, the adopt option is not offered. In this case you can log into CrashPlan Central and retrieve your computer’s GUID. On the CrashPlan client, double-click on the logo in the top right and you’ll enter a command line mode. You can use the GUID command to change the system’s GUID to the one you just retrieved from your account.
  • The log which is displayed in the package’s Log tab is actually the activity history. If you are trying to troubleshoot an issue you will need to use an SSH session to inspect these log files:
  • When CrashPlan downloads and attempts to run an automatic update, the script will most likely fail and stop the package. This is typically caused by syntax differences with the Synology versions of certain Linux shell commands (like rm, mv, or ps). The startup script will attempt to apply the published upgrade the next time the package is started.
  • Although CrashPlan’s activity can be scheduled within the application, in order to save RAM some users may wish to restrict running the CrashPlan engine to specific times of day using the Task Scheduler in DSM Control Panel:
    Schedule service start
    Note that regardless of real-time backup, by default CrashPlan will scan the whole backup selection for changes at 3:00am. Include this time within your Task Scheduler time window or else CrashPlan will not capture file changes which occurred while it was inactive:
    Schedule Service Start

  • If you decide to sign up for one of CrashPlan’s paid backup services as a result of my work on this, please consider donating using the PayPal button on the right of this page.

Package scripts

For information, here are the package scripts so you can see what it’s going to do. You can get more information about how packages work by reading the Synology 3rd Party Developer Guide.


#--------CRASHPLAN installer script
#--------package maintained at

[ "${SYNOPKG_PKGNAME}" == "CrashPlan" ] && DOWNLOAD_FILE="CrashPlan_4.8.3_Linux.tgz"
[ "${SYNOPKG_PKGNAME}" == "CrashPlanPRO" ] && DOWNLOAD_FILE="CrashPlanPRO_4.*_Linux.tgz"
if [ "${SYNOPKG_PKGNAME}" == "CrashPlanPROe" ]; then
  [ "${WIZARD_VER_483}" == "true" ] && { CPPROE_VER="4.8.3"; CP_EXTRACTED_FOLDER="crashplan-install"; OLD_JNA_NEEDED="false"; }
  [ "${WIZARD_VER_480}" == "true" ] && { CPPROE_VER="4.8.0"; CP_EXTRACTED_FOLDER="crashplan-install"; OLD_JNA_NEEDED="false"; }
  [ "${WIZARD_VER_470}" == "true" ] && { CPPROE_VER="4.7.0"; CP_EXTRACTED_FOLDER="crashplan-install"; OLD_JNA_NEEDED="false"; }
  [ "${WIZARD_VER_460}" == "true" ] && { CPPROE_VER="4.6.0"; CP_EXTRACTED_FOLDER="crashplan-install"; OLD_JNA_NEEDED="false"; }
  [ "${WIZARD_VER_452}" == "true" ] && { CPPROE_VER="4.5.2"; CP_EXTRACTED_FOLDER="crashplan-install"; OLD_JNA_NEEDED="false"; }
  [ "${WIZARD_VER_450}" == "true" ] && { CPPROE_VER="4.5.0"; CP_EXTRACTED_FOLDER="crashplan-install"; OLD_JNA_NEEDED="false"; }
  [ "${WIZARD_VER_441}" == "true" ] && { CPPROE_VER="4.4.1"; CP_EXTRACTED_FOLDER="crashplan-install"; OLD_JNA_NEEDED="false"; }
  [ "${WIZARD_VER_430}" == "true" ] && CPPROE_VER="4.3.0"
  [ "${WIZARD_VER_420}" == "true" ] && CPPROE_VER="4.2.0"
  [ "${WIZARD_VER_370}" == "true" ] && CPPROE_VER="3.7.0"
  [ "${WIZARD_VER_364}" == "true" ] && CPPROE_VER="3.6.4"
  [ "${WIZARD_VER_363}" == "true" ] && CPPROE_VER="3.6.3"
  [ "${WIZARD_VER_3614}" == "true" ] && CPPROE_VER=""
  [ "${WIZARD_VER_353}" == "true" ] && CPPROE_VER="3.5.3"
  [ "${WIZARD_VER_341}" == "true" ] && CPPROE_VER="3.4.1"
  [ "${WIZARD_VER_33}" == "true" ] && CPPROE_VER="3.3"
fi
SYNO_CPU_ARCH="`uname -m`"
[ "${SYNO_CPU_ARCH}" == "x86_64" ] && SYNO_CPU_ARCH="i686"
[ "${SYNO_CPU_ARCH}" == "armv5tel" ] && SYNO_CPU_ARCH="armel"
[ "${SYNOPKG_DSM_ARCH}" == "armada375" ] && SYNO_CPU_ARCH="armv7l"
[ "${SYNOPKG_DSM_ARCH}" == "armada38x" ] && SYNO_CPU_ARCH="armhf"
[ "${SYNOPKG_DSM_ARCH}" == "comcerto2k" ] && SYNO_CPU_ARCH="armhf"
[ "${SYNOPKG_DSM_ARCH}" == "alpine" ] && SYNO_CPU_ARCH="armhf"
[ "${SYNOPKG_DSM_ARCH}" == "alpine4k" ] && SYNO_CPU_ARCH="armhf"
[ "${SYNOPKG_DSM_ARCH}" == "monaco" ] && SYNO_CPU_ARCH="armhf"
[ "${SYNOPKG_DSM_ARCH}" == "rtd1296" ] && SYNO_CPU_ARCH="armhf"
NATIVE_BINS_FILE="`echo ${NATIVE_BINS_URL} | sed -r "s%^.*/(.*)%\1%"`"
OLD_JNA_FILE="`echo ${OLD_JNA_URL} | sed -r "s%^.*/(.*)%\1%"`"
TEMP_FOLDER="`find / -maxdepth 2 -path '/volume?/@tmp' | head -n 1`"
#the Manifest folder is where friends' backup data is stored
#we set it outside the app folder so it persists after a package uninstall
MANIFEST_FOLDER="/`echo $TEMP_FOLDER | cut -f2 -d'/'`/crashplan"
UPGRADE_FILES="syno_package.vars conf/my.service.xml conf/service.login conf/service.model"
PUBLIC_FOLDER="`synoshare --get public | sed -r "/Path/!d;s/^.*\[(.*)\].*$/\1/"`"
#dedicated JRE section
if [ "${WIZARD_JRE_CP}" == "true" ]; then
  #detect systems capable of running 64bit JRE which can address more than 4GB of RAM
  [ "${SYNOPKG_DSM_ARCH}" == "x64" ] && SYNO_CPU_ARCH="x64"
  [ "`uname -m`" == "x86_64" ] && [ ${SYNOPKG_DSM_VERSION_MAJOR} -ge 6 ] && SYNO_CPU_ARCH="x64"
  if [ "${SYNO_CPU_ARCH}" == "armel" ]; then
    JAVA_BUILD="ARMv5/ARMv6/ARMv7 Linux - SoftFP ABI, Little Endian 2"
  elif [ "${SYNO_CPU_ARCH}" == "armv7l" ]; then
    JAVA_BUILD="ARMv5/ARMv6/ARMv7 Linux - SoftFP ABI, Little Endian 2"
  elif [ "${SYNO_CPU_ARCH}" == "armhf" ]; then
    JAVA_BUILD="ARMv6/ARMv7 Linux - VFP, HardFP ABI, Little Endian 1"
  elif [ "${SYNO_CPU_ARCH}" == "ppc" ]; then
    #Oracle have discontinued Java 8 for PowerPC after update 6
    JAVA_BUILD="Power Architecture Linux - Headless - e500v2 with double-precision SPE Floating Point Unit"
  elif [ "${SYNO_CPU_ARCH}" == "i686" ]; then
    JAVA_BUILD="x86 Linux Small Footprint - Headless"
  elif [ "${SYNO_CPU_ARCH}" == "x64" ]; then
    JAVA_BUILD="Linux x64"
JAVA_BINARY=`echo ${JAVA_BINARY} | cut -f1 -d'.'`
source /etc/profile

pre_checks ()
  #These checks are called from preinst and from preupgrade functions to prevent failures resulting in a partially upgraded package
  if [ "${WIZARD_JRE_CP}" == "true" ]; then
    synoshare -get public > /dev/null || (
      echo "A shared folder called 'public' could not be found - note this name is case-sensitive. " >> $SYNOPKG_TEMP_LOGFILE
      echo "Please create this using the Shared Folder DSM Control Panel and try again." >> $SYNOPKG_TEMP_LOGFILE
      exit 1

    [ -f ${PUBLIC_FOLDER}/${JAVA_BINARY}.tar.gz ] && JAVA_BINARY_FOUND=true
    [ -f ${PUBLIC_FOLDER}/${JAVA_BINARY}.tar.tar ] && JAVA_BINARY_FOUND=true
    if [ -z ${JAVA_BINARY_FOUND} ]; then
      echo "Java binary bundle not found. " >> $SYNOPKG_TEMP_LOGFILE
      echo "I was expecting the file ${PUBLIC_FOLDER}/${JAVA_BINARY}.tar.gz. " >> $SYNOPKG_TEMP_LOGFILE
      echo "Please agree to the Oracle licence at ${DOWNLOAD_URL}, then download the '${JAVA_BUILD}' package" >> $SYNOPKG_TEMP_LOGFILE
      echo "and place it in the 'public' shared folder on your NAS. This download cannot be automated even if " >> $SYNOPKG_TEMP_LOGFILE
      echo "displaying a package EULA could potentially cover the legal aspect, because files hosted on Oracle's " >> $SYNOPKG_TEMP_LOGFILE
      echo "server are protected by a session cookie requiring a JavaScript enabled browser." >> $SYNOPKG_TEMP_LOGFILE
      exit 1
    if [ -z ${JAVA_HOME} ]; then
      echo "Java is not installed or not properly configured. JAVA_HOME is not defined. " >> $SYNOPKG_TEMP_LOGFILE
      echo "Download and install the Java Synology package from" >> $SYNOPKG_TEMP_LOGFILE
      exit 1

    if [ ! -f ${JAVA_HOME}/bin/java ]; then
      echo "Java is not installed or not properly configured. The Java binary could not be located. " >> $SYNOPKG_TEMP_LOGFILE
      echo "Download and install the Java Synology package from" >> $SYNOPKG_TEMP_LOGFILE
      exit 1

    if [ "${WIZARD_JRE_SYS}" == "true" ]; then
      JAVA_VER=`java -version 2>&1 | sed -r "/^.* version/!d;s/^.* version \"[0-9]\.([0-9]).*$/\1/"`
      if [ ${JAVA_VER} -lt 8 ]; then
        echo "This version of CrashPlan requires Java 8 or newer. Please update your Java package. "
        exit 1

preinst ()
    WGET_FILENAME="`echo ${WGET_URL} | sed -r "s%^.*/(.*)%\1%"`"
    wget ${WGET_URL}
    if [[ $? != 0 ]]; then
      if [ -d ${PUBLIC_FOLDER} ] && [ -f ${PUBLIC_FOLDER}/${WGET_FILENAME} ]; then
        echo "There was a problem downloading ${WGET_FILENAME} from the official download link, " >> $SYNOPKG_TEMP_LOGFILE
        echo "which was \"${WGET_URL}\" " >> $SYNOPKG_TEMP_LOGFILE
        echo "Alternatively, you may download this file manually and place it in the 'public' shared folder. " >> $SYNOPKG_TEMP_LOGFILE
        exit 1
  exit 0

postinst ()
  if [ "${WIZARD_JRE_CP}" == "true" ]; then
    #extract Java (Web browsers love to interfere with .tar.gz files)
    if [ -f ${JAVA_BINARY}.tar.gz ]; then
      #Firefox seems to be the only browser that leaves it alone
      tar xzf ${JAVA_BINARY}.tar.gz
    elif [ -f ${JAVA_BINARY}.gz ]; then
      tar xzf ${JAVA_BINARY}.gz
    elif [ -f ${JAVA_BINARY}.tar ]; then
      tar xf ${JAVA_BINARY}.tar
    elif [ -f ${JAVA_BINARY}.tar.tar ]; then
      #Internet Explorer
      tar xzf ${JAVA_BINARY}.tar.tar
    JRE_PATH="`find ${OPTDIR}/jre-syno/ -name jre`"
    [ -z ${JRE_PATH} ] && JRE_PATH=${OPTDIR}/jre-syno
    #change owner of folder tree
    chown -R root:root ${SYNOPKG_PKGDEST}
  #extract CPU-specific additional binaries
  mkdir ${SYNOPKG_PKGDEST}/bin
  [ "${OLD_JNA_NEEDED}" == "true" ] && tar xJf ${TEMP_FOLDER}/${OLD_JNA_FILE} && rm ${TEMP_FOLDER}/${OLD_JNA_FILE}

  #extract main archive
  #extract cpio archive
  cat "${TEMP_FOLDER}/${CP_EXTRACTED_FOLDER}"/${CPI_FILE} | gzip -d -c - | ${SYNOPKG_PKGDEST}/bin/cpio -i --no-preserve-owner
  echo "#uncomment to expand Java max heap size beyond prescribed value (will survive upgrades)" > ${SYNOPKG_PKGDEST}/syno_package.vars
  echo "#you probably only want more than the recommended 1024M if you're backing up extremely large volumes of files" >> ${SYNOPKG_PKGDEST}/syno_package.vars
  echo "#USR_MAX_HEAP=1024M" >> ${SYNOPKG_PKGDEST}/syno_package.vars
  echo >> ${SYNOPKG_PKGDEST}/syno_package.vars

  cp ${TEMP_FOLDER}/${CP_EXTRACTED_FOLDER}/scripts/CrashPlanEngine ${OPTDIR}/bin
  cp ${TEMP_FOLDER}/${CP_EXTRACTED_FOLDER}/scripts/run.conf ${OPTDIR}/bin
  mkdir -p ${MANIFEST_FOLDER}/backupArchives    
  #save install variables which Crashplan expects its own installer script to create
  echo BINSDIR=/bin >> ${VARS_FILE}
  echo MANIFESTDIR=${MANIFEST_FOLDER}/backupArchives >> ${VARS_FILE}
  #leave these ones out which should help upgrades from Code42 to work (based on examining an upgrade script)
  #echo INITDIR=/etc/init.d >> ${VARS_FILE}
  #echo RUNLVLDIR=/usr/syno/etc/rc.d >> ${VARS_FILE}
  echo INSTALLDATE=`date +%Y%m%d` >> ${VARS_FILE}
  [ "${WIZARD_JRE_CP}" == "true" ] && echo JAVACOMMON=${JRE_PATH}/bin/java >> ${VARS_FILE}
  [ "${WIZARD_JRE_SYS}" == "true" ] && echo JAVACOMMON=\${JAVA_HOME}/bin/java >> ${VARS_FILE}
  cat ${TEMP_FOLDER}/${CP_EXTRACTED_FOLDER}/install.defaults >> ${VARS_FILE}
  #remove temp files
  #add firewall config
  /usr/syno/bin/servicetool --install-configure-file --package /var/packages/${SYNOPKG_PKGNAME}/scripts/${SYNOPKG_PKGNAME}.sc > /dev/null
  #amend CrashPlanPROe client version
  [ "${SYNOPKG_PKGNAME}" == "CrashPlanPROe" ] && sed -i -r "s/^version=\".*(-.*$)/version=\"${CPPROE_VER}\1/" /var/packages/${SYNOPKG_PKGNAME}/INFO

  #are we transitioning an existing CrashPlan account to CrashPlan For Small Business?
  if [ "${SYNOPKG_PKGNAME}" == "CrashPlanPRO" ]; then
    if [ -e /var/packages/CrashPlan/scripts/start-stop-status ]; then
      /var/packages/CrashPlan/scripts/start-stop-status stop
      cp /var/lib/crashplan/.identity ${PUBLIC_FOLDER}/crashplan-identity.bak
      cp -R /var/packages/CrashPlan/target/conf/ ${OPTDIR}/

  exit 0

preuninst ()
  `dirname $0`/stop-start-status stop

  exit 0

postuninst ()
  if [ -f ${SYNOPKG_PKGDEST}/syno_package.vars ]; then
    source ${SYNOPKG_PKGDEST}/syno_package.vars
  [ -e ${OPTDIR}/lib/ ] && rm ${OPTDIR}/lib/

  #delete symlink if it no longer resolves - PowerPC only
  if [ ! -e /lib/ ]; then
    [ -L /lib/ ] && rm /lib/

  #remove firewall config
  if [ "${SYNOPKG_PKG_STATUS}" == "UNINSTALL" ]; then
    /usr/syno/bin/servicetool --remove-configure-file --package ${SYNOPKG_PKGNAME}.sc > /dev/null

 exit 0

preupgrade ()
  `dirname $0`/stop-start-status stop
  #if identity exists back up config
  if [ -f /var/lib/crashplan/.identity ]; then
    mkdir -p ${SYNOPKG_PKGDEST}/../${SYNOPKG_PKGNAME}_data_mig/conf
      if [ -f ${OPTDIR}/${FILE_TO_MIGRATE} ]; then
      if [ -d ${OPTDIR}/${FOLDER_TO_MIGRATE} ]; then

  exit 0

postupgrade ()
  #use the migrated identity and config data from the previous version
  if [ -f ${SYNOPKG_PKGDEST}/../${SYNOPKG_PKGNAME}_data_mig/conf/my.service.xml ]; then
      if [ -f ${SYNOPKG_PKGDEST}/../${SYNOPKG_PKGNAME}_data_mig/${FILE_TO_MIGRATE} ]; then
    if [ -d ${SYNOPKG_PKGDEST}/../${SYNOPKG_PKGNAME}_data_mig/${FOLDER_TO_MIGRATE} ]; then
    rmdir ${SYNOPKG_PKGDEST}/../${SYNOPKG_PKGNAME}_data_mig/conf
    rmdir ${SYNOPKG_PKGDEST}/../${SYNOPKG_PKGNAME}_data_mig
    #make CrashPlan log entry
    TIMESTAMP="`date "+%D %I:%M%p"`"
    echo "I ${TIMESTAMP} Synology Package Center updated ${SYNOPKG_PKGNAME} to version ${SYNOPKG_PKGVER}" >> ${LOG_FILE}
  exit 0


#--------CRASHPLAN start-stop-status script
#--------package maintained at

TEMP_FOLDER="`find / -maxdepth 2 -path '/volume?/@tmp' | head -n 1`"
MANIFEST_FOLDER="/`echo $TEMP_FOLDER | cut -f2 -d'/'`/crashplan" 
PKG_FOLDER="`dirname $0 | cut -f1-4 -d'/'`"
DNAME="`dirname $0 | cut -f4 -d'/'`"
JAVA_MIN_HEAP=`grep "^${CFG_PARAM}=" "${OPTDIR}/bin/${ENGINE_CFG}" | sed -r "s/^.*-Xms([0-9]+)[Mm] .*$/\1/"` 
SYNO_CPU_ARCH="`uname -m`"
TIMESTAMP="`date "+%D %I:%M%p"`"
source ${OPTDIR}/install.vars
source /etc/profile
source /root/.profile

start_daemon ()
  #check persistent variables from syno_package.vars
  if [ -f ${OPTDIR}/syno_package.vars ]; then
    source ${OPTDIR}/syno_package.vars
  USR_MAX_HEAP=`echo $USR_MAX_HEAP | sed -e "s/[mM]//"`

  #do we need to restore the identity file - has a DSM upgrade scrubbed /var/lib/crashplan?
  if [ ! -e /var/lib/crashplan ]; then
    mkdir /var/lib/crashplan
    [ -e ${OPTDIR}/conf/var-backup/.identity ] && cp ${OPTDIR}/conf/var-backup/.identity /var/lib/crashplan/

  #fix up some of the binary paths and fix some command syntax for busybox 
  #moved this to from because Code42 push updates and these
  #new scripts will need this treatment too
  find ${OPTDIR}/ -name "*.sh" | while IFS="" read -r FILE_TO_EDIT; do
    if [ -e ${FILE_TO_EDIT} ]; then
      #this list of substitutions will probably need expanding as new CrashPlan updates are released
      sed -i "s%^#!/bin/bash%#!/bin/sh%" "${FILE_TO_EDIT}"
      sed -i -r "s%(^\s*)(/bin/cpio |cpio ) %\1/${OPTDIR}/bin/cpio %" "${FILE_TO_EDIT}"
      sed -i -r "s%(^\s*)(/bin/ps|ps) [^w][^\|]*\|%\1/bin/ps w \|%" "${FILE_TO_EDIT}"
      sed -i -r "s%\`ps [^w][^\|]*\|%\`ps w \|%" "${FILE_TO_EDIT}"
      sed -i -r "s%^ps [^w][^\|]*\|%ps w \|%" "${FILE_TO_EDIT}"
      sed -i "s/rm -fv/rm -f/" "${FILE_TO_EDIT}"
      sed -i "s/mv -fv/mv -f/" "${FILE_TO_EDIT}"

  #use this daemon init script rather than the unreliable Code42 stock one which greps the ps output
  sed -i "s%^ENGINE_SCRIPT=.*$%ENGINE_SCRIPT=$0%" ${OPTDIR}/bin/

  #any downloaded upgrade script will usually have failed despite the above changes
  #so ignore the script and explicitly extract the new java code using the method 
  #thanks to Jeff Bingham for tweaks 
  UPGRADE_JAR=`find ${OPTDIR}/upgrade -maxdepth 1 -name "*.jar" | tail -1`
  if [ -n "${UPGRADE_JAR}" ]; then
    rm ${OPTDIR}/*.pid > /dev/null
    #make CrashPlan log entry
    echo "I ${TIMESTAMP} Synology extracting upgrade from ${UPGRADE_JAR}" >> ${DLOG}

    UPGRADE_VER=`echo ${SCRIPT_HOME} | sed -r "s/^.*\/([0-9_]+)\.[0-9]+/\1/"`
    #DSM 6.0 no longer includes unzip, use 7z instead
    unzip -o ${OPTDIR}/upgrade/${UPGRADE_VER}.jar "*.jar" -d ${OPTDIR}/lib/ || 7z e -y ${OPTDIR}/upgrade/${UPGRADE_VER}.jar "*.jar" -o${OPTDIR}/lib/ > /dev/null
    unzip -o ${OPTDIR}/upgrade/${UPGRADE_VER}.jar "lang/*" -d ${OPTDIR} || 7z e -y ${OPTDIR}/upgrade/${UPGRADE_VER}.jar "lang/*" -o${OPTDIR} > /dev/null
    mv ${UPGRADE_JAR} ${TEMP_FOLDER}/ > /dev/null
    exec $0

  #updates may also overwrite our native binaries
  [ -e ${OPTDIR}/bin/ ] && cp -f ${OPTDIR}/bin/ ${OPTDIR}/lib/
  [ -e ${OPTDIR}/bin/ ] && cp -f ${OPTDIR}/bin/ ${OPTDIR}/
  [ -e ${OPTDIR}/bin/jna-3.2.5.jar ] && cp -f ${OPTDIR}/bin/jna-3.2.5.jar ${OPTDIR}/lib/
  if [ -e ${OPTDIR}/bin/jna.jar ] && [ -e ${OPTDIR}/lib/jna.jar ]; then
    cp -f ${OPTDIR}/bin/jna.jar ${OPTDIR}/lib/

  #create or repair symlink if a DSM upgrade has removed it - PowerPC only
  if [ -e ${OPTDIR}/lib/ ]; then
    if [ ! -e /lib/ ]; then
      #if it doesn't exist, but is still a link then it's a broken link and should be deleted first
      [ -L /lib/ ] && rm /lib/
      ln -s ${OPTDIR}/lib/ /lib/

  #set appropriate Java max heap size
  RAM=$((`free | grep Mem: | sed -e "s/^ *Mem: *\([0-9]*\).*$/\1/"`/1024))
  if [ $RAM -le 128 ]; then
  elif [ $RAM -le 256 ]; then
  elif [ $RAM -le 512 ]; then
  elif [ $RAM -le 1024 ]; then
  elif [ $RAM -gt 1024 ]; then
  if [ $USR_MAX_HEAP -gt $JAVA_MAX_HEAP ]; then
  if [ $JAVA_MAX_HEAP -lt $JAVA_MIN_HEAP ]; then
    #can't have a max heap lower than min heap (ARM low RAM systems)
  sed -i -r "s/(^${CFG_PARAM}=.*) -Xmx[0-9]+[mM] (.*$)/\1 -Xmx${JAVA_MAX_HEAP}m \2/" "${OPTDIR}/bin/${ENGINE_CFG}"
  #disable the use of the x86-optimized external Fast MD5 library if running on ARM and PPC CPUs
  #seems to be the default behaviour now but that may change again
  [ "${SYNO_CPU_ARCH}" == "x86_64" ] && SYNO_CPU_ARCH="i686"
  if [ "${SYNO_CPU_ARCH}" != "i686" ]; then
    grep "^${CFG_PARAM}=.*c42\.native\.md5\.enabled" "${OPTDIR}/bin/${ENGINE_CFG}" > /dev/null \
     || sed -i -r "s/(^${CFG_PARAM}=\".*)\"$/\1 -Dc42.native.md5.enabled=false\"/" "${OPTDIR}/bin/${ENGINE_CFG}"

  #move the Java temp directory from the default of /tmp
  grep "^${CFG_PARAM}=.*Djava\.io\.tmpdir" "${OPTDIR}/bin/${ENGINE_CFG}" > /dev/null \
   || sed -i -r "s%(^${CFG_PARAM}=\".*)\"$%\1${TEMP_FOLDER}\"%" "${OPTDIR}/bin/${ENGINE_CFG}"

  #now edit the XML config file, which only exists after first run
  if [ -f ${OPTDIR}/conf/my.service.xml ]; then

    #allow direct connections from CrashPlan Desktop client on remote systems
    #you must edit the value of serviceHost in conf/ on the client you connect with
    #users report that this value is sometimes reset so now it's set every service startup 
    sed -i "s/<serviceHost>127\.0\.0\.1<\/serviceHost>/<serviceHost>0\.0\.0\.0<\/serviceHost>/" "${OPTDIR}/conf/my.service.xml"
    #default changed in CrashPlan 4.3
    sed -i "s/<serviceHost>localhost<\/serviceHost>/<serviceHost>0\.0\.0\.0<\/serviceHost>/" "${OPTDIR}/conf/my.service.xml"
    #since CrashPlan 4.4 another config file to allow remote console connections
    sed -i "s/127\.0\.0\.1/0\.0\.0\.0/" /var/lib/crashplan/.ui_info
    #this change is made only once in case you want to customize the friends' backup location
    if [ "${MANIFEST_PATH_SET}" != "True" ]; then

      #keep friends' backup data outside the application folder to make accidental deletion less likely 
      sed -i "s%<manifestPath>.*</manifestPath>%<manifestPath>${MANIFEST_FOLDER}/backupArchives/</manifestPath>%" "${OPTDIR}/conf/my.service.xml"
      echo "MANIFEST_PATH_SET=True" >> ${OPTDIR}/syno_package.vars
    fi

    #since CrashPlan version 3.5.3 the value javaMemoryHeapMax also needs setting to match that used in bin/run.conf
    sed -i -r "s%(<javaMemoryHeapMax>)[0-9]+[mM](</javaMemoryHeapMax>)%\1${JAVA_MAX_HEAP}m\2%" "${OPTDIR}/conf/my.service.xml"

    #make sure CrashPlan is not binding to the IPv6 stack
    grep "\-Djava\.net\.preferIPv4Stack=true" "${OPTDIR}/bin/${ENGINE_CFG}" > /dev/null \
     || sed -i -r "s/(^${CFG_PARAM}=\".*)\"$/\1 -Djava.net.preferIPv4Stack=true\"/" "${OPTDIR}/bin/${ENGINE_CFG}"
    echo "Check the package log to ensure the package has started successfully, then stop and restart the package to allow desktop client connections." > "${SYNOPKG_TEMP_LOGFILE}"
  fi

  #increase the system-wide maximum number of open files from Synology default of 24466
  [ `cat /proc/sys/fs/file-max` -lt 65536 ] && echo "65536" > /proc/sys/fs/file-max

  #raise the maximum open file count from the Synology default of 1024 - thanks Casper K. for figuring this out
  ulimit -n 65536

  #ensure that Code 42 have not amended install.vars to force the use of their own (Intel) JRE
  if [ -e ${OPTDIR}/jre-syno ]; then
    JRE_PATH="`find ${OPTDIR}/jre-syno/ -name jre`"
    [ -z ${JRE_PATH} ] && JRE_PATH=${OPTDIR}/jre-syno
    sed -i -r "s|^(JAVACOMMON=).*$|\1\${JRE_PATH}/bin/java|" ${OPTDIR}/install.vars
    #if missing, set timezone and locale for dedicated JRE   
    if [ -z ${TZ} ]; then
      SYNO_TZ=`cat /etc/synoinfo.conf | grep timezone | cut -f2 -d'"'`
      #fix for DST time in DSM 5.2 thanks to MinimServer Syno package author
      [ -e /usr/share/zoneinfo/Timezone/synotztable.json ] \
       && SYNO_TZ=`jq ".${SYNO_TZ} | .nameInTZDB" /usr/share/zoneinfo/Timezone/synotztable.json | sed -e "s/\"//g"` \
       || SYNO_TZ=`grep "^${SYNO_TZ}" /usr/share/zoneinfo/Timezone/tzname | sed -e "s/^.*= //"`
      export TZ=${SYNO_TZ}
    fi
    [ -z ${LANG} ] && export LANG=en_US.utf8
    export CLASSPATH=.:${OPTDIR}/jre-syno/lib

  else
    #no dedicated JRE present - point CrashPlan at the system Java
    sed -i -r "s|^(JAVACOMMON=).*$|\1\${JAVA_HOME}/bin/java|" ${OPTDIR}/install.vars
  fi

  source ${OPTDIR}/bin/run.conf
  source ${OPTDIR}/install.vars
  cd ${OPTDIR}
  $JAVACOMMON $SRV_JAVA_OPTS -classpath $FULL_CP com.backup42.service.CPService > ${OPTDIR}/log/engine_output.log 2> ${OPTDIR}/log/engine_error.log &
  if [ $! -gt 0 ]; then
    echo $! > $PID_FILE
    renice 19 $! > /dev/null
    if [ -z "${SYNOPKG_PKGDEST}" ]; then
      #script was manually invoked, need this to show status change in Package Center      
      [ -e ${PKG_FOLDER}/enabled ] || touch ${PKG_FOLDER}/enabled
    fi
  else
    echo "${DNAME} failed to start, check ${OPTDIR}/log/engine_error.log" > "${SYNOPKG_TEMP_LOGFILE}"
    echo "${DNAME} failed to start, check ${OPTDIR}/log/engine_error.log" >&2
    exit 1
  fi

stop_daemon ()
{
  echo "I ${TIMESTAMP} Stopping ${DNAME}" >> ${DLOG}
  kill `cat ${PID_FILE}`
  wait_for_status 1 20 || kill -9 `cat ${PID_FILE}`
  rm -f ${PID_FILE}
  if [ -z ${SYNOPKG_PKGDEST} ]; then
    #script was manually invoked, need this to show status change in Package Center
    [ -e ${PKG_FOLDER}/enabled ] && rm ${PKG_FOLDER}/enabled
  fi

  #backup identity file in case DSM upgrade removes it
  [ -e ${OPTDIR}/conf/var-backup ] || mkdir ${OPTDIR}/conf/var-backup
  cp /var/lib/crashplan/.identity ${OPTDIR}/conf/var-backup/
}

daemon_status ()
{
  if [ -f ${PID_FILE} ] && kill -0 `cat ${PID_FILE}` > /dev/null 2>&1; then
    return 0
  fi
  rm -f ${PID_FILE}
  return 1
}

wait_for_status ()
{
  counter=$2
  while [ ${counter} -gt 0 ]; do
    daemon_status
    [ $? -eq $1 ] && return
    let counter=counter-1
    sleep 1
  done
  return 1
}

case $1 in
  start)
    if daemon_status; then
      echo ${DNAME} is already running with PID `cat ${PID_FILE}`
      exit 0
    else
      echo Starting ${DNAME} ...
      start_daemon
      exit $?
    fi
    ;;

  stop)
    if daemon_status; then
      echo Stopping ${DNAME} ...
      stop_daemon
      exit $?
    else
      echo ${DNAME} is not running
      exit 0
    fi
    ;;

  restart)
    stop_daemon
    start_daemon
    exit $?
    ;;

  status)
    if daemon_status; then
      echo ${DNAME} is running with PID `cat ${PID_FILE}`
      exit 0
    else
      echo ${DNAME} is not running
      exit 1
    fi
    ;;

  log)
    echo "${DLOG}"
    exit 0
    ;;

  *)
    echo "Usage: $0 {start|stop|status|restart}" >&2
    exit 1
    ;;
esac
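The heap-sizing logic near the top of the script follows a simple RAM ladder: measure total system memory, then pick a Java max heap one step down from it. As a sketch of that pattern (the megabyte thresholds and heap values below are illustrative assumptions, not necessarily the package's real figures):

```shell
#!/bin/sh
# Sketch of a RAM-based heap ladder like the one in the script above.
# The MB thresholds and heap sizes are illustrative assumptions only.
pick_max_heap ()
{
  RAM=$1
  if [ $RAM -le 128 ]; then
    JAVA_MAX_HEAP=80
  elif [ $RAM -le 256 ]; then
    JAVA_MAX_HEAP=192
  elif [ $RAM -le 512 ]; then
    JAVA_MAX_HEAP=384
  elif [ $RAM -le 1024 ]; then
    JAVA_MAX_HEAP=512
  else
    JAVA_MAX_HEAP=1024
  fi
  echo ${JAVA_MAX_HEAP}
}

# e.g. a 256MB syno would get a 192MB heap with these example values
pick_max_heap 256
```

The chosen value is then substituted into the engine config's -Xmx argument with sed, which is why the script also caps it at the user-specified and minimum heap bounds first.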


install_uifile & upgrade_uifile

[{
    "step_title": "Client Version Selection",
    "items": [{
        "type": "singleselect",
        "desc": "Please select the CrashPlanPROe client version that is appropriate for your backup destination server:",
        "subitems": [
            { "key": "WIZARD_VER_483", "desc": "4.8.3", "defaultValue": true },
            { "key": "WIZARD_VER_480", "desc": "4.8.0", "defaultValue": false },
            { "key": "WIZARD_VER_470", "desc": "4.7.0", "defaultValue": false },
            { "key": "WIZARD_VER_460", "desc": "4.6.0", "defaultValue": false },
            { "key": "WIZARD_VER_452", "desc": "4.5.2", "defaultValue": false },
            { "key": "WIZARD_VER_450", "desc": "4.5.0", "defaultValue": false },
            { "key": "WIZARD_VER_441", "desc": "4.4.1", "defaultValue": false },
            { "key": "WIZARD_VER_430", "desc": "4.3.0", "defaultValue": false },
            { "key": "WIZARD_VER_420", "desc": "4.2.0", "defaultValue": false },
            { "key": "WIZARD_VER_370", "desc": "3.7.0", "defaultValue": false },
            { "key": "WIZARD_VER_364", "desc": "3.6.4", "defaultValue": false },
            { "key": "WIZARD_VER_363", "desc": "3.6.3", "defaultValue": false },
            { "key": "WIZARD_VER_3614", "desc": "", "defaultValue": false },
            { "key": "WIZARD_VER_353", "desc": "3.5.3", "defaultValue": false },
            { "key": "WIZARD_VER_341", "desc": "3.4.1", "defaultValue": false },
            { "key": "WIZARD_VER_33", "desc": "3.3", "defaultValue": false }
        ]
    }]
}, {
    "step_title": "Java Runtime Environment Selection",
    "items": [{
        "type": "singleselect",
        "desc": "Please select the Java version which you would like CrashPlan to use:",
        "subitems": [
            { "key": "WIZARD_JRE_SYS", "desc": "Default system Java version", "defaultValue": false },
            { "key": "WIZARD_JRE_CP", "desc": "Dedicated installation of Java 8", "defaultValue": true }
        ]
    }]
}]


  • 0047 30/Oct/17 – Updated dedicated Java version to 8 update 151, added support for additional Intel CPUs in x18 Synology products.
  • 0046 26/Aug/17 – Updated to CrashPlan PRO 4.9, added support for migration from CrashPlan For Home to CrashPlan For Small Business (CrashPlan PRO). Please read the Migration section on this page for instructions.
  • 0045 02/Aug/17 – Updated to CrashPlan 4.8.3, updated dedicated Java version to 8 update 144
  • 0044 21/Jan/17 – Updated dedicated Java version to 8 update 121
  • 0043 07/Jan/17 – Updated dedicated Java version to 8 update 111, added support for Intel Broadwell and Grantley CPUs
  • 0042 03/Oct/16 – Updated to CrashPlan 4.8.0, Java 8 is now required, added optional dedicated Java 8 Runtime instead of the default system one including 64bit Java support on 64 bit Intel CPUs to permit memory allocation larger than 4GB. Support for non-Intel platforms withdrawn owing to Code42’s reliance on proprietary native code library
  • 0041 20/Jul/16 – Improved auto-upgrade compatibility (hopefully), added option to have CrashPlan use a dedicated Java 7 Runtime instead of the default system one, including 64bit Java support on 64 bit Intel CPUs to permit memory allocation larger than 4GB
  • 0040 25/May/16 – Added cpio to the path in the running context of
  • 0039 25/May/16 – Updated to CrashPlan 4.7.0, at each launch forced the use of the system JRE over the CrashPlan bundled Intel one, added Maven build of JNA 4.1.0 for ARMv7 systems consistent with the version bundled with CrashPlan
  • 0038 27/Apr/16 – Updated to CrashPlan 4.6.0, and improved support for Code 42 pushed updates
  • 0037 21/Jan/16 – Updated to CrashPlan 4.5.2
  • 0036 14/Dec/15 – Updated to CrashPlan 4.5.0, separate firewall definitions for management client and for friends backup, added support for DS716+ and DS216play
  • 0035 06/Nov/15 – Fixed the update to 4.4.1_59, new installs now listen for remote connections after second startup (was broken from 4.4), updated client install documentation with more file locations and added a link to a new Code42 support doc
    EITHER completely remove and reinstall the package (which will require a rescan of the entire backup set) OR alternatively please delete all except for one of the failed upgrade numbered subfolders in /var/packages/CrashPlan/target/upgrade before upgrading. There will be one folder for each time CrashPlan tried and failed to start since Code42 pushed the update
  • 0034 04/Oct/15 – Updated to CrashPlan 4.4.1, bundled newer JNA native libraries to match those from Code42, PLEASE READ UPDATED BLOG POST INSTRUCTIONS FOR CLIENT INSTALL this version introduced yet another requirement for the client
  • 0033 12/Aug/15 – Fixed version 0032 client connection issue for fresh installs
  • 0032 12/Jul/15 – Updated to CrashPlan 4.3, PLEASE READ UPDATED BLOG POST INSTRUCTIONS FOR CLIENT INSTALL this version introduced an extra requirement, changed update repair to use the method, forced CrashPlan to prefer IPv4 over IPv6 bindings, removed some legacy version migration scripting, updated main blog post documentation
  • 0031 20/May/15 – Updated to CrashPlan 4.2, cross compiled a newer cpio binary for some architectures which were segfaulting while unpacking main CrashPlan archive, added port 4242 to the firewall definition (friend backups), package is now signed with repository private key
  • 0030 16/Feb/15 – Fixed show-stopping issue with version 0029 for systems with more than one volume
  • 0029 21/Jan/15 – Updated to CrashPlan version 3.7.0, improved detection of temp folder (prevent use of /var/@tmp), added support for Annapurna Alpine AL514 CPU (armhf) in DS2015xs, added support for Marvell Armada 375 CPU (armhf) in DS215j, abandoned practical efforts to try to support Code42’s upgrade scripts, abandoned inotify support (realtime backup) on PowerPC after many failed attempts with self-built and pre-built jtux and jna libraries, back-merged older libffi support for old PowerPC binaries after it was removed in 0028 re-write
  • 0028 22/Oct/14 – Substantial re-write:
    Updated to CrashPlan version 3.6.4
    DSM 5.0 or newer is now required
    JNA taken from the Debian JNA 3.2.7 package with a dependency on a newer libffi (included in DSM 5.0)
    jna-3.2.5.jar emptied of irrelevant CPU architecture libs to reduce size
    Increased default max heap size from 512MB to 1GB on systems with more than 1GB RAM
    Intel CPUs no longer need the awkward glibc version-faking shim to enable inotify support (for real-time backup)
    Switched to using root account – no more adding account permissions for backup, package upgrades will no longer break this
    DSM Firewall application definition added
    Tested with DSM Task Scheduler to allow backups between certain times of day only, saving RAM when not in use
    Daemon init script now uses a proper PID file instead of Code42’s unreliable method of using grep on the output of ps
    Daemon init script can be run from the command line
    Removal of bash binary dependency now Code42’s CrashPlanEngine script is no longer used
    Removal of nice binary dependency, using BusyBox equivalent renice
    Unified ARMv5 and ARMv7 external binary package (armle)
    Added support for Mindspeed Comcerto 2000 CPU (comcerto2k – armhf) in DS414j
    Added support for Intel Atom C2538 (avoton) CPU in DS415+
    Added support to choose which version of CrashPlan PROe client to download, since some servers may still require legacy versions
    Switched to .tar.xz compression for native binaries to reduce web hosting footprint
  • 0027 20/Mar/14 – Fixed open file handle limit for very large backup sets (ulimit fix)
  • 0026 16/Feb/14 – Updated all CrashPlan clients to version 3.6.3, improved handling of Java temp files
  • 0025 30/Jan/14 – glibc version shim no longer used on Intel Synology models running DSM 5.0
  • 0024 30/Jan/14 – Updated to CrashPlan PROe and added support for PowerPC 2010 Synology models running DSM 5.0
  • 0023 30/Jan/14 – Added support for Intel Atom Evansport and Armada XP CPUs in new DSx14 products
  • 0022 10/Jun/13 – Updated all CrashPlan client versions to 3.5.3, compiled native binary dependencies to add support for Armada 370 CPU (DS213j), now updates the new javaMemoryHeapMax value in my.service.xml to the value defined in syno_package.vars
  • 0021 01/Mar/13 – Updated CrashPlan to version 3.5.2
  • 0020 21/Jan/13 – Fixes for DSM 4.2
  • 018 Updated CrashPlan PRO to version 3.4.1
  • 017 Updated CrashPlan and CrashPlan PROe to version 3.4.1, and improved in-app update handling
  • 016 Added support for Freescale QorIQ CPUs in some x13 series Synology models, and installer script now downloads native binaries separately to reduce repo hosting bandwidth, PowerQUICC PowerPC processors in previous Synology generations with older glibc versions are not supported
  • 015 Added support for easy scheduling via cron – see updated Notes section
  • 014 DSM 4.1 user profile permissions fix
  • 013 implemented update handling for future automatic updates from Code 42, and incremented CrashPlanPRO client to release version 3.2.1
  • 012 incremented CrashPlanPROe client to release version 3.3
  • 011 minor fix to allow a wildcard on the cpio archive name inside the main installer package (to fix CP PROe client since Code 42 Software had amended the cpio file version to
  • 010 minor bug fix relating to daemon home directory path
  • 009 rewrote the scripts to be even easier to maintain and unified as much as possible with my imminent CrashPlan PROe server package, fixed a timezone bug (tightened regex matching), moved the script-amending logic from to with it now applying to all .sh scripts each startup so perhaps updates from Code42 might work in future, if wget fails to fetch the installer from Code42 the installer will look for the file in the public shared folder
  • 008 merged the 14 package scripts each (7 for ARM, 7 for Intel) for CP, CP PRO, & CP PROe – 42 scripts in total – down to just two! ARM & Intel are now supported by the same package, Intel synos now have working inotify support (Real-Time Backup) thanks to rwojo’s shim to pass the glibc version check, upgrade process now retains login, cache and log data (no more re-scanning), users can specify a persistent larger max heap size for very large backup sets
  • 007 fixed a bug that broke CrashPlan if the Java folder moved (if you changed version)
  • 006 installation now fails without User Home service enabled, fixed Daylight Saving Time support, automated replacing the ARM symlink which is destroyed by DSM upgrades, stopped assuming the primary storage volume is /volume1, reset ownership on /var/lib/crashplan and the Friends backup location after installs and upgrades
  • 005 added warning to restart daemon after 1st run, and improved upgrade process again
  • 004 updated to CrashPlan 3.2.1 and improved package upgrade process, forced binding to each startup
  • 003 fixed ownership of /volume1/crashplan folder
  • 002 updated to CrashPlan 3.2
  • 001 30/Jan/12 – initial public release

6,670 thoughts on “CrashPlan packages for Synology NAS”

  1. frillen

    Thank you seems to work perfect.

    I already have 400 GB uploaded to my CrashPlan account. All 400 GB is located on my DS209 – but I used CrashPlan on a Mac Pro to upload.

    In this way I had to

    1. have one computer open to upload files
    2. worry about my wifi connection on my MacBook. I don’t have the best connection all over the house, so when I’m in the kitchen I’d upload at 400 kb/s, near my router at 850 kb/s.

    I say thanks.

    I just adopted my old profile. It seems to work.

    I’ll let you know!

    1. patters Post author

      Great, thanks for the feedback and glad it’s all worked out. I was going nuts yesterday trying to debug my scripts! Just couldn’t accept defeat :)

      1. frillen

        The adopting process seems to work great.

        Here you have a screendump of the adopt process – seems to take a day or so but at the moment it work on a fair speed – even then I only have a 1mb upload connection.

        The missing folders are the “old” folders. The ones at the top are the new ones.

        I report back when it is done….

      2. patters Post author

        I guess it must just be re-checksumming all the files, and then maybe re-encrypting them at the CrashPlan Central end with the new key the NAS is using. How’s the NAS CPU use?

  2. Adam

    Thanks for the package, loaded up nice and easy and your steps to connect via the desktop client worked perfectly

    Hoping you may be able to help with something, I have the package running obviously and everything is peachy there and my NAS is backing up to the Crashplan cloud

    What I have then attempted to do is run crashplan on my desktop to backup files to my NAS ie the backup process will work “folder A on desktop”>NAS>Cloud. The desktop client sees the NAS as an available “computer” and I can start the backup but then the client and the NAS client report the following error:

    “Destination unavailable – backup location is not accessible”

    I have checked that the “crashplan” account has permissions to the backup directory, I have also tried changing to an alternate backup location without success

    What is interesting is that the client says that the connection has been up for 45mins, so it seems to definitely be connecting to the NAS, which makes it really seem permission related etc

    Any ideas?

    1. patters Post author

      Sorry, not really sure. The log which I displayed in the package’s Log tab is actually the activity history. You could also try looking at the two engine logs, which you’ll need to use an SSH session to see. They are:

      I’m also not sure how the backup engine will cope with having a smaller than standard RAM allocation. I use a 256MB system (192MB for Java), and it could be that a 128MB syno (80MB for Java) is just not enough.

    2. vomesh

      I encountered the same error but was able to fix it. Once I created the “crashplan” share and restarted the CrashPlan app on the NAS it auto-created the “backupArchives” folder as documented above.

      However, for me when trying to use that as the path for sharing I got the error you got. I simply used the CrashPlan GUI to configure the NAS to point to the root of “/volume1/crashplan”. Once I made this change I was able to connect to the NAS from other computers and begin backups with no errors.

      I will say my fix may not be the most graceful and can probably be fixed in some other way. I am just happy to have a working solution for me and hope it helps others. Patters I greatly thank you for making this package.

  3. Aidan

    Really pleased that you have created a package! Thanks for all your efforts.

    It should make this so much easier to install in future

  4. Jeppe


    Great to see a package!

    What if the Crashplan client gets an upgrade?

    Will it be enough to just remove and re-install the package?

    Best regards

    1. patters Post author

      Yes, that should be all that’s required, then you’d use the Adopt Computer option in the client.

      Did any of you guys use the Intel package? I still have no idea if it works…

      1. Marc

        How often does CrashPlan push a client upgrade and how would we know that this has occurred so that we can go through the re-install + adopt process?

        PS – This is awesome! thank you for sharing.

      2. patters Post author

        It seems to have been on 3.0.3 for a while now. I think I read somewhere that they might not instigate auto-updates of headless engine installs since they had a lot of support requests to deal with last time they tried that. When a new one comes out, let me know on here and I’ll have to update the package.

      3. Kais

        The engine did not accept my login so I reinstalled it on the crashplan.
        It then updated to version 3.15.2012 and stopped.
        I cannot make it work again.
        Can you help?
        By the way thanks for the sharing and instructions, they are great!

  5. Chuck

    Wow, this may be exactly what is tipping me back to buying a Synology. Since you are running the DS111, I guess I can then assume that this will work on the DS411 (since it is supposed to be the same processor, just with more memory). Does anyone know if it will run on the DS411j, despite the memory being less? I thought about getting the DS411+II but then decided it was too expensive for my taste considering I am already going to buy a mac mini server to replace my hackintosh. I was going to buy an external RAID box, but then it becomes convoluted since the choices for RAID boxes aren’t great… and the mac now has a thunderbolt connector which has not yet seen wide adoption.
    Anyway… thanks for doing this! I personally think Synology and/or Crashplan should just partner on this… it makes for a killer home solution!

    1. patters Post author

      There’s a lot of cool stuff to run on these Synology products now, but RAM is the biggest limiting factor. You often have to choose which packages are active. I’d avoid the J series for that reason. I wish there was a DIMM slot in there, or at the very least some pads we could DIY solder more RAM onto…

  6. spiderv6

    Got it running on my 1511+

    Install instructions were perfect – no issues.

    14.9GB on its way to Crashplan right now!

    Thanks so much!

      1. Rob

        I don’t know if there have been any updates to break the critical links, or if I’m doing something very simple wrong.

        I have a DS1511+ with DSM up to date (4.2) and I added Java SE for Embedded 6 (1.6.0_38-0017) per your package without incident, but when it comes time to install CrashPlan 3.5.2-0021, it tells me that “a shared folder called ‘public’ could not be found…” Obviously I created the shared folder using the control panel using lower case letters and set every privilege option to read/write (btw: java worked with the download saved there).

        I appreciate any help, as this is very frustrating.

        — Rob

      2. Rob

        Thank you for the help and quick reply. I disabled and re-enabled the windows file service from Control Panel…Win/Mac/NFS. Tried it several times with many reboots and no change in CrashPlan install behavior.

        — Rob

      3. Rob

        Unfortunately, I still haven’t had any luck installing your package. I was wondering if you might have any other suggestions. I appreciate any help you could provide. Thank you.

        — Rob

      4. patters Post author

        That looks fine to me. Try running this (which is how my CrashPlan script locates the public folder):
        cat /usr/syno/etc/smb.conf | sed -r '/\/public$/!d;s/^.*path=(\/volume[0-9]{1,4}\/public).*$/\1/'
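
        If you want to sanity-check that sed expression away from the NAS, you can feed it a fabricated smb.conf line instead of the real file (the share path below is just an example):

        ```shell
        # Pipe a fabricated smb.conf share definition through the same sed
        # expression; non-matching lines are deleted, and a matching one is
        # reduced to just the captured share path.
        echo "path=/volume1/public" \
         | sed -r '/\/public$/!d;s/^.*path=(\/volume[0-9]{1,4}\/public).*$/\1/'
        # prints: /volume1/public
        ```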

      5. Rob

        It doesn’t appear to do anything. No error report, no response.

        I’ve seen similar behavior trying to install other packages. I’ll get to a certain point in a tutorial, and some steps just don’t seem to have any effect.

        Thanks again for your patience. There’s probably something simple, but I’m too much of a Linux noob to really troubleshoot the problem. I guess I only know enough to get myself in trouble. So far, my failure record has included: 1.) manual install of CrashPlan, 2.) compile/install of TVHeadend 3.4, 3.) compile/install of HDHomeRun drivers, and 4.) trying to set up a SQL server to host a common XBMC library database for several clients.

        Take care.

      6. patters Post author

        Hmm. There’s definitely something different about your NAS – can you try that same command but swap sed for /bin/sed

      7. Rob

        Ok, So… I tried that, and upon executing the bootstrap script, it returned: “cd: line 5: can’t cd to bootstrap”

        BTW: My /root/.profile file has the “PATH=…” and “export PATH” lines commented out. Could that be a reason it can’t execute system commands?


      8. patters Post author

        Ok – uncomment the ‘PATH=’ and ‘export PATH’ lines at the top. Restart your NAS, then uninstall and reinstall the Java package. That should fix the missing timezone part at the bottom (‘TZ=’). Then you should be ok.

      9. Rob


        Here’s what worked…
        – I un-commented the “Path” and “export Path” lines in /root/.profile and /etc/profile .
        — That allowed your search command to work in ssh, but the install package still couldn’t find the public folder.
        — It also allowed me to uninstall the Bootstrap which I couldn’t previously do.
        – I then uninstalled java SE, but when I went to reinstall it told me that it was still installed at /volume1/@appstore/java6 . Of course that folder didn’t exist, but I found that this path was being called out in the two profile files
        — So, I commented out the java lines in those files, and got java to reinstall
        – Lastly, your package worked perfectly, as designed, at last.

        Thanks again for all your help. You’ve been incredibly patient.

  7. Diaoul

    @patters: Do you think you can make your packages available in spksrc ?

    I understand that what you’re doing is mostly packaging and not cross compilation but spksrc aims to provide a unified way to build SPKs as well as SPK’s source code versioning.

    Also, how have you managed to read all the licensing stuff of Java? Is that allowed by this license to distribute java SPKs?

    1. patters Post author

      Hi, I’ll take a look at that but it looks like a significant time investment to re-do everything so I don’t think it’s likely I’m afraid.
      For Java, there shouldn’t be any licensing issue because I’m not distributing a single byte of the JRE, only scripts to install it. The user has to provide it, and to do so they must independently agree to the Oracle licence themselves.

      1. Diaoul

        Ok nevermind, I think i’ll just look in your code and see what I can grab from there that would fit in spksrc (and put credits of course)

        In case you make other SPKs, please consider using spksrc for that. This is a very easy to use framework.
        For example to add a package : (PLIST being generated automatically but yet require some manual little changes)
        To compile it just “cd cross/lame && make ARCH=88f6281” (everything is handle automatically, from toolchain download to built binary)

        And a sample commit that adds a SPK :
        “cd spk/mpd && make ARCH=88f6281” build automatically an installable SPK

        Don’t hesitate to contact me (email or GitHub) in case you wish to contribute :)

      2. patters Post author

        Thanks. Can I ask that you please don’t make alternate versions of the packages I’ve already done, unless of course I give up maintaining them in future, as that would get pretty confusing for people. As you can see, I’ve tended to focus on Java apps. It makes sense to keep these on the same repo as the Java spks themselves.

      3. frillen

        Agreed – why change anything? This works, and it can’t be more simple: you add patters’ URL to DSM and then the installation is just one click away.

        So, nice :)

        And thanks

  8. eff_cee

    Thanks for the guide – I’ve followed it and CrashPlan is running OK on my DS110j. My problem is I cannot connect to it using the desktop client on another machine. From your notes you should simply be able to change the IP address in the ui.config file – is this all you did? Did you have to set up the SSH tunnel too?

    When I do that and start the desktop client – it never manages to connect to the service on the syno. I’ve validated that the service on the syno is listening on the default port.

    Anything I’ve missed ?

      1. Fraser

        Hi patters, yes I did and tried rebooting too. I’m still confused re the ssh tunneling thing, do I need to do that to connect the desktop client to syno service to be able to configure it ? Or should I simply be able to change the IP address as per your guide ?

      2. patters Post author

        The tunnelling thing is unnecessarily complicated if you’re on the same network as the syno. You should simply be able to make the edit to that file on the client. It is possible though, that the 80MB RAM allocation that is necessary on the J series NAS is just not enough (CrashPlan is designed to use 512MB). Perhaps someone else can comment if they have it running on a J series.

      3. Fraser

        Well, got it working finally, but I still had to use the ssh tunnelling method.

        Without the ssh tunnel, this is what happens when I test with telnet using Win 7 64 pc (where desktop client installed):

        telnet ipadd-of-syno 4242 gives me “connection refused”

        telnet ipadd-of-syno 4243 I do get a connection but with loads of strange chars of which I can make out only DHPublicKeyMessage

        Would be nice to get it working without the ssh tunnel :-)

    1. patters Post author

      Yep, that was all taken care of in my Java package right from the beginning. That’s why you have to fetch the syno toolchain for it.

  9. Guillou

    My Synology DS211j is actually running CrashPlan.
    About 60 MB of RAM and 70% CPU load for the whole NAS.
    In CrashPlan, CPU usage is set to 70% when the NAS is idle, otherwise 10%.

  10. Fragglesnot


    If I wanted to uninstall the crashplan that I have on my DS (installed manually via wiki instructions), and use your package instead – do you know how I would do that? I followed the steps not really knowing Linux or what I was doing, and I’m not sure how to remove it cleanly.

    Thanks if you can provide any help.

    1. patters

      Sure. Here’s the recipe for that:

      cd /tmp
      #we need to download the installer bundle to get the uninstall script
      wget "${DL_FOLDER}/${DL_FILE}"
      gunzip "${DL_FILE}"
      tar xvf CrashPlan_3.0.3_Linux.tar
      rm CrashPlan_3.0.3_Linux.tar
      cd CrashPlan-install
      #move any friends' backup data to where the package will find it
      [ -d /volume1/crashplan ] || mkdir /volume1/crashplan
      [ -d /opt/crashplan/backupArchives ] && mv /opt/crashplan/backupArchives /volume1/crashplan
      #fix up the uninstall script so it actually runs
      sed -i "s%^#!/bin/bash%#!/opt/bin/bash%"
      ./ -i /opt/crashplan
      cd /tmp
      rm -r CrashPlan-install
      [ -e /usr/syno/etc/rc.d/ ] && rm /usr/syno/etc/rc.d/
      [ -e /etc/init.d/crashplan ] && rm /etc/init.d/crashplan
      1. Fragglesnot


        That worked a treat. Thanks for walking me through that. The adoption process went seamlessly as well! All I had to do was chown my old friend archives.

        Thanks again for the hard work.

      2. Ray

        Thanks for all the hard work! When I try to run this script to start clean, I get to ./ -i /opt/crashplan and I get the following error:
        who: invalid option -- r
        BusyBox v1.16.1 (2012-03-07 15:47:21 CST) multi-call binary.

        Usage: who [-a]

        Show who is logged on

        -a show all

        ERROR: cpio not found and is required for uninstall. Exiting

        Any ideas?

      3. patters Post author

        Run this first, then try again:
        export PATH=/opt/bin:/opt/sbin:$PATH

        There are some other steps required though. Search this page from the top for the word “recipe”. I wish the author of that Synology wiki would (a) allow people to contribute, (b) demonstrate how to undo the steps, and (c) mention my package so people can make an informed choice before they set about making complex changes to their syno!
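        The “invalid option” and cpio errors come from BusyBox’s limited applets shadowing (or missing) the full versions. This is a sketch of how to verify what the shell will pick up after adjusting PATH; the /opt paths assume an Optware bootstrap:

        ```shell
        export PATH=/opt/bin:/opt/sbin:$PATH
        # show which cpio the shell now resolves; if nothing in /opt provides
        # one, the BusyBox applet (or nothing at all) is what the uninstall
        # script found
        command -v cpio || echo "cpio: not found in PATH"
        echo "PATH now starts with: ${PATH%%:*}"
        ```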

      4. Flavio

        Hi patters,
        I have uninstalled my crashplan via your recipe, it works perfectly!
        I just faced an issue on moving the friends’ backup data.
        All my computers backup to Synology, and it was like this for more than 6 months already, so you can imagine the amount of data I had on the /opt/crashplan/backupArchives.
        It was huge!
        And the mv command did not work, since it first copies all the data to the destination and only after that deletes the source. In my case you can imagine what happened… I did not have enough space for duplicating all the data.
        So I started hunting for a more “clever” way to do it, and I found out that the rsync command is much better for that.
        So I used the following to move the data, and I suggest everyone do the same, since it not only moves and deletes, but also keeps all the file permissions and properties (including dates):

        rsync -av --remove-source-files --ignore-existing --stats --progress /opt/crashplan/backupArchives/ /volume1/crashplan/backupArchives
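        One caveat: --remove-source-files deletes only the files, so rsync leaves the emptied directory tree behind. A cleanup pass like this finishes the job (a sketch on a demo path – substitute /opt/crashplan/backupArchives; note that BusyBox find may lack -empty/-delete, in which case the Optware findutils version is needed):

        ```shell
        # demo: recreate the kind of empty tree rsync leaves behind, then
        # remove it depth-first so parent dirs empty out as children go
        mkdir -p /tmp/demo-archives/old/empty1/empty2
        find /tmp/demo-archives -depth -type d -empty -delete
        [ -d /tmp/demo-archives ] && echo "tree still present" || echo "empty tree removed"
        ```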

        Flavio Endo

      5. Erik F.

        So i’ve just copied and pasted the exact commands here minus the #tags and I’m getting -ash: ./ not found

        any ideas?

      6. patters Post author

        That was written a long time ago. I don’t really know how you may have installed CrashPlan manually, or even which version. I would ask the question on whatever guide you followed.

  11. Razvi

    On my DS209 the serviceHost keeps resetting to 127.0.0.1. Do you have any idea why?
    I use CrashPlan on a x86 gentoo linux server and it works. On DS209 it seems like CrashPlan is overwriting my.service.xml when it starts.

    1. patters Post author

      Did you copy an existing config file over or anything like that? If you did you’d have to remember to reset ownership for the crashplan user:
      chown -R crashplan /volume1/@appstore/CrashPlan

      Take a look at the engine log files I mentioned in the last point of the Notes section of the post, they’re normally pretty explicit if there’s a problem.

  12. Richard

    Trying to install Crashplan on DS212. I have downloaded and installed the package and it tells me that the service is running. Also amended the ui.config file on the client to point to the IP address of the NAS.

    When I try to connect, it tells me “Unable to connect to the backup engine, retry?”

    I have stopped and restarted the service and even rebooted the NAS, but to no avail. Has anyone else managed to get it to work on this model?

    Any help or advice would be greatly appreciated.

    1. patters Post author

      Can you look at the file /volume1/@appstore/CrashPlan/conf/my.service.xml and check the value inside the serviceHost tag? It should be 0.0.0.0, but someone else on here reported that under certain circumstances it’s being reset to 127.0.0.1 (which would prevent you connecting from another computer). You’ll need to connect via SSH to do this – I recommend copying the file to your public share then viewing it on your computer if you’re unfamiliar with Linux:
      cp /volume1/@appstore/CrashPlan/conf/my.service.xml /volume1/public
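      The value can also be inspected in place over SSH without copying the file. This is a sketch against a stand-in file so it can be tried anywhere; on the NAS, point grep at the my.service.xml path above:

      ```shell
      # create a stand-in config to illustrate; the real file is
      # /volume1/@appstore/CrashPlan/conf/my.service.xml
      printf '<serviceHost>127.0.0.1</serviceHost>\n' > /tmp/my.service.xml
      grep -o '<serviceHost>[^<]*</serviceHost>' /tmp/my.service.xml
      ```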

      1. Richard

        Thank you for the reply – you were correct in that serviceHost was set to 127.0.0.1. I set it to 0.0.0.0 and copied the file back, but still no joy I’m afraid. There are no messages in the service log, apart from the one that says the service was started.

        Any more ideas?

      2. patters Post author

        Take another look at the xml – CrashPlan may have set it back to how it was. Failing that, try connecting using the SSH tunnel method.

      3. Richard

        Rechecked the xml file and it was still OK, so tried the SSH tunnel method and bingo – connected perfectly!!!

        Music and pics on their way to CrashPlan as I write this!!!

        Thank you so much for your help. It is very much appreciated!!!!

      4. Richard

        New problem – after rebooting the NAS or stopping and restarting the service, the backup does not restart. It seems to lose all of the configuration.

        When connecting the client, I am presented with the setup screen to either enter my name and email or login with my Crashplan account ID. After doing so, all the settings have been reset and I have to “adopt” my NAS in order for the backup to start.

        Is there a way around this? Sorry, but I am a complete Linux novice, so no idea where to start!

      5. Richard


        After reading the later posts, I tried enabling the User Home Service, as suggested. After that, I restarted the CrashPlan service and reconfigured the backup in the client program.

        Subsequently, restarting the service or NAS automatically starts the backup again!!

  13. Román

    Another one with problems with the connection from a remote client…

    I changed the my.service.xml manually to 0.0.0.0, and every time I start the service or restart the Synology it is changed automatically to 127.0.0.1 again. If I go to /volume1/@appstore/CrashPlan/bin and execute ./CrashPlanEngine start it works!! I think it is a problem with the package, not with the binaries. I can observe this in the engine_output.log:

    [02.04.12 15:12:53.330 INFO main com.backup42.service.CPService.main ] *************************************************************
    [02.04.12 15:12:53.330 INFO main com.backup42.service.CPService.main ] *************************************************************
    [02.04.12 15:12:53.331 INFO main com.backup42.service.CPService.main ] STARTED CrashPlanService
    [02.04.12 15:12:53.348 INFO main com.backup42.service.CPService.main ] CPVERSION = 3.0.3 - 1300223300091 (2011-03-15T21:08:20:091+0000)
    [02.04.12 15:12:53.348 INFO main com.backup42.service.CPService.main ] LOCALE = English
    [02.04.12 15:12:53.351 INFO main com.backup42.service.CPService.main ] ARGS = [ ]
    [02.04.12 15:12:53.351 INFO main com.backup42.service.CPService.main ] *************************************************************
    [02.04.12 15:12:53.638 INFO main com.backup42.service.CPService.start ] Adding shutdown hook.
    [02.04.12 15:12:53.642 INFO main om.backup42.service.CPService.copyCustom] BEGIN Copy Custom, waitForCustom=false
    [02.04.12 15:12:53.642 INFO main om.backup42.service.CPService.copyCustom] NOT waiting for custom skin to appear in custom or .Custom
    [02.04.12 15:12:53.643 INFO main om.backup42.service.CPService.copyCustom] No custom skin to copy from null
    [02.04.12 15:12:53.643 INFO main om.backup42.service.CPService.copyCustom] END Copy Custom
    [02.04.12 15:12:53.671 INFO main om.backup42.service.CPService.loadConfig] BEGIN Loading Configuration
    [02.04.12 15:12:53.838 INFO main ackup42.common.config.ServiceConfig.load] Loading from default: /volume1/@appstore/CrashPlan/conf/default.service.xml
    [02.04.12 15:12:54.384 INFO main ackup42.common.config.ServiceConfig.load] Loading from my xml file=conf/my.service.xml
    [02.04.12 15:12:54.602 INFO main ackup42.common.config.ServiceConfig.load] Loading ServiceConfig, newInstall=false, version=2, configDateMs=1328361186324, installVersion=1300223300091
    [02.04.12 15:12:54.627 INFO main ackup42.common.config.ServiceConfig.load] OS = Linux
    [02.04.12 15:12:55.007 INFO main om.backup42.service.CPService.loadConfig] AuthorityLocation@28591825[, hideAddress=false ]
    [02.04.12 15:12:55.008 INFO main om.backup42.service.CPService.loadConfig] END Loading Configuration
    DELETED file=conf/service.model
    DELETED file=conf/service.login
    FAILED to delete file=/volume1/@appstore/CrashPlan/confimport_key
    FAILED to delete file=conf/service.copier
    CACHE DELETED cacheDir=/volume1/@appstore/CrashPlan/cache
    [02.04.12 15:12:56.740 INFO main om.backup42.service.CPService.loadConfig] BEGIN Loading Configuration
    [02.04.12 15:12:56.741 INFO main ice.CpsFoldersDeprecated.moveConfigFiles] CpsFoldersMigrate is not necessary. /volume1/@appstore/CrashPlan/conf/my.service.xml file does not exists.
    [02.04.12 15:12:56.742 INFO main ackup42.common.config.ServiceConfig.load] Loading from default: /volume1/@appstore/CrashPlan/conf/default.service.xml
    [02.04.12 15:12:56.819 INFO main ackup42.common.config.ServiceConfig.load] Loading ServiceConfig, newInstall=true, version=2, configDateMs=null, installVersion=1300223300091
    [02.04.12 15:12:56.820 INFO main ackup42.common.config.ServiceConfig.load] OS = Linux
    [02.04.12 15:12:56.820 INFO main ackup42.common.config.ServiceConfig.load] Initializing backup paths last modified to now. lastModified=1
    [02.04.12 15:12:56.939 INFO main om.backup42.service.CPService.loadConfig] AuthorityLocation@19658898[, hideAddress=false ]
    [02.04.12 15:12:57.062 INFO main om.backup42.service.CPService.loadConfig] END Loading Configuration
    jtux Loaded.

    It loads the files correctly, then it checks for the config files again and does a new install!!

    Does anyone know how to fix it?

    Thank you for your work!

    1. patters Post author

      This was the same issue I was getting when I tried to get in place upgrading of the package working. Every time it would claim the my.service.xml file didn’t exist and overwrite it with a new default one.
      Can you try running these two commands and check the xml file to see if the changes happened as expected?

      sed -i "s/127\.0\.0\.1/0\.0\.0\.0/" /volume1/@appstore/CrashPlan/conf/my.service.xml
      sed -i "s%<manifestPath>.*</manifestPath>%<manifestPath>/volume1/crashplan/backupArchives/</manifestPath>%" /volume1/@appstore/CrashPlan/conf/my.service.xml

      Did the folder /volume1/crashplan/backupArchives get created ok? Maybe that’s the issue.
      Or maybe your system has a different version of sed that’s corrupting the XML somehow.
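      For anyone wanting to verify the first sed is behaving, it can be rehearsed on a throwaway file (a sketch; the serviceHost tag name is the one discussed in the comments above):

      ```shell
      # stand-in for the bind-address line in my.service.xml
      echo '<serviceHost>127.0.0.1</serviceHost>' > /tmp/svc.xml
      # same substitution as the first command above, on the stand-in file
      sed -i "s/127\.0\.0\.1/0\.0\.0\.0/" /tmp/svc.xml
      cat /tmp/svc.xml
      ```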

      1. Román

        The folder is created:

        ls -ld /volume1/crashplan/backupArchives
        drwxr-xr-x 2 root root 4096 Feb 4 12:58 /volume1/crashplan/backupArchives

        When I executed the sed commands you sent me, they corrupted my config file!! :(

        cat /volume1/@appstore/CrashPlan/conf/my.service.xml

        I’ve version 3.2 build 1955, which is the latest version available from the website. How can I correct this??

        Thank you very much for your help!

      2. Román

        Perfect!! I have the solution. I’ve been parsing the start and stop package scripts, and I have found an error. When you create the crashplan user in the /etc/passwd file, you use the csh shell and a home directory that does not exist, so when the start script runs su - crashplan -c “…” it fails and doesn’t return 0.

        This is the solution I’ve found:
        1.- Edit the /etc/passwd file and change the crash plan user line to this:
        crashplan:x:1031:100:CrashPlan daemon user:/volume1/@appstore/CrashPlan:/bin/sh

        2.- Execute
        rm /volume1/@appstore/CrashPlan/syno-marker.txt
        in order to change the my.service.xml file on the next run.

        Stop and start the package from the web interface.

        Working flawlessly! Thank you very much for this package. Great work!
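        The fields in that passwd line can be double-checked mechanically before saving (a sketch; in /etc/passwd, field 6 is the home directory and field 7 the login shell):

        ```shell
        # parse the proposed /etc/passwd entry and confirm home dir and shell
        line='crashplan:x:1031:100:CrashPlan daemon user:/volume1/@appstore/CrashPlan:/bin/sh'
        home=$(echo "$line" | cut -d: -f6)
        shell=$(echo "$line" | cut -d: -f7)
        echo "home=$home shell=$shell"
        ```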

      3. patters Post author

        The script launches su with “-s /bin/sh”, which overrides the shell setting in /etc/passwd anyway, so there’s clearly something quite different about your system (is it bootstrapped, and maybe you’ve got alternate versions of some of the core binaries on there?). Can you try running those sed commands again, but using /bin/sed? Is your su binary different, perhaps? Maybe you need to use /bin/su or whatever the default syno one is. I know you have a workaround, but I’d like to discover the cause so the next version won’t have this issue.

      4. Román

        Ok, I will give you more details about the entire process. I use a DS1010+ with Synology DSM 3.2-1955. It is not bootstrapped; only VPN Server is installed as a Synology package. I downloaded the DSM 3.2 toolchain needed for your Java package and renamed it in the public folder to the name that your script expects. Java installed correctly.
        Then I installed your CrashPlan package and it installed correctly.

        To make it work, I executed cat on the script in /usr/local/etc/rc.d/ and ran the commands inside one by one.

        sed -i "s/127\.0\.0\.1/0\.0\.0\.0/" ${SYNOPKG_PKGDEST}/conf/my.service.xml and sed -i "s%<manifestPath>.*</manifestPath>%<manifestPath>/volume1/crashplan/backupArchives/</manifestPath>%" ${SYNOPKG_PKGDEST}/conf/my.service.xml
        executed correctly and changed the file as expected, so it is not a problem with sed.

        The problem I found is that when I executed su - crashplan -s /bin/sh -c "${SYNOPKG_PKGDEST}/bin/CrashPlanEngine start", the command failed saying that the home directory of the user does not exist, and the service didn’t start.

        So I changed the /etc/passwd file crashplan user home directory to a directory that exists and after that everything is working flawlessly. I restarted the service within DSM, rebooted the NAS and I can connect it every time without problems.

        I hope this helps you!

  14. Román

    Wait! I copied the text from the file, and the second sed is modified when I submit the form. That is why it didn’t work when you sent it to me the first time! The manifestPath text doesn’t appear.

    1. patters Post author

      Ok, well spotted – so WordPress is interpreting some parts of the command as a tag when we post them on here. On to the real problem…
      Do you have the ‘User Homes’ service enabled in DSM? Not sure what it’s called in other languages, but it’s the thing that gives each user their own home directory in /volume1/homes. It is required, but I forgot to mention that in the instructions. The error you posted mentions a home directory problem – not a shell problem. So perhaps that’s the issue.

      1. Román

        I don’t think that this was the problem, because the original home directory of the user was /var/system/…

      2. patters Post author

        Could you try removing the package, enabling User Homes and re-installing, then try? I’m not at home, and have several other packages which depend on User Homes, so it’s difficult for me to test that.

  15. Marc

    I have it installed and the service is running on my DS212J. I installed the client on a computer and modified the serviceHost address to match my DS212J. However, when I run the client I get “Unable to connect to the backup engine. Retry?”

    Any feedback as to what I should do?

    Thank you again!

    1. patters Post author

      Can you remove the package, enable the User Homes service in the User Control Panel, then re-install the package – and see if that fixes it?

      1. Román

        Tested and it works for me too!! Thank you very much for your support!

        I have another question. How can we give support to non US ASCII characters??

      2. patters Post author

        Great! Thanks, I’ve updated the installation instructions. Do non-US folders not get seen properly? They should do. My Java package installs the Linux locale support that Java requires (which is why you need to supply the toolchain). What does your syno display if you type locale?

      3. Marc

        If I go to Control Panel> User > I see a button that says “User Home”. If I click it I see a window that says Enable User Home Service (not Homes). Is this what you want me to check? If so, it was already checked.

        Not sure if this is helpful but I do see a folder named homes and in it is a folder named crashplan.

      4. patters Post author

        Yes that was what I meant. Hmm. So can you connect the CrashPlan client ok if you use the SSH method? What does the CrashPlan log tab show in Package Center – can you see that the engine started?

      5. Marc

        I tried the SSH method and when I attempt to telnet localhost 4200 I get “telnet: can’t connect to remote host (127.0.0.1): Connection refused.”

        Am I wrong to guess that this is a firewall issue, or that something is wrong with my file?

  16. Román

    The locale command does not exist after the installation. I’ve followed the instructions on the Synology wiki:

    cd x86_64-linux-gnu/x86_64-linux-gnu
    cp sys-root/usr/bin/locale /volume1/@appstore/java6/jre/bin
    cp sys-root/usr/bin/localedef /volume1/@appstore/java6/jre/bin
    cp -r sys-root/usr/share/i18n /usr/share
    mkdir /usr/lib/locale
    localedef -f UTF-8 -i en_US en_US.UTF-8
    echo "LANG=en_US.UTF-8" >> /etc/profile
    echo "LC_ALL=en_US.UTF-8" >> /etc/profile
    echo "export LANG LC_ALL" >> /etc/profile

    Instead of the /opt/bin directory, I copied the locale binaries to the Java binaries directory, because I don’t have my syno bootstrapped. Maybe this is the problem? Must /opt exist after executing your package?

    I installed the Java 6 package, because my Synology is x86_64 and you said version 7 does not work with Intel-based NAS – is this correct?

    1. patters Post author

      I think the problem is that you used the DSM 3.2 toolchain and renamed it, not the 3.1 one as requested. If you use the 3.1 toolchain, everything is done for you by the Java package. I would suggest you back out the manual changes you made as per the wiki before you do it though.

      1. patters Post author

        Yes. It’s only extracting the two required locale binaries and the various locale definitions. Notice that the version numbers of glibc and gcc are the same for the 3.1 and 3.2 toolchains in any case. However, as you have demonstrated, the locations of those files inside the archives must have changed. I have updated the Java installation notes to make that clearer.

      2. Román

        Ok! Done it! Working well!

        Maybe the problem is that with the 3.1 toolchain, 32-bit and 64-bit are included in the same tarball, whereas with the 3.2 toolchain they are separate and I downloaded the 64-bit one. Do you always use the 32-bit one for the installation?

        Another question: when Synology releases a new firmware and I install it, will it be necessary to install everything again? Do the locale settings remain configured?

        Thank you again for your great help!

  17. Fragglesnot


    I had an issue where the serviceHost in the my.service.xml file got reset from 0.0.0.0 to 127.0.0.1 on its own – like previous users have mentioned. I had the User Homes service enabled all the while. Simply changing it back to 0.0.0.0 and restarting CrashPlan allowed remote management again – but do you have any idea why this is getting changed? I’ve read through this thread, and I know it has come up – but I’m not clear on what causes it or how to prevent it. (Not positive, but it seems like a DS reboot is what causes it?) I could do more testing if you need more data.


  18. Chuck

    After nearly ordering a Synology, then changing my mind… this blog changed it back and I ordered the DS411 (Marvell CPU, 528MB RAM). I put two 1TB drives from my hackintosh in it and another two 2TB drives I bought the day the NAS arrived. Since I bought this new based on the blog, it is a fresh system. I updated to the -1955 version of the system before starting anything. Here are a few things I noticed along the way:
    Make sure on an ARM model to select the correct Java package (I grabbed the wrong one and the script scolded me):

    I installed the Java package per patters’ blog, then installed CrashPlan. I started it using the checkbox, then stopped and restarted it. Before I had a chance to see it not work, I saw the update posted about turning on User Homes, so I removed CrashPlan, turned on User Homes and reinstalled CrashPlan, but this time turned it on after installation… stopped the package… then restarted the package.

    I then used a MacBook to test CrashPlan using a free 30-day trial account and 600MB of test files. For those who don’t know, on a Mac the file is in the conf folder when you right-click on the CrashPlan application and select “Show Package Contents”. The conf folder is within the subfolders under java, and you have to select “unlock” when you start to change the line as outlined by patters.

    Everything worked fine so I decided to use my real CrashPlan account. This actually turned out to be a royal pain. Every time I tried to login to my paid account with the DS411, the system would ignore that I had completely deleted CrashPlan and its application support from the computer, and default to my free trial account. It turns out that I needed to go into the CrashPlan desktop software, select Destinations -> Computers, then click on the DiskStation and tell it to remove the DiskStation from that account. Once I did that, I could get it to remember the DiskStation. It turns out that CrashPlan (online) remembers the UUID of each computer and will automatically see the DiskStation. I imagine that this may also be the case for people who are trying to use another computer in certain circumstances, though the adoption process should take care of this. In any case, it is something to consider when troubleshooting… just be careful, though, since doing this can delete your entire archive in the cloud if you delete the wrong thing!

    I am now watching it analyze the disk station and hoping it remaps the files I have transferred to it to the files in the cloud.

    I can also confirm that I had restarted the DS411 and it maintained the setup.

    Some of this may be obvious to people doing this, but if it helps someone else, then it is worth the write up!

    At this point, I think that Synology should send patters a thank you as well! This tipped me back to buying the Synology! Thanks for this site!

    I do have one small question… I am now trying to decide what to do with backing up computers. I thought about Time Machine, but that seems somewhat challenged when it comes to crashplan since it results in far too much uploading unnecessarily. Has anyone thought of alternatives? I also use carbon copy cloner so that would seem like a viable alternative to Time Machine.

    1. Chuck

      PS: as patters noted… when downloading the files use Firefox on a Mac and avoid Safari. Safari auto expanded the files and I had to start over…

  19. Chuck

    Okay, so it seems to be working I think. At first I thought it analyzed my 1tb of information and then decided to upload everything again, but after looking at the screendump above, I believe it is working. The reason I say this is that it would appear to be uploading files, but when I look at the speed it is 54 Mbps. I only have a 5Mbps connection and remember it took several days to upload what I had before… this says it will be complete in 1.3 days. Does this sound about right to everyone?
    If this is the case, should I just leave the old things selected once it is done or will everything be remapped and the “missings” will just go away? It would be nice if there was a way to clean it up once done since I can imagine I will wonder what the missings are a few years from now!

    1. patters Post author

      I don’t know about this scenario in particular, but I’m guessing the CrashPlan engine will need to checksum all of your 1TB worth of local files, which I would assume will take a fair amount of time.

  20. DS411j


    I am trying to install this on my Synology DS411j. I have successfully installed Java 7, but when I try to install CrashPlan I get the following error: “Java is not installed or not properly configured. The Java binary could not be located”. The Java package shows up in the “installed” packages (although its status is “stopped”, I understand that is normal).

    Any idea ?

      1. DS411j

        Thanks a lot ! Seems to work now (although I had also downloaded Java with Firefox on the first time)

        Adopting my previous profile now….

  21. schmeel

    I had a problem similar to others discussed here – every time I restarted CrashPlan on my Synology DS411, it came up as though it was a new computer. I ended up with 4 different computers on my CrashPlan Family account all with the same name. Only one (the original) had any data backed up.

    This was because User Homes was not enabled – CrashPlan could not save my GUID (computer identity) to /var/services/homes/crashplan/.crashplan/.identity. Each time it started, it thought it had to create a new GUID.

    Note – if you started off with User Homes disabled, you don’t need to reinstall CrashPlan; I was able to just do this:

    Enable User Homes (DSM Control Panel – User – “User Homes” button)
    Restart CrashPlan (restart the NAS, or use the CrashPlan command-line interface)
    Start the desktop client
    Log in and adopt your previous profile

    MANY THANKS for the excellent package, and I commend you for following the comments here and providing support and updates. It’s more than many commercial software vendors can offer.

    1. patters Post author

      Enabling user homes is in the instructions now so hopefully that’ll reduce the problem. Good spot on noticing that .identity file. I guess I will be able to make the package upgradable after all (for when CrashPlan release a new version).

  22. Ed

    Hi (again!),

    Another fantastic piece of work patters – installed no problem and currently backing up 120GB+ to crashplan.

    Considering I’m running this on a DS211j it is really managing (the limited) resources very well, much better than I thought it would!

    p.s. see my message on your java package post.



    1. patters Post author

      Thanks for the feedback. Donate button is at the bottom of the right-hand column.
      Since I enabled logging on the package repo at the end of Jan I’ve seen synos at just under 4,000 unique IP addresses connecting from all over the world!

      1. Ed

        You’re welcome. Thats an amazing number of downloads in such a short time, I can certainly say you have saved many people many hours!

        I hope you keep the passion to share your creations. Enjoy the drink! :)

        p.s. My poor 211j is not truly man enough for the job; I think I’ll have to get a beefier syno sometime soon! (13 days for 55GB!) :)

      2. patters Post author

        Thanks frillen! I didn’t want to make the button too in-your-face, and it looks a bit strange at the top of the page. I’ll experiment…

      3. Ed

        Apologies for spamming your blog!

        I just realised the incredibly slow upload speeds were due to the CrashPlan client throttling upload traffic not the DS211j.

        I can’t believe I missed it, alas if anyone has the same problem, in the CrashPlan client, go to Settings -> Network and set your WAN limits to what you deem acceptable.

        I removed the throttle altogether and now have 4.3Mbps as opposed to 300Kbps upload.

  23. Jason

    I can’t thank you enough for this. It was so easy for a linux novice like me. Installed and working perfectly on my new ds212j.

    1. Jason

      I spoke a little too soon. I had to restart the NAS and the crashplan service didn’t restart with it. I had user home enabled and the identity was stored within it. I also used the 3.1 toolchain, not 3.2. After the first service restart, I could connect and manage the service with no issues. However, after the second restart, I could no longer connect.

      I found that, like others, the serviceHost in my.service.xml was 127.0.0.1 instead of 0.0.0.0. I corrected that and now it works and restarts with no problems. I followed the instructions to the letter, although I found that the crashplan/backupArchives folder didn’t have the permissions set correctly and the engine could not write to it. I am guessing there is still a thing or two missing from the scripts.

      Overall, however, I am very grateful to you for putting this together. I wouldn’t have been able to get this going otherwise. All seems to be working perfectly now after those two tweaks.

  24. DS411j


    I have restarted the NAS and, like others, I seem unable to reconnect from my computer (the service is running on the NAS)… I am not sure how to use SSH to modify my.service.xml.

    Can somebody explain ? THanks !

    1. DS411j

      Ok I managed to copy, modify, and copy the file back. But Crashplan is still unable to connect (it was connecting fine before the reboot). Any idea or suggestion ?

      1. Jason

        Once you copy the file back, you need to restart the crashplan package for the change to take effect.

      2. DS411j

        Even after stopping and restarting crashplan (or even rebooting the NAS), it still doesn’t work…

      3. DS411j

        Without changing anything, just trying several more times it finally connected… So all working now. Thanks

  25. Rod

    Thanks. Worked on my DS712+. Adopted from my old computer successfully.
    I have the same issue as others. When I reboot the DS712+ I can’t connect to CrashPlan. I have to change serviceHost to 0.0.0.0, stop/restart CrashPlan and then I can connect. Not a big deal because I don’t connect to CrashPlan often, so I’m not going to try the ssh method.

  26. David Mc Nally

    I’m getting the following error in engine_error.log:

    Exception in thread “W5092156_ScanWrkr” java.lang.NoClassDefFoundError: Could not initialize class com.code42.jna.inotify.InotifyManager

    Backups don’t seem to run; however, CrashPlan shows as connected to the online service according to my account on the CrashPlan website. Any suggestions as to what I could do?

      1. David Mc Nally

        Thanks for your quick reply. I’m running DSM 3.2-1958 on a DS212+/2.0 GHz ARM-processor.

  27. Nils

    Thank you very much for putting your energy in this! I have one user-invoked problem:

    I accidentally removed my NAS from the list of computers that showed up in the GUI (yes, I know they ask you if you are sure ;-) and no, there were no backup files on my NAS… yet). Somehow I am now unable to reconnect to the headless client on my NAS. I have no idea what I should do or change to make it work again. I tried removing and re-installing the package on my Synology, but the GUI keeps telling me that it is unable to connect. Before my mistake, everything appeared to work as intended…

    1. Nils

      Found it already… I needed to uninstall the GUI, then re-install and point the GUI at the NAS, after which the NAS is re-registered as a CrashPlan device…

      Now the only thing left is the ‘The Backup location is not accessible’ message. How to get rid of that?

      1. Nils

        Also fixed this one. Somehow the actual backup folder (51234… etc etc) that will hold the backup for a specific client, could not be created by the Crashplan client. I created it myself and now it works. Is this a known bug and (if so) is there a fix? (The root backup folder is /volume1/backup/crashplan. There all users have every permission possible … just to make sure)

      2. patters Post author

        That path should be /volume1/crashplan/backup. When you install, it’s created if it doesn’t already exist, and the owner is always set to be the crashplan daemon user.
        However, I haven’t tested that at all since I use the paid-for hosted option. Perhaps someone else can report their findings.

      3. Nils

        I tried the default (/volume1/crashplan/backup) first but that did not work. I therefore switched to a folder which I know everyone has full access to … and it did not work there either …

      4. Jason

        This may be related to the issue I found where the folder gets created but permissions to the folder are not set so that the engine can write to it. And just to be clear, the folder created on my system was /volume1/crashplan/backupArchives.

      5. patters Post author

        Well spotted Jason and Nils. Looks like I need to release an updated version of the package. I could have sworn that was in the postinst script, but I just checked and it’s not. Busy tonight, so it may take me a few days. I think the trouble is I get so many people commenting who have problems simply downloading the JRE file that I’m not noticing proper bug reports :)

        I’ll probably change it so it resets the bound interface from 127.0.0.1 to 0.0.0.0 on each start too (rather than just once).

      6. Jason

        I’m sure you saw my comments above regarding my original install Friday night, but in case you didn’t, and since you will be releasing a new package: there still seems to be an issue with the serviceHost configuration in my.service.xml. I was able to connect to the engine following the first restart as indicated in the instructions, but was not able to following the second reboot. It appears something is causing the serviceHost to change back to 127.0.0.1 at that point. I manually changed it to 0.0.0.0 and it now survives reboot.

      7. patters Post author

        I’ll just have it force that setting every time. I’ve never hit that issue though, and my syno has restarted at least three times since I installed CrashPlan.

  28. Niek

    I would like to stop and start the CrashPlan service with crontab. Stopping and starting itself doesn’t seem to be the problem, but with my commands the my.service.xml settings seem to be lost.

    /volume1/@appstore/CrashPlan/bin/CrashPlanEngine stop
    /volume1/@appstore/CrashPlan/bin/CrashPlanEngine start

    are the commands I am using in crontab. The start command seems to overwrite my existing config.

    Could you help me?

    1. patters Post author

      There is user-specific data, and I guess you’re running the cron job as root. Try using su (like my start-stop-status script does):
      su - crashplan -s /bin/sh -c "/volume1/@appstore/CrashPlan/bin/CrashPlanEngine start"

      1. Niek

        Thanks, that did the trick.
        Because CrashPlan prevents my disks from hibernating (even when I set a backup time span, I see a process scanning every 5 minutes), I use crontab to control the CrashPlan service.

        If people want to know what I did:
        ssh to your Synology (ssh root@ip/hostname)
        vi /etc/crontab
        press i to activate insert mode

        Add the following lines:

        0       19      *       *       *       root    su - crashplan -s /bin/sh -c "/volume1/@appstore/CrashPlan/bin/CrashPlanEngine start"
        0       23      *       *       *       root    /volume1/@appstore/CrashPlan/bin/CrashPlanEngine stop

        Modify times, days if you want. Search Google for crontab if you want to know all options.

        press ESC when done editing.
        type :w to save the file. (:quit! if you want to cancel and quit)
        type :q to quit vi.

        Do the following command in your ssh session:
        /usr/syno/etc/rc.d/ stop
        /usr/syno/etc/rc.d/ start

        Crontab is restarted now and loaded your new config.

        Disks should sleep now (except during the period CrashPlan is running).

        Make sure the “Verify backup selection” time falls within the window CrashPlan is running; otherwise the selection never gets verified.

        @Patters: The only (small) problem I see right now is that the CrashPlan status always shows as Stopped in Package Center when it is started by crontab.
        When I run ps I see: /volume1/@appstore/java7/jre/bin/java -Dfile.encoding=UTF-8 -Dapp=CrashPlanService -DappBaseName=CrashPlan -Xms20m -Xmx80M -Dnetworkaddress.cache.ttl=3

        Service is running fine, just the status is wrong.

      2. patters Post author

        Good tip, thanks for sharing. As you no doubt discovered, the su remains necessary in /etc/crontab even though you might expect to be able to use the ‘crashplan’ user account directly, because new users created by script have a null shell by default. su lets us override that without needing to edit /etc/passwd.

        @Patters: The only (small) problem I see right now is Crashplan status is always stopped in Package Center when started by crontab.

        When a package is started in Package Center I think it records the state somewhere (I forget where) so that when you restart your syno the packages retain their states. When they’re started manually this doesn’t happen. I guess it ought to be possible to figure out what changes and add that to the cron job too.

      3. Niek

        Thanks for explaining that. It’s just cosmetic.
        I thought the status code grepped some info from ps but it’s not that simple :)

        Thanks again!

      4. patters Post author

        Well logically it should work how you were thinking (especially since I define the running state in the script start-stop-status) but it seems it doesn’t quite work like that in practice.

      5. patters Post author

        Got it! Package Center checks two separate things before a package’s status is reported as ‘Running’ in the UI. The status is first checked via the script start-stop-status, which would be ok for your cron job. But you also need to create a file in /var/packages/Crashplan called enabled. Delete this when your cron job stops the package.
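        To tie this together, here is a sketch of a wrapper that the cron jobs could call instead of invoking the engine directly; it combines the su trick from earlier in this thread with the enabled file mentioned above. The script structure and the /var/packages/CrashPlan path are assumptions, so verify the actual package directory name on your own NAS:

```shell
#!/bin/sh
# Sketch: start/stop CrashPlan from cron while keeping Package
# Center's status display in sync. Paths are assumptions taken
# from this comment thread; verify them on your own NAS.
ENGINE=/volume1/@appstore/CrashPlan/bin/CrashPlanEngine
ENABLED=/var/packages/CrashPlan/enabled

cp_start() {
    # run as the crashplan user; 'su -s /bin/sh' works around the
    # null login shell that script-created users have by default
    su - crashplan -s /bin/sh -c "$ENGINE start" || return 1
    touch "$ENABLED"    # Package Center now reports 'Running'
}

cp_stop() {
    "$ENGINE" stop
    rm -f "$ENABLED"    # Package Center now reports 'Stopped'
}
```

        The two /etc/crontab entries would then call cp_start and cp_stop at the chosen times instead of running the engine binary directly.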

  29. Richard

    Very happy with the easy install…
    I have one problem…

    The realtime scanning does not work.
    It only sees added or changed files when the “verify selection” scan runs on the disks.
    On my Mac this does not happen; it backs up every time a file is added or changed.

    I tried the command backup.scan 42 -> this forces the check, then it finds changed files one time only.
    I tried rebooting the NAS -> does not solve the problem
    I tried reinstalling the package -> does not solve the problem
    I tried reinstalling the NAS and all packages -> does not solve the problem

    Anybody have an idea?


    1. Richard

      Diskstation 1812+ i386
      When checking the service.log I see the following error:

      [03.05.12 16:59:03.221 WARNING W5369678_Authorizer  .BackupSetsManager.initFileWatcherDriver] BSM:: Exception initializing WatcherDriver - Platform is not supported, glibc: 2.3.6, kernel: [run of NUL bytes]; BackupSetsManager[ scannerRunning = false, scanInProgress = false, fileQueueRunning = false, fileCheckInProgress = false, errorRescan=false ]
      com.code42.exception.DebugException: BSM:: Exception initializing WatcherDriver - Platform is not supported, glibc: 2.3.6, kernel: [run of NUL bytes]; BackupSetsManager[ scannerRunning = false, scanInProgress = false, fileQueueRunning = false, fileCheckInProgress = false, errorRescan=false ]
              at com.code42.backup.path.BackupSetsManager.initFileWatcherDriver(
              at com.code42.backup.path.BackupSetsManager.setUp(
              at com.code42.backup.BackupManager.setUp(
              at com.backup42.service.backup.BackupController.setUp(
              at com.backup42.service.CPService.changeLicense(
              at com.backup42.service.CPService.authorize(
              at com.backup42.service.peer.Authorizer.doWork(
      Caused by: Platform is not supported, glibc: 2.3.6, kernel: [run of NUL bytes]
              at com.code42.jna.inotify.JNAInotifyFileWatcherDriver.(
              at com.code42.backup.path.BackupSetsManager.initFileWatcherDriver(
              ... 8 more
  30. Chuck

    I seemed to have no problem with the setup after restarting, until today. I had a drive fail a week ago, added a new one and let it rebuild. I have also shut down the station since then to move it around my desk. When I went to add another folder using the client on my MacBook, it could no longer connect to the engine. I noted, like many others, that the serviceHost setting had reverted, so I followed the directions above to SSH in and copy that file to the public folder so I could grab it onto my computer and fix it. I could not figure out how to put the file back, though. I tried just reversing the cp command but that didn’t work; I could not get the file back from public to the crashplan folder. I eventually uninstalled and reinstalled the package. Now the client sees it as a new computer, so I guess I will have to adopt again (unless you have another idea). Does the install package fix this issue now so the XML does not revert (or get reset when restarting)? I just recall the original adoption took days, since I have over a terabyte on CrashPlan now.

    Thanks for the package and help!


  31. rgliese

    If you have problems connecting the client to your Syno, you may need to open port 4243 in your Syno’s firewall.

  32. Lars

    Hi patters,

    will the script work with the new DSM 4.0, or should we stick with version 3? Also, are you planning on converting the script to the new App feature in 4.0?

    Great work, already donated!

    Thanks, Lars

    1. patters Post author

      Hi Lars – thanks very much for the donation. These packages all work on DSM 4.0. According to commenters on here there were Package Center issues with the very first DSM 4.0 build, but build 2198 is fine. I would recommend that you remove and reinstall Java after the upgrade, otherwise you’ll lose locale support and therefore unicode support in Java.

  33. Ben

    This post has basically convinced me a Synology would work for my needs, since I can back up the NAS to CrashPlan (my current system is an old PC with FreeNAS).

    A couple performance questions:
    1) If the syno is set to deep hibernate during certain times, does the engine restart on wake and does the engine affect any scheduled hibernation? I don’t have a syno yet so I don’t exactly know how it deals with these things.
    2) If using WoL, should the engine be re-started manually?
    3) While the syno is uploading to Crashplan, are there any issues serving file shares (e.g., reading a mkv from a networked client)?


    1. patters Post author

      Hi, glad this is convincing people to buy Synology :)

      1) The Crashplan engine doesn’t seem to play nicely with the hibernation feature of the Synology – it just stays on all the time. Commenter Niek outlined a solution above to set a cron job to only run the engine in certain time windows to mitigate this.

      2) On bootup the Synology keeps packages in the same running states which were set in the Package Center UI.

      3) I haven’t tested this thoroughly, but Crashplan is invoked using nice so it should run with a very low priority. So in your scenario, the MKV should take precedence.
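      For anyone unfamiliar with nice: it sets a process’s CPU scheduling priority, from -20 (highest) to 19 (lowest). A quick generic illustration (not the exact invocation the package uses, which I haven’t reproduced here):

```shell
# A niceness of 19 means the command only gets CPU time that
# nothing else wants, so tasks like file serving take precedence.
nice -n 19 sh -c 'echo "running at low priority"'
```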

  34. DS411j

    All right… Everything was working fine until I updated to DSM 4.0. Now CrashPlan is running, I can connect from my desktop to my NAS, and I can select the files I want to back up… but CrashPlan is stuck saying:

    waiting for backup
    To do : 0 files
    Completed: 0 files
    Last backup : initial backup not complete.

    Any idea ?

    1. DS411j

      I tried to reinstall Java in case it was the cause, and the Java installation fails. Am I the only one having this issue with DSM 4.0 final?

      1. Richard

        Same problem here!! Just updated to DSM 4.0. CrashPlan shows as started but nothing is being backed up. I decided to re-install the package, but after un-installing, the CrashPlan package is not shown in the list of available packages, and the link to the source comes up as invalid – HELP!!

    2. Marc

      I had the same problem. It appears that DSM 4.0 had a bug; it also stopped my email notifications from working. They issued an update, 4.0-2198, today that immediately fixed the problem. I could not update directly from DSM; instead I downloaded it from the Synology website and the update worked.

  35. nullreturned

    Looks like the package repository is down, just as I was about to install some of your packages! Hope everything is okay and the repository can get back online.

      1. nullreturned

        This is a new NAS setup, so I was already updated to the newest software. I was connected to the repository during the day, but then at night when I was going to install the software, it couldn’t connect. I think there was actually an issue with the hosting, but it was fixed by morning.

    1. patters Post author

      Hi – the repo is hosted on a free hosting service which does seem to have the occasional outage. Whenever I’ve noticed, it’s only been the hosting control panel that has been affected. I’ll keep an eye out in case it’s frequent.

  36. nullreturned

    Great package! I set up everything as directed, and it went smoothly on a 712+. I will admit I was a bit nervous when Java reported failure and wouldn’t start as a service, but I pressed on. Once everything was said and done, I tried to connect via your IP method, to no avail. I then followed an online guide to setting up tunnelling via PuTTY, and it worked instantly. My systems are now backing up via CrashPlan, and I’m really happy.

    It might be good to have the four SSH tunnelling steps outlined on the site for people who can’t connect reliably through the IP method. And with PuTTY, once you set up the connection, it only takes two seconds to get going!

    Thanks again!

  37. DS411j

    I finally managed to reinstall CrashPlan on 4.0-2197 (no idea why it failed the first 3 times and worked the 4th…), and made it work after adopting my previous computer.

    BUT I am also using my NAS for the inbound backup of a friend, and it has disappeared! For some reason it looks like CrashPlan is NOT backing up my friend’s computer to the default location (/volume1/crashplan/backupArchives) but to /volume1/@appstore/crashplan/backupArchives.

    As a result I suspect it has deleted my friend’s backup when I reinstalled Crashplan. I therefore have 2 questions:

    1) Is it normal that we have to reinstall CrashPlan after a firmware update?
    2) Why is CrashPlan not backing up to the default folder, and (now that my friend has started to back up again) how can I move the backup to the default location?

    1. patters Post author

      You guys that are getting the changes in conf/my.service.xml reset with each reboot (the listening IP changed back to 127.0.0.1, and the backupArchives location) – have you definitely installed the package after enabling the User Homes Service as per the instructions on this page?

      I have never had this happen, and I have installed/uninstalled/upgraded/rebooted the package around 50 times during testing, not to mention actually running it for its intended purpose.

  38. patters Post author

    Apologies for the delay responding but I was on holiday and offline for the whole of last week. I have just noticed that on ARM systems, the symlink for libffi is removed once you upgrade DSM to a new version. To reinstate it run:
    ln -s /volume1/@appstore/CrashPlan/lib/ /lib/

    I also spotted in service.log.0 a permission denied error on a config file I didn’t know about. Try also running:
    chown crashplan /var/lib/crashplan/.identity

    I’m not sure whether it’s important since its contents appear to be identical to /var/services/homes/crashplan/.crashplan/.identity which seems to be the actual one that gets loaded.

  39. Tony

    I’m having the same problem on my DS1511. I tried everything mentioned in the above comments but the file keeps resetting whenever the CrashPlan module starts. I even tried write-protecting the my.service.xml file, changing its ownership to root, etc., but obviously the process that’s overwriting it also has elevated privileges, because I see the file disappear and reappear with the setting reverted.

    Using an SSH tunnel is inconvenient, so I hope a real fix for this is coming.

