CrashPlan packages for Synology NAS

UPDATE – The instructions and notes on this page apply to all three versions of the package hosted on my repo: CrashPlan, CrashPlan PRO, and CrashPlan PROe.

CrashPlan is a popular online backup solution which supports continuous syncing. With it, your NAS becomes even more resilient – it could be stolen or destroyed and you would still have your data. Whilst you can pay a small monthly charge for a storage allocation in the Cloud, one neat feature CrashPlan offers is for individuals to collaboratively back up their important data to each other – for free! You could install CrashPlan on your laptop and have it continuously backing up your documents to your NAS, even whilst away from home.


CrashPlan is a Java application, and difficult to install on a NAS. Way back in January 2012 I decided to simplify it into a Synology package, since I had already created a few others. As I started I used information from Kenneth Larsen’s blog post, the Vincesoft blog article for installing on ARM processor Iomega NAS units, and this handy PDF document which is a digest of all of them. I used the PowerPC binaries Christophe had compiled on his blog, so thanks go to him. I wanted to make sure the package didn’t require the NAS to be bootstrapped, so I bundled any dependent binaries. Back in 2012 I didn’t know how to cross compile properly so I had to use versions I had found, but over the years I have had to compile my own versions of many of these binaries, especially as I added support for Synology’s huge proliferation of different CPU architectures.

UPDATE – For version 3.2 I also had to identify and then figure out how to compile Tim Macinta’s fast MD5 library, to fix the supplied library on ARM systems (CrashPlan only distributes libraries for x86). I’m documenting that process here in case more libs are required in future versions. I identified it from the error message in log/engine_error.log, and by running objdump -x I could see that the same Java_com_twmacinta_util_MD5_Transform_1native function mentioned in the error was present in the x86 lib but not in the library I had compiled from W3C Libwww. I took the headers from an install of OpenJDK on a regular Ubuntu desktop. I then used the Linux x86 source from the download bundle on Tim’s website – the closest match – and compiled it directly on the syno using the command line from a comment in another version of that source:
gcc -O3 -shared -I/tmp/jdk_headers/include /tmp/fast-md5/src/lib/arch/linux_x86/MD5.c -o
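For future reference, a quick way to check whether a compiled library exports the JNI function named in the error is to wrap the objdump check in a small helper. This is a sketch; the library filename in the usage comment is illustrative:

```shell
# has_jni_symbol: succeed if the given shared library exports the JNI
# function that log/engine_error.log complained about
has_jni_symbol() {
  objdump -x "$1" | grep -q Java_com_twmacinta_util_MD5_Transform_1native
}
# usage (filename illustrative): has_jni_symbol libmd5.so && echo "symbol present"
```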

Licence compliance is another challenge – Code 42 Software’s EULA prohibits redistribution of their work. I had to make the syno package download CrashPlan for Linux (after the end user agrees their EULA), then I had to write my own script to extract this archive and mimic their installer, since their installer is interactive.


Synology Package Installation

  • In Synology DSM’s Package Center, click Settings and add my package repository:
    Add Package Repository
  • The repository will push its certificate automatically to the NAS, which is used to validate package integrity. Set the Trust Level to Synology Inc. and trusted publishers:
    Trust Level
  • Now browse the Community section in Package Center to install CrashPlan:
    The repository only displays packages which are compatible with your specific model of NAS. If you don’t see CrashPlan in the list, then either your NAS model or your DSM version is not supported at this time. DSM 5.0 is the minimum supported version for this package.
  • Since CrashPlan is a Java application, it needs a Java Runtime Environment (JRE) to function. It is recommended that you choose the option to have the package install a dedicated Java 7 runtime. For licensing reasons I cannot include Java with this package, so you will need to agree to the licence terms and download it yourself from Oracle’s website. The package expects to find this .tar.gz file in a shared folder called ‘public’. If you go ahead and try to install the package without it, the error message will indicate precisely which Java file you need for your system type, and it will provide a TinyURL link to the appropriate Oracle download page.
  • If you have a multi-bay NAS, use the Shared Folder control panel to create the shared folder called public (it must be all lower case). On single bay models this is created by default. Assign it with Read/Write privileges for everyone.
  • If you have trouble getting the Java archive recognised, try downloading it with a different web browser. Some browsers try to help by uncompressing the file, or renaming it without warning. I have tried to code around most of these behaviours. Use Firefox if all else fails – it seems to be the only browser that doesn’t interfere with the file. I also suggest that you leave the Java file and the public folder present once you have installed the package, so that you won’t need to fetch this again to install future updates to the CrashPlan package.
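If you want to check what a browser actually saved before installing, a quick test from any shell distinguishes a true .tar.gz from one that was silently decompressed. This is a sketch; the filename in the usage comment is just an example:

```shell
# classify_archive: report whether a file is still gzip-compressed,
# was silently decompressed to a plain tar, or is something else entirely
classify_archive() {
  if gzip -t "$1" 2>/dev/null; then
    echo "gzip OK"
  elif tar tf "$1" >/dev/null 2>&1; then
    echo "plain tar (browser decompressed it)"
  else
    echo "unrecognised"
  fi
}
# usage (example filename): classify_archive jre-7u79-linux-arm-sfp.tar.gz
```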
  • CrashPlan is installed in headless mode – backup engine only. This is configured by a desktop client, but operates independently of it.
  • The first time you start the CrashPlan package you will need to stop it and restart it before you can connect the client. This is because a config file that is only created on first run needs to be edited by one of my scripts. The engine is then configured to listen on all interfaces on the default port 4243.
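Once restarted, you can confirm from an SSH session that the engine is bound on port 4243. A sketch; the helper just filters netstat-style output for that port:

```shell
# check_listening: read `netstat -tln`-style output on stdin and
# succeed if something is bound to port 4243
check_listening() {
  grep -q ':4243 .*LISTEN'
}
# on the NAS: netstat -tln | check_listening && echo "engine is listening"
```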
  • If you previously installed CrashPlan manually using the Synology Wiki, you can find uninstall instructions here.

CrashPlan Client Installation

  • Once the CrashPlan engine is running on the NAS, you can manage it by installing CrashPlan on another computer, and configuring it to connect to the NAS instance of the CrashPlan Engine.
  • Make sure that you install the version of the CrashPlan client that matches the version running on the NAS. If the NAS version gets upgraded later, you will need to update your client computer too.
  • By default the client is configured to connect to the CrashPlan engine running on the local computer. You will need to edit the file conf/ in the CrashPlan folder on that computer so that this line:
    is uncommented (by removing the hash symbol) and set to the IP address of your NAS, e.g.:
    Mac OS X users can edit this file from the Terminal using:
    sudo nano /Applications/
    (use Ctrl-X to save changes and exit)
  • Starting with CrashPlan version 4.3.0 you will also need to run this command on your NAS from an SSH session:
    echo `cat /var/lib/crashplan/.ui_info`
    Note those are backticks not quotes. This will give you a port number (4243), followed by an authentication token, followed by the IP binding ( means the server is listening for connections on all interfaces) e.g.:

    Copy this token value and use it to replace the token in the equivalent config file on the computer that you would like to run the CrashPlan client on – located here:
    C:\ProgramData\CrashPlan\.ui_info (Windows)
    “/Library/Application Support/CrashPlan/.ui_info” (Mac OS X installed for all users)
    “~/Library/Application Support/CrashPlan/.ui_info” (Mac OS X installed for single user)
    /var/lib/crashplan/.ui_info (Linux)
    You will not be able to connect the client unless the client token matches the NAS token. On the client you also need to amend the IP address value after the token to match the Synology NAS IP address (this was a new requirement with CrashPlan version 4.4.1).
    So, using the example above, your computer’s CrashPlan client config file would be edited to:
    assuming that the Synology NAS has the IP
    If it still won’t connect, check that the ServicePort value is set to 4243 in the following file:
    C:\ProgramData\CrashPlan\conf\ui_(username).properties (Windows)
    “/Library/Application Support/CrashPlan/” (Mac OS X installed for all users)
    “~/Library/Application Support/CrashPlan/” (Mac OS X installed for single user)
    /usr/local/crashplan/conf (Linux)
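Putting the token steps together: .ui_info holds a single comma-separated line of port, authentication token, and IP binding. A sketch of rebuilding the client’s copy (the token and addresses here are made-up examples):

```shell
# .ui_info format: port,authToken,bindAddress
NAS_UI_INFO="4243,ab12cd34-ef56-ab78-cd90-ef12ab34cd56,0.0.0.0"  # as read on the NAS
NAS_IP="192.168.1.50"                                            # example NAS address
PORT=$(echo "$NAS_UI_INFO" | cut -f1 -d',')
TOKEN=$(echo "$NAS_UI_INFO" | cut -f2 -d',')
# write the client-side copy, swapping the binding for the NAS address:
echo "${PORT},${TOKEN},${NAS_IP}" > .ui_info.example
```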
  • As a result of the nightmarish complexity of recent product changes, Code42 has now published a support article with more detail on running headless systems, including config file locations on all supported operating systems, and for ‘all users’ versus single user installs etc.
  • You should disable the CrashPlan service on your computer if you intend only to use the client. In Windows, open the Services section in Computer Management and stop the CrashPlan Backup Service. In the service Properties set the Startup Type to Manual. You can also disable the CrashPlan System Tray notification application by removing it from Task Manager > More Details > Start-up Tab (Windows 8/Windows 10) or the All Users Startup Start Menu folder (Windows 7).
    To accomplish the same on Mac OS X, run the following commands one by one:

    sudo launchctl unload /Library/LaunchDaemons/com.crashplan.engine.plist
    sudo mv /Library/LaunchDaemons/com.crashplan.engine.plist /Library/LaunchDaemons/com.crashplan.engine.plist.bak

    The CrashPlan menu bar application can be disabled in System Preferences > Users & Groups > Current User > Login Items



  • The package downloads the CrashPlan installer directly from Code 42 Software, following acceptance of their EULA. I am complying with their wish that no one redistributes it.
  • Real-time backup does not work on PowerPC systems for some unknown reason, despite many attempts to cross compile libjna, and attempts to use binaries taken from various Debian distros (methods that work for the other supported CPU architectures).
  • The engine daemon script checks the amount of system RAM and scales the Java heap size appropriately (up to the default maximum of 512MB). This can be overridden in a persistent way if you are backing up large backup sets by editing /var/packages/CrashPlan/target/syno_package.vars. If you are considering buying a NAS purely to use CrashPlan and intend to back up more than a few hundred GB then I strongly advise buying one of the models with upgradeable RAM. Memory is very limited on the cheaper models. I have found that a 512MB heap was insufficient to back up more than 2TB of files on a Windows server and that was the situation many years ago. It kept restarting the backup engine every few minutes until I increased the heap to 1024MB. Many users of the package have found that they have to increase the heap size or CrashPlan will halt its activity. This can be mitigated by dividing your backup into several smaller backup sets which are scheduled to be protected at different times. Note that from package version 0041, using the dedicated JRE on a 64bit Intel NAS will allow a heap size greater than 4GB since the JRE is 64bit (requires DSM 6.0 in most cases).
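For example, a persistent heap override can be appended to syno_package.vars, which the daemon script sources at each start. A sketch using the path described above:

```shell
# set_max_heap: record a persistent USR_MAX_HEAP override unless one already exists
set_max_heap() {
  VARS_FILE="$1"; HEAP="$2"
  grep -q '^USR_MAX_HEAP=' "$VARS_FILE" 2>/dev/null && return 0  # keep existing value
  echo "USR_MAX_HEAP=${HEAP}" >> "$VARS_FILE"
}
# on the NAS: set_max_heap /var/packages/CrashPlan/target/syno_package.vars 1024M
```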
  • The default location for saving friends’ backups is set to /volume1/crashplan/backupArchives (where /volume1 is your primary storage volume) to eliminate the chance of them being destroyed accidentally by uninstalling the package.
  • If you need to manage CrashPlan from a remote location, I suggest you do so using SSH tunnelling as per this support document.
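The tunnel boils down to forwarding the client port over SSH from your desktop. A sketch; the user name and address are examples for your own setup:

```shell
# forward local port 4243 to the CrashPlan engine on the NAS
# ('root' and the address are examples; adjust for your setup)
ssh -N -L 4243:localhost:4243 root@192.168.1.50
# then point the CrashPlan client at localhost as if the engine were local
```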
  • The package supports upgrading to future versions while preserving the machine identity, logs, login details, and cache. Upgrades can now take place without requiring a login from the client afterwards.
  • If you remove the package completely and re-install it later, you can re-attach to previous backups. When you log in to the Desktop Client with your existing account after a re-install, you can select “adopt computer” to merge the records, and preserve your existing backups. I haven’t tested whether this also re-attaches links to friends’ CrashPlan computers and backup sets, though the latter does seem possible in the Friends section of the GUI. It’s probably a good idea to test that this survives a package reinstall before you start relying on it. Sometimes, particularly with CrashPlan PRO I think, the adopt option is not offered. In this case you can log into CrashPlan Central and retrieve your computer’s GUID. On the CrashPlan client, double-click on the logo in the top right and you’ll enter a command line mode. You can use the GUID command to change the system’s GUID to the one you just retrieved from your account.
  • The log which is displayed in the package’s Log tab is actually the activity history. If you are trying to troubleshoot an issue you will need to use an SSH session to inspect these log files:
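A hedged example of inspecting the engine logs over SSH – these paths are assumptions based on the package layout described on this page, not an exhaustive list:

```shell
# engine logs live under the package target folder (paths assumed from this page)
tail -n 50 /var/packages/CrashPlan/target/log/engine_error.log   # startup failures
tail -f /var/packages/CrashPlan/target/log/engine_output.log     # live engine output
```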
  • When CrashPlan downloads and attempts to run an automatic update, its upgrade script will most likely fail and stop the package. This is typically caused by syntax differences with the Synology versions of certain Linux shell commands (like rm, mv, or ps). The startup script will attempt to apply the published upgrade the next time the package is started.
  • Although CrashPlan’s activity can be scheduled within the application, in order to save RAM some users may wish to restrict running the CrashPlan engine to specific times of day using the Task Scheduler in DSM Control Panel:
    Schedule service start
    This is particularly useful on ARM systems because CrashPlan currently prevents hibernation while it is running (unresolved issue, reported to Code 42). Note that regardless of real-time backup, by default CrashPlan will scan the whole backup selection for changes at 3:00am. Include this time within your Task Scheduler time window or else CrashPlan will not capture file changes which occurred while it was inactive:
    Schedule Service Start
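The Task Scheduler entries can be plain user-defined scripts that call synopkg. A sketch; the times are examples chosen to bracket the 3:00am scan:

```shell
# DSM Task Scheduler user-defined script entries (sketch)
/usr/syno/bin/synopkg start CrashPlan   # e.g. scheduled daily at 02:45
/usr/syno/bin/synopkg stop CrashPlan    # e.g. scheduled daily at 07:00
```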

  • If you decide to sign up for one of CrashPlan’s paid backup services as a result of my work on this, please consider donating using the PayPal button on the right of this page.

Package scripts

For information, here are the package scripts so you can see what the package is going to do. You can get more information about how packages work by reading the Synology 3rd Party Developer Guide.


#--------CRASHPLAN installer script
#--------package maintained at

[ "${SYNOPKG_PKGNAME}" == "CrashPlan" ] && DOWNLOAD_FILE="CrashPlan_4.8.0_Linux.tgz"
[ "${SYNOPKG_PKGNAME}" == "CrashPlanPRO" ] && DOWNLOAD_FILE="CrashPlanPRO_4.8.0_Linux.tgz"
if [ "${SYNOPKG_PKGNAME}" == "CrashPlanPROe" ]; then
  [ "${WIZARD_VER_480}" == "true" ] && { CPPROE_VER="4.8.0"; CP_EXTRACTED_FOLDER="crashplan-install"; OLD_JNA_NEEDED="false"; }
  [ "${WIZARD_VER_470}" == "true" ] && { CPPROE_VER="4.7.0"; CP_EXTRACTED_FOLDER="crashplan-install"; OLD_JNA_NEEDED="false"; }
  [ "${WIZARD_VER_460}" == "true" ] && { CPPROE_VER="4.6.0"; CP_EXTRACTED_FOLDER="crashplan-install"; OLD_JNA_NEEDED="false"; }
  [ "${WIZARD_VER_452}" == "true" ] && { CPPROE_VER="4.5.2"; CP_EXTRACTED_FOLDER="crashplan-install"; OLD_JNA_NEEDED="false"; }
  [ "${WIZARD_VER_450}" == "true" ] && { CPPROE_VER="4.5.0"; CP_EXTRACTED_FOLDER="crashplan-install"; OLD_JNA_NEEDED="false"; }
  [ "${WIZARD_VER_441}" == "true" ] && { CPPROE_VER="4.4.1"; CP_EXTRACTED_FOLDER="crashplan-install"; OLD_JNA_NEEDED="false"; }
  [ "${WIZARD_VER_430}" == "true" ] && CPPROE_VER="4.3.0"
  [ "${WIZARD_VER_420}" == "true" ] && CPPROE_VER="4.2.0"
  [ "${WIZARD_VER_370}" == "true" ] && CPPROE_VER="3.7.0"
  [ "${WIZARD_VER_364}" == "true" ] && CPPROE_VER="3.6.4"
  [ "${WIZARD_VER_363}" == "true" ] && CPPROE_VER="3.6.3"
  [ "${WIZARD_VER_3614}" == "true" ] && CPPROE_VER=""
  [ "${WIZARD_VER_353}" == "true" ] && CPPROE_VER="3.5.3"
  [ "${WIZARD_VER_341}" == "true" ] && CPPROE_VER="3.4.1"
  [ "${WIZARD_VER_33}" == "true" ] && CPPROE_VER="3.3"
fi
SYNO_CPU_ARCH="`uname -m`"
[ "${SYNO_CPU_ARCH}" == "x86_64" ] && SYNO_CPU_ARCH="i686"
[ "${SYNO_CPU_ARCH}" == "armv5tel" ] && SYNO_CPU_ARCH="armel"
[ "${SYNOPKG_DSM_ARCH}" == "armada375" ] && SYNO_CPU_ARCH="armv7l"
[ "${SYNOPKG_DSM_ARCH}" == "armada38x" ] && SYNO_CPU_ARCH="armhf"
[ "${SYNOPKG_DSM_ARCH}" == "comcerto2k" ] && SYNO_CPU_ARCH="armhf"
[ "${SYNOPKG_DSM_ARCH}" == "alpine" ] && SYNO_CPU_ARCH="armhf"
[ "${SYNOPKG_DSM_ARCH}" == "alpine4k" ] && SYNO_CPU_ARCH="armhf"
[ "${SYNOPKG_DSM_ARCH}" == "monaco" ] && SYNO_CPU_ARCH="armhf"
NATIVE_BINS_FILE="`echo ${NATIVE_BINS_URL} | sed -r "s%^.*/(.*)%\1%"`"
OLD_JNA_FILE="`echo ${OLD_JNA_URL} | sed -r "s%^.*/(.*)%\1%"`"
TEMP_FOLDER="`find / -maxdepth 2 -path '/volume?/@tmp' | head -n 1`"
#the Manifest folder is where friends' backup data is stored
#we set it outside the app folder so it persists after a package uninstall
MANIFEST_FOLDER="/`echo $TEMP_FOLDER | cut -f2 -d'/'`/crashplan"
UPGRADE_FILES="syno_package.vars conf/my.service.xml conf/service.login conf/service.model"
PUBLIC_FOLDER="`synoshare --get public | sed -r "/Path/!d;s/^.*\[(.*)\].*$/\1/"`"
#dedicated JRE section
if [ "${WIZARD_JRE_CP}" == "true" ]; then
  #detect systems capable of running 64bit JRE which can address more than 4GB of RAM
  [ "${SYNOPKG_DSM_ARCH}" == "x64" ] && SYNO_CPU_ARCH="x64"
  [ "`uname -m`" == "x86_64" ] && [ ${SYNOPKG_DSM_VERSION_MAJOR} -ge 6 ] && SYNO_CPU_ARCH="x64"
  if [ "${SYNO_CPU_ARCH}" == "armel" ]; then
    JAVA_BUILD="ARMv5/ARMv6/ARMv7 Linux - SoftFP ABI, Little Endian 2"
  elif [ "${SYNO_CPU_ARCH}" == "armv7l" ]; then
    JAVA_BUILD="ARMv5/ARMv6/ARMv7 Linux - SoftFP ABI, Little Endian 2"
  elif [ "${SYNO_CPU_ARCH}" == "armhf" ]; then
    JAVA_BUILD="ARMv6/ARMv7 Linux - VFP, HardFP ABI, Little Endian 1"
  elif [ "${SYNO_CPU_ARCH}" == "ppc" ]; then
    #Oracle have discontinued Java 8 for PowerPC after update 6
    JAVA_BUILD="Power Architecture Linux - Headless - e500v2 with double-precision SPE Floating Point Unit"
  elif [ "${SYNO_CPU_ARCH}" == "i686" ]; then
    JAVA_BUILD="x86 Linux Small Footprint - Headless"
  elif [ "${SYNO_CPU_ARCH}" == "x64" ]; then
    JAVA_BUILD="Linux x64"
  fi
fi
JAVA_BINARY=`echo ${JAVA_BINARY} | cut -f1 -d'.'`
source /etc/profile

pre_checks ()
  #These checks are called from preinst and from preupgrade functions to prevent failures resulting in a partially upgraded package
  if [ "${WIZARD_JRE_CP}" == "true" ]; then
    synoshare -get public > /dev/null || (
      echo "A shared folder called 'public' could not be found - note this name is case-sensitive. " >> $SYNOPKG_TEMP_LOGFILE
      echo "Please create this using the Shared Folder DSM Control Panel and try again." >> $SYNOPKG_TEMP_LOGFILE
      exit 1

    [ -f ${PUBLIC_FOLDER}/${JAVA_BINARY}.tar.gz ] && JAVA_BINARY_FOUND=true
    [ -f ${PUBLIC_FOLDER}/${JAVA_BINARY}.tar.tar ] && JAVA_BINARY_FOUND=true
    if [ -z ${JAVA_BINARY_FOUND} ]; then
      echo "Java binary bundle not found. " >> $SYNOPKG_TEMP_LOGFILE
      echo "I was expecting the file ${PUBLIC_FOLDER}/${JAVA_BINARY}.tar.gz. " >> $SYNOPKG_TEMP_LOGFILE
      echo "Please agree to the Oracle licence at ${DOWNLOAD_URL}, then download the '${JAVA_BUILD}' package" >> $SYNOPKG_TEMP_LOGFILE
      echo "and place it in the 'public' shared folder on your NAS. This download cannot be automated even if " >> $SYNOPKG_TEMP_LOGFILE
      echo "displaying a package EULA could potentially cover the legal aspect, because files hosted on Oracle's " >> $SYNOPKG_TEMP_LOGFILE
      echo "server are protected by a session cookie requiring a JavaScript enabled browser." >> $SYNOPKG_TEMP_LOGFILE
      exit 1
    if [ -z ${JAVA_HOME} ]; then
      echo "Java is not installed or not properly configured. JAVA_HOME is not defined. " >> $SYNOPKG_TEMP_LOGFILE
      echo "Download and install the Java Synology package from" >> $SYNOPKG_TEMP_LOGFILE
      exit 1

    if [ ! -f ${JAVA_HOME}/bin/java ]; then
      echo "Java is not installed or not properly configured. The Java binary could not be located. " >> $SYNOPKG_TEMP_LOGFILE
      echo "Download and install the Java Synology package from" >> $SYNOPKG_TEMP_LOGFILE
      exit 1

    if [ "${WIZARD_JRE_SYS}" == "true" ]; then
      JAVA_VER=`java -version 2>&1 | sed -r "/^.* version/!d;s/^.* version \"[0-9]\.([0-9]).*$/\1/"`
      if [ ${JAVA_VER} -lt 8 ]; then
        echo "This version of CrashPlan requires Java 8 or newer. Please update your Java package. "
        exit 1

preinst ()
    WGET_FILENAME="`echo ${WGET_URL} | sed -r "s%^.*/(.*)%\1%"`"
    wget ${WGET_URL}
    if [[ $? != 0 ]]; then
      if [ -d ${PUBLIC_FOLDER} ] && [ -f ${PUBLIC_FOLDER}/${WGET_FILENAME} ]; then
        echo "There was a problem downloading ${WGET_FILENAME} from the official download link, " >> $SYNOPKG_TEMP_LOGFILE
        echo "which was \"${WGET_URL}\" " >> $SYNOPKG_TEMP_LOGFILE
        echo "Alternatively, you may download this file manually and place it in the 'public' shared folder. " >> $SYNOPKG_TEMP_LOGFILE
        exit 1
  exit 0

postinst ()
  if [ "${WIZARD_JRE_CP}" == "true" ]; then
    #extract Java (Web browsers love to interfere with .tar.gz files)
    if [ -f ${JAVA_BINARY}.tar.gz ]; then
      #Firefox seems to be the only browser that leaves it alone
      tar xzf ${JAVA_BINARY}.tar.gz
    elif [ -f ${JAVA_BINARY}.gz ]; then
      tar xzf ${JAVA_BINARY}.gz
    elif [ -f ${JAVA_BINARY}.tar ]; then
      tar xf ${JAVA_BINARY}.tar
    elif [ -f ${JAVA_BINARY}.tar.tar ]; then
      #Internet Explorer
      tar xzf ${JAVA_BINARY}.tar.tar
    JRE_PATH="`find ${OPTDIR}/jre-syno/ -name jre`"
    [ -z ${JRE_PATH} ] && JRE_PATH=${OPTDIR}/jre-syno
    #change owner of folder tree
    chown -R root:root ${SYNOPKG_PKGDEST}
  #extract CPU-specific additional binaries
  mkdir ${SYNOPKG_PKGDEST}/bin
  [ "${OLD_JNA_NEEDED}" == "true" ] && tar xJf ${TEMP_FOLDER}/${OLD_JNA_FILE} && rm ${TEMP_FOLDER}/${OLD_JNA_FILE}

  #extract main archive
  #extract cpio archive
  cat "${TEMP_FOLDER}/${CP_EXTRACTED_FOLDER}"/${CPI_FILE} | gzip -d -c - | ${SYNOPKG_PKGDEST}/bin/cpio -i --no-preserve-owner
  echo "#uncomment to expand Java max heap size beyond prescribed value (will survive upgrades)" > ${SYNOPKG_PKGDEST}/syno_package.vars
  echo "#you probably only want more than the recommended 1024M if you're backing up extremely large volumes of files" >> ${SYNOPKG_PKGDEST}/syno_package.vars
  echo "#USR_MAX_HEAP=1024M" >> ${SYNOPKG_PKGDEST}/syno_package.vars
  echo >> ${SYNOPKG_PKGDEST}/syno_package.vars

  cp ${TEMP_FOLDER}/${CP_EXTRACTED_FOLDER}/scripts/CrashPlanEngine ${OPTDIR}/bin
  cp ${TEMP_FOLDER}/${CP_EXTRACTED_FOLDER}/scripts/run.conf ${OPTDIR}/bin
  mkdir -p ${MANIFEST_FOLDER}/backupArchives    
  #save install variables which Crashplan expects its own installer script to create
  echo BINSDIR=/bin >> ${VARS_FILE}
  echo MANIFESTDIR=${MANIFEST_FOLDER}/backupArchives >> ${VARS_FILE}
  #leave these ones out which should help upgrades from Code42 to work (based on examining an upgrade script)
  #echo INITDIR=/etc/init.d >> ${VARS_FILE}
  #echo RUNLVLDIR=/usr/syno/etc/rc.d >> ${VARS_FILE}
  echo INSTALLDATE=`date +%Y%m%d` >> ${VARS_FILE}
  [ "${WIZARD_JRE_CP}" == "true" ] && echo JAVACOMMON=${JRE_PATH}/bin/java >> ${VARS_FILE}
  [ "${WIZARD_JRE_SYS}" == "true" ] && echo JAVACOMMON=\${JAVA_HOME}/bin/java >> ${VARS_FILE}
  cat ${TEMP_FOLDER}/${CP_EXTRACTED_FOLDER}/install.defaults >> ${VARS_FILE}
  #remove temp files
  #add firewall config
  /usr/syno/bin/servicetool --install-configure-file --package /var/packages/${SYNOPKG_PKGNAME}/scripts/${SYNOPKG_PKGNAME}.sc > /dev/null
  #amend CrashPlanPROe client version
  [ "${SYNOPKG_PKGNAME}" == "CrashPlanPROe" ] && sed -i -r "s/^version=\".*(-.*$)/version=\"${CPPROE_VER}\1/" /var/packages/${SYNOPKG_PKGNAME}/INFO

  exit 0

preuninst ()
  `dirname $0`/stop-start-status stop

  exit 0

postuninst ()
  if [ -f ${SYNOPKG_PKGDEST}/syno_package.vars ]; then
    source ${SYNOPKG_PKGDEST}/syno_package.vars
  [ -e ${OPTDIR}/lib/ ] && rm ${OPTDIR}/lib/

  #delete symlink if it no longer resolves - PowerPC only
  if [ ! -e /lib/ ]; then
    [ -L /lib/ ] && rm /lib/

  #remove firewall config
  if [ "${SYNOPKG_PKG_STATUS}" == "UNINSTALL" ]; then
    /usr/syno/bin/servicetool --remove-configure-file --package ${SYNOPKG_PKGNAME}.sc > /dev/null

 exit 0

preupgrade ()
  `dirname $0`/stop-start-status stop
  #if identity exists back up config
  if [ -f /var/lib/crashplan/.identity ]; then
    mkdir -p ${SYNOPKG_PKGDEST}/../${SYNOPKG_PKGNAME}_data_mig/conf
      if [ -f ${OPTDIR}/${FILE_TO_MIGRATE} ]; then
      if [ -d ${OPTDIR}/${FOLDER_TO_MIGRATE} ]; then

  exit 0

postupgrade ()
  #use the migrated identity and config data from the previous version
  if [ -f ${SYNOPKG_PKGDEST}/../${SYNOPKG_PKGNAME}_data_mig/conf/my.service.xml ]; then
      if [ -f ${SYNOPKG_PKGDEST}/../${SYNOPKG_PKGNAME}_data_mig/${FILE_TO_MIGRATE} ]; then
    if [ -d ${SYNOPKG_PKGDEST}/../${SYNOPKG_PKGNAME}_data_mig/${FOLDER_TO_MIGRATE} ]; then
    rmdir ${SYNOPKG_PKGDEST}/../${SYNOPKG_PKGNAME}_data_mig/conf
    rmdir ${SYNOPKG_PKGDEST}/../${SYNOPKG_PKGNAME}_data_mig
    #make CrashPlan log entry
    TIMESTAMP="`date "+%D %I:%M%p"`"
    echo "I ${TIMESTAMP} Synology Package Center updated ${SYNOPKG_PKGNAME} to version ${SYNOPKG_PKGVER}" >> ${LOG_FILE}
  exit 0


#--------CRASHPLAN start-stop-status script
#--------package maintained at

TEMP_FOLDER="`find / -maxdepth 2 -path '/volume?/@tmp' | head -n 1`"
MANIFEST_FOLDER="/`echo $TEMP_FOLDER | cut -f2 -d'/'`/crashplan" 
PKG_FOLDER="`dirname $0 | cut -f1-4 -d'/'`"
DNAME="`dirname $0 | cut -f4 -d'/'`"
JAVA_MIN_HEAP=`grep "^${CFG_PARAM}=" "${OPTDIR}/bin/${ENGINE_CFG}" | sed -r "s/^.*-Xms([0-9]+)[Mm] .*$/\1/"` 
SYNO_CPU_ARCH="`uname -m`"
TIMESTAMP="`date "+%D %I:%M%p"`"
source ${OPTDIR}/install.vars
source /etc/profile
source /root/.profile

start_daemon ()
  #check persistent variables from syno_package.vars
  if [ -f ${OPTDIR}/syno_package.vars ]; then
    source ${OPTDIR}/syno_package.vars
  USR_MAX_HEAP=`echo $USR_MAX_HEAP | sed -e "s/[mM]//"`

  #do we need to restore the identity file - has a DSM upgrade scrubbed /var/lib/crashplan?
  if [ ! -e /var/lib/crashplan ]; then
    mkdir /var/lib/crashplan
    [ -e ${OPTDIR}/conf/var-backup/.identity ] && cp ${OPTDIR}/conf/var-backup/.identity /var/lib/crashplan/

  #fix up some of the binary paths and fix some command syntax for busybox 
  #moved this to from because Code42 push updates and these
  #new scripts will need this treatment too
  find ${OPTDIR}/ -name "*.sh" | while IFS="" read -r FILE_TO_EDIT; do
    if [ -e ${FILE_TO_EDIT} ]; then
      #this list of substitutions will probably need expanding as new CrashPlan updates are released
      sed -i "s%^#!/bin/bash%#!/bin/sh%" "${FILE_TO_EDIT}"
      sed -i -r "s%(^\s*)(/bin/cpio |cpio ) %\1/${OPTDIR}/bin/cpio %" "${FILE_TO_EDIT}"
      sed -i -r "s%(^\s*)(/bin/ps|ps) [^w][^\|]*\|%\1/bin/ps w \|%" "${FILE_TO_EDIT}"
      sed -i -r "s%\`ps [^w][^\|]*\|%\`ps w \|%" "${FILE_TO_EDIT}"
      sed -i -r "s%^ps [^w][^\|]*\|%ps w \|%" "${FILE_TO_EDIT}"
      sed -i "s/rm -fv/rm -f/" "${FILE_TO_EDIT}"
      sed -i "s/mv -fv/mv -f/" "${FILE_TO_EDIT}"

  #use this daemon init script rather than the unreliable Code42 stock one which greps the ps output
  sed -i "s%^ENGINE_SCRIPT=.*$%ENGINE_SCRIPT=$0%" ${OPTDIR}/bin/

  #any downloaded upgrade script will usually have failed despite the above changes
  #so ignore the script and explicitly extract the new java code using the method 
  #thanks to Jeff Bingham for tweaks 
  UPGRADE_JAR=`find ${OPTDIR}/upgrade -maxdepth 1 -name "*.jar" | tail -1`
  if [ -n "${UPGRADE_JAR}" ]; then
    rm ${OPTDIR}/*.pid > /dev/null
    #make CrashPlan log entry
    echo "I ${TIMESTAMP} Synology extracting upgrade from ${UPGRADE_JAR}" >> ${DLOG}

    UPGRADE_VER=`echo ${SCRIPT_HOME} | sed -r "s/^.*\/([0-9_]+)\.[0-9]+/\1/"`
    #DSM 6.0 no longer includes unzip, use 7z instead
    unzip -o ${OPTDIR}/upgrade/${UPGRADE_VER}.jar "*.jar" -d ${OPTDIR}/lib/ || 7z e -y ${OPTDIR}/upgrade/${UPGRADE_VER}.jar "*.jar" -o${OPTDIR}/lib/ > /dev/null
    unzip -o ${OPTDIR}/upgrade/${UPGRADE_VER}.jar "lang/*" -d ${OPTDIR} || 7z e -y ${OPTDIR}/upgrade/${UPGRADE_VER}.jar "lang/*" -o${OPTDIR} > /dev/null
    mv ${UPGRADE_JAR} ${TEMP_FOLDER}/ > /dev/null
    exec $0

  #updates may also overwrite our native binaries
  [ -e ${OPTDIR}/bin/ ] && cp -f ${OPTDIR}/bin/ ${OPTDIR}/lib/
  [ -e ${OPTDIR}/bin/ ] && cp -f ${OPTDIR}/bin/ ${OPTDIR}/
  [ -e ${OPTDIR}/bin/jna-3.2.5.jar ] && cp -f ${OPTDIR}/bin/jna-3.2.5.jar ${OPTDIR}/lib/
  if [ -e ${OPTDIR}/bin/jna.jar ] && [ -e ${OPTDIR}/lib/jna.jar ]; then
    cp -f ${OPTDIR}/bin/jna.jar ${OPTDIR}/lib/

  #create or repair symlink if a DSM upgrade has removed it - PowerPC only
  if [ -e ${OPTDIR}/lib/ ]; then
    if [ ! -e /lib/ ]; then
      #if it doesn't exist, but is still a link then it's a broken link and should be deleted first
      [ -L /lib/ ] && rm /lib/
      ln -s ${OPTDIR}/lib/ /lib/

  #set appropriate Java max heap size
  RAM=$((`free | grep Mem: | sed -e "s/^ *Mem: *\([0-9]*\).*$/\1/"`/1024))
  if [ $RAM -le 128 ]; then
  elif [ $RAM -le 256 ]; then
  elif [ $RAM -le 512 ]; then
  elif [ $RAM -le 1024 ]; then
  elif [ $RAM -gt 1024 ]; then
  if [ $USR_MAX_HEAP -gt $JAVA_MAX_HEAP ]; then
  if [ $JAVA_MAX_HEAP -lt $JAVA_MIN_HEAP ]; then
    #can't have a max heap lower than min heap (ARM low RAM systems)
  sed -i -r "s/(^${CFG_PARAM}=.*) -Xmx[0-9]+[mM] (.*$)/\1 -Xmx${JAVA_MAX_HEAP}m \2/" "${OPTDIR}/bin/${ENGINE_CFG}"
  #disable the use of the x86-optimized external Fast MD5 library if running on ARM and PPC CPUs
  #seems to be the default behaviour now but that may change again
  [ "${SYNO_CPU_ARCH}" == "x86_64" ] && SYNO_CPU_ARCH="i686"
  if [ "${SYNO_CPU_ARCH}" != "i686" ]; then
    grep "^${CFG_PARAM}=.*c42\.native\.md5\.enabled" "${OPTDIR}/bin/${ENGINE_CFG}" > /dev/null \
     || sed -i -r "s/(^${CFG_PARAM}=\".*)\"$/\1 -Dc42.native.md5.enabled=false\"/" "${OPTDIR}/bin/${ENGINE_CFG}"

  #move the Java temp directory from the default of /tmp
  grep "^${CFG_PARAM}=.*Djava\.io\.tmpdir" "${OPTDIR}/bin/${ENGINE_CFG}" > /dev/null \
   || sed -i -r "s%(^${CFG_PARAM}=\".*)\"$%\1${TEMP_FOLDER}\"%" "${OPTDIR}/bin/${ENGINE_CFG}"

  #now edit the XML config file, which only exists after first run
  if [ -f ${OPTDIR}/conf/my.service.xml ]; then

    #allow direct connections from CrashPlan Desktop client on remote systems
    #you must edit the value of serviceHost in conf/ on the client you connect with
    #users report that this value is sometimes reset so now it's set every service startup 
    sed -i "s/<serviceHost>127\.0\.0\.1<\/serviceHost>/<serviceHost>0\.0\.0\.0<\/serviceHost>/" "${OPTDIR}/conf/my.service.xml"
    #default changed in CrashPlan 4.3
    sed -i "s/<serviceHost>localhost<\/serviceHost>/<serviceHost>0\.0\.0\.0<\/serviceHost>/" "${OPTDIR}/conf/my.service.xml"
    #since CrashPlan 4.4 another config file to allow remote console connections
    sed -i "s/127\.0\.0\.1/0\.0\.0\.0/" /var/lib/crashplan/.ui_info
    #this change is made only once in case you want to customize the friends' backup location
    if [ "${MANIFEST_PATH_SET}" != "True" ]; then

      #keep friends' backup data outside the application folder to make accidental deletion less likely 
      sed -i "s%<manifestPath>.*</manifestPath>%<manifestPath>${MANIFEST_FOLDER}/backupArchives/</manifestPath>%" "${OPTDIR}/conf/my.service.xml"
      echo "MANIFEST_PATH_SET=True" >> ${OPTDIR}/syno_package.vars

    #since CrashPlan version 3.5.3 the value javaMemoryHeapMax also needs setting to match that used in bin/run.conf
    sed -i -r "s%(<javaMemoryHeapMax>)[0-9]+[mM](</javaMemoryHeapMax>)%\1${JAVA_MAX_HEAP}m\2%" "${OPTDIR}/conf/my.service.xml"

    #make sure CrashPlan is not binding to the IPv6 stack
    grep "\-Djava\.net\.preferIPv4Stack=true" "${OPTDIR}/bin/${ENGINE_CFG}" > /dev/null \
     || sed -i -r "s/(^${CFG_PARAM}=\".*)\"$/\1\"/" "${OPTDIR}/bin/${ENGINE_CFG}"
    echo "Check the package log to ensure the package has started successfully, then stop and restart the package to allow desktop client connections." > "${SYNOPKG_TEMP_LOGFILE}"

  #increase the system-wide maximum number of open files from the Synology default of 24466
  [ `cat /proc/sys/fs/file-max` -lt 65536 ] && echo "65536" > /proc/sys/fs/file-max

  #raise the maximum open file count from the Synology default of 1024 - thanks Casper K. for figuring this out
  ulimit -n 65536

  #ensure that Code 42 have not amended install.vars to force the use of their own (Intel) JRE
  if [ -e ${OPTDIR}/jre-syno ]; then
    JRE_PATH="`find ${OPTDIR}/jre-syno/ -name jre`"
    [ -z ${JRE_PATH} ] && JRE_PATH=${OPTDIR}/jre-syno
    sed -i -r "s|^(JAVACOMMON=).*$|\1\${JRE_PATH}/bin/java|" ${OPTDIR}/install.vars
    #if missing, set timezone and locale for the dedicated JRE
    if [ -z ${TZ} ]; then
      SYNO_TZ=`cat /etc/synoinfo.conf | grep timezone | cut -f2 -d'"'`
      #fix for DST time in DSM 5.2 thanks to the MinimServer Syno package author
      [ -e /usr/share/zoneinfo/Timezone/synotztable.json ] \
       && SYNO_TZ=`jq ".${SYNO_TZ} | .nameInTZDB" /usr/share/zoneinfo/Timezone/synotztable.json | sed -e "s/\"//g"` \
       || SYNO_TZ=`grep "^${SYNO_TZ}" /usr/share/zoneinfo/Timezone/tzname | sed -e "s/^.*= //"`
      export TZ=${SYNO_TZ}
    fi
    [ -z ${LANG} ] && export LANG=en_US.utf8
    export CLASSPATH=.:${OPTDIR}/jre-syno/lib
  else
    sed -i -r "s|^(JAVACOMMON=).*$|\1\${JAVA_HOME}/bin/java|" ${OPTDIR}/install.vars
  fi

  source ${OPTDIR}/bin/run.conf
  source ${OPTDIR}/install.vars
  cd ${OPTDIR}
  $JAVACOMMON $SRV_JAVA_OPTS -classpath $FULL_CP com.backup42.service.CPService > ${OPTDIR}/log/engine_output.log 2> ${OPTDIR}/log/engine_error.log &
  if [ $! -gt 0 ]; then
    echo $! > $PID_FILE
    renice 19 $! > /dev/null
    if [ -z "${SYNOPKG_PKGDEST}" ]; then
      #script was manually invoked, need this to show the status change in Package Center
      [ -e ${PKG_FOLDER}/enabled ] || touch ${PKG_FOLDER}/enabled
    fi
  else
    echo "${DNAME} failed to start, check ${OPTDIR}/log/engine_error.log" > "${SYNOPKG_TEMP_LOGFILE}"
    echo "${DNAME} failed to start, check ${OPTDIR}/log/engine_error.log" >&2
    exit 1
  fi
}
stop_daemon ()
{
  echo "I ${TIMESTAMP} Stopping ${DNAME}" >> ${DLOG}
  kill `cat ${PID_FILE}`
  wait_for_status 1 20 || kill -9 `cat ${PID_FILE}`
  rm -f ${PID_FILE}
  if [ -z ${SYNOPKG_PKGDEST} ]; then
    #script was manually invoked, need this to show the status change in Package Center
    [ -e ${PKG_FOLDER}/enabled ] && rm ${PKG_FOLDER}/enabled
  fi
  #backup the identity file in case a DSM upgrade removes it
  [ -e ${OPTDIR}/conf/var-backup ] || mkdir ${OPTDIR}/conf/var-backup
  cp /var/lib/crashplan/.identity ${OPTDIR}/conf/var-backup/
}

daemon_status ()
{
  if [ -f ${PID_FILE} ] && kill -0 `cat ${PID_FILE}` > /dev/null 2>&1; then
    return 0
  fi
  rm -f ${PID_FILE}
  return 1
}

wait_for_status ()
{
  counter=$2
  while [ ${counter} -gt 0 ]; do
    daemon_status
    [ $? -eq $1 ] && return
    let counter=counter-1
    sleep 1
  done
  return 1
}

case $1 in
  start)
    if daemon_status; then
      echo ${DNAME} is already running with PID `cat ${PID_FILE}`
      exit 0
    else
      echo Starting ${DNAME} ...
      start_daemon
      exit $?
    fi
    ;;
  stop)
    if daemon_status; then
      echo Stopping ${DNAME} ...
      stop_daemon
      exit $?
    else
      echo ${DNAME} is not running
      exit 0
    fi
    ;;
  restart)
    stop_daemon
    start_daemon
    exit $?
    ;;
  status)
    if daemon_status; then
      echo ${DNAME} is running with PID `cat ${PID_FILE}`
      exit 0
    else
      echo ${DNAME} is not running
      exit 1
    fi
    ;;
  log)
    echo "${DLOG}"
    exit 0
    ;;
  *)
    echo "Usage: $0 {start|stop|status|restart}" >&2
    exit 1
    ;;
esac
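The finished script supports manual invocation from the command line over SSH. A hedged example, assuming the standard DSM package script location for the plain CrashPlan package (the path differs for the PRO and PROe variants):

```shell
# Assumed path for the plain CrashPlan package; substitute CrashPlanPRO or
# CrashPlanPROe for those package variants
SCRIPT=/var/packages/CrashPlan/scripts/start-stop-status

if [ -x "$SCRIPT" ]; then
  # report the daemon state, and start it if it is not running
  "$SCRIPT" status || "$SCRIPT" start
else
  echo "package script not found at $SCRIPT"
fi
```

Running `status` also cleans up a stale PID file as a side effect, thanks to the `daemon_status` function above.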


install_uifile & upgrade_uifile

    "step_title": "Client Version Selection",
    "items": [
        "type": "singleselect",
        "desc": "Please select the CrashPlanPROe client version that is appropriate for your backup destination server:",
        "subitems": [
            "key": "WIZARD_VER_480",
            "desc": "4.8.0",
            "defaultValue": true
            "key": "WIZARD_VER_470",
            "desc": "4.7.0",
            "defaultValue": false
            "key": "WIZARD_VER_460",
            "desc": "4.6.0",
            "defaultValue": false
            "key": "WIZARD_VER_452",
            "desc": "4.5.2",
            "defaultValue": false
            "key": "WIZARD_VER_450",
            "desc": "4.5.0",
            "defaultValue": false
            "key": "WIZARD_VER_441",
            "desc": "4.4.1",
            "defaultValue": false
            "key": "WIZARD_VER_430",
            "desc": "4.3.0",
            "defaultValue": false
            "key": "WIZARD_VER_420",
            "desc": "4.2.0",
            "defaultValue": false
            "key": "WIZARD_VER_370",
            "desc": "3.7.0",
            "defaultValue": false
            "key": "WIZARD_VER_364",
            "desc": "3.6.4",
            "defaultValue": false
            "key": "WIZARD_VER_363",
            "desc": "3.6.3",
            "defaultValue": false
            "key": "WIZARD_VER_3614",
            "desc": "",
            "defaultValue": false
            "key": "WIZARD_VER_353",
            "desc": "3.5.3",
            "defaultValue": false
            "key": "WIZARD_VER_341",
            "desc": "3.4.1",
            "defaultValue": false
            "key": "WIZARD_VER_33",
            "desc": "3.3",
            "defaultValue": false
    "step_title": "Java Runtime Environment Selection",
    "items": [
        "type": "singleselect",
        "desc": "Please select the Java version which you would like CrashPlan to use:",
        "subitems": [
            "key": "WIZARD_JRE_SYS",
            "desc": "Default system Java version",
            "defaultValue": false
            "key": "WIZARD_JRE_CP",
            "desc": "Dedicated installation of Java 8",
            "defaultValue": true


  • 0042 03/Oct/16 – Updated to CrashPlan 4.8.0; Java 8 is now required; added an optional dedicated Java 8 Runtime instead of the default system one, including 64-bit Java support on 64-bit Intel CPUs to permit memory allocation larger than 4GB; support for non-Intel platforms withdrawn owing to Code42’s reliance on a proprietary native code library
  • 0041 20/Jul/16 – Improved auto-upgrade compatibility (hopefully), added an option to have CrashPlan use a dedicated Java 7 Runtime instead of the default system one, including 64-bit Java support on 64-bit Intel CPUs to permit memory allocation larger than 4GB
  • 0040 25/May/16 – Added cpio to the path in the running context of
  • 0039 25/May/16 – Updated to CrashPlan 4.7.0, at each launch forced the use of the system JRE over the CrashPlan bundled Intel one, added Maven build of JNA 4.1.0 for ARMv7 systems consistent with the version bundled with CrashPlan
  • 0038 27/Apr/16 – Updated to CrashPlan 4.6.0, and improved support for Code 42 pushed updates
  • 0037 21/Jan/16 – Updated to CrashPlan 4.5.2
  • 0036 14/Dec/15 – Updated to CrashPlan 4.5.0, separate firewall definitions for management client and for friends backup, added support for DS716+ and DS216play
  • 0035 06/Nov/15 – Fixed the update to 4.4.1_59, new installs now listen for remote connections after second startup (was broken from 4.4), updated client install documentation with more file locations and added a link to a new Code42 support doc
    EITHER completely remove and reinstall the package (which will require a rescan of the entire backup set) OR alternatively please delete all except for one of the failed upgrade numbered subfolders in /var/packages/CrashPlan/target/upgrade before upgrading. There will be one folder for each time CrashPlan tried and failed to start since Code42 pushed the update
  • 0034 04/Oct/15 – Updated to CrashPlan 4.4.1, bundled newer JNA native libraries to match those from Code42, PLEASE READ UPDATED BLOG POST INSTRUCTIONS FOR CLIENT INSTALL this version introduced yet another requirement for the client
  • 0033 12/Aug/15 – Fixed version 0032 client connection issue for fresh installs
  • 0032 12/Jul/15 – Updated to CrashPlan 4.3, PLEASE READ UPDATED BLOG POST INSTRUCTIONS FOR CLIENT INSTALL this version introduced an extra requirement, changed update repair to use the method, forced CrashPlan to prefer IPv4 over IPv6 bindings, removed some legacy version migration scripting, updated main blog post documentation
  • 0031 20/May/15 – Updated to CrashPlan 4.2, cross compiled a newer cpio binary for some architectures which were segfaulting while unpacking main CrashPlan archive, added port 4242 to the firewall definition (friend backups), package is now signed with repository private key
  • 0030 16/Feb/15 – Fixed show-stopping issue with version 0029 for systems with more than one volume
  • 0029 21/Jan/15 – Updated to CrashPlan version 3.7.0, improved detection of temp folder (prevent use of /var/@tmp), added support for Annapurna Alpine AL514 CPU (armhf) in DS2015xs, added support for Marvell Armada 375 CPU (armhf) in DS215j, abandoned practical efforts to try to support Code42’s upgrade scripts, abandoned inotify support (realtime backup) on PowerPC after many failed attempts with self-built and pre-built jtux and jna libraries, back-merged older libffi support for old PowerPC binaries after it was removed in 0028 re-write
  • 0028 22/Oct/14 – Substantial re-write:
    Updated to CrashPlan version 3.6.4
    DSM 5.0 or newer is now required
    Native JNA library taken from the Debian JNA 3.2.7 package, with a dependency on a newer version (included in DSM 5.0)
    jna-3.2.5.jar emptied of irrelevant CPU architecture libs to reduce size
    Increased default max heap size from 512MB to 1GB on systems with more than 1GB RAM
    Intel CPUs no longer need the awkward glibc version-faking shim to enable inotify support (for real-time backup)
    Switched to using root account – no more adding account permissions for backup, package upgrades will no longer break this
    DSM Firewall application definition added
    Tested with DSM Task Scheduler to allow backups between certain times of day only, saving RAM when not in use
    Daemon init script now uses a proper PID file instead of Code42’s unreliable method of using grep on the output of ps
    Daemon init script can be run from the command line
    Removal of bash binary dependency now Code42’s CrashPlanEngine script is no longer used
    Removal of nice binary dependency, using BusyBox equivalent renice
    Unified ARMv5 and ARMv7 external binary package (armle)
    Added support for Mindspeed Comcerto 2000 CPU (comcerto2k – armhf) in DS414j
    Added support for Intel Atom C2538 (avoton) CPU in DS415+
    Added support to choose which version of CrashPlan PROe client to download, since some servers may still require legacy versions
    Switched to .tar.xz compression for native binaries to reduce web hosting footprint
  • 0027 20/Mar/14 – Fixed open file handle limit for very large backup sets (ulimit fix)
  • 0026 16/Feb/14 – Updated all CrashPlan clients to version 3.6.3, improved handling of Java temp files
  • 0025 30/Jan/14 – glibc version shim no longer used on Intel Synology models running DSM 5.0
  • 0024 30/Jan/14 – Updated to CrashPlan PROe and added support for PowerPC 2010 Synology models running DSM 5.0
  • 0023 30/Jan/14 – Added support for Intel Atom Evansport and Armada XP CPUs in new DSx14 products
  • 0022 10/Jun/13 – Updated all CrashPlan client versions to 3.5.3, compiled native binary dependencies to add support for Armada 370 CPU (DS213j), now updates the new javaMemoryHeapMax value in my.service.xml to the value defined in syno_package.vars
  • 0021 01/Mar/13 – Updated CrashPlan to version 3.5.2
  • 0020 21/Jan/13 – Fixes for DSM 4.2
  • 018 Updated CrashPlan PRO to version 3.4.1
  • 017 Updated CrashPlan and CrashPlan PROe to version 3.4.1, and improved in-app update handling
  • 016 Added support for Freescale QorIQ CPUs in some x13 series Synology models, and installer script now downloads native binaries separately to reduce repo hosting bandwidth, PowerQUICC PowerPC processors in previous Synology generations with older glibc versions are not supported
  • 015 Added support for easy scheduling via cron – see updated Notes section
  • 014 DSM 4.1 user profile permissions fix
  • 013 implemented update handling for future automatic updates from Code 42, and incremented CrashPlanPRO client to release version 3.2.1
  • 012 incremented CrashPlanPROe client to release version 3.3
  • 011 minor fix to allow a wildcard on the cpio archive name inside the main installer package (to fix CP PROe client since Code 42 Software had amended the cpio file version to
  • 010 minor bug fix relating to daemon home directory path
  • 009 rewrote the scripts to be even easier to maintain and unified as much as possible with my imminent CrashPlan PROe server package, fixed a timezone bug (tightened regex matching), moved the script-amending logic from to with it now applying to all .sh scripts each startup so perhaps updates from Code42 might work in future, if wget fails to fetch the installer from Code42 the installer will look for the file in the public shared folder
  • 008 merged the 14 package scripts each (7 for ARM, 7 for Intel) for CP, CP PRO, & CP PROe – 42 scripts in total – down to just two! ARM & Intel are now supported by the same package, Intel synos now have working inotify support (Real-Time Backup) thanks to rwojo’s shim to pass the glibc version check, upgrade process now retains login, cache and log data (no more re-scanning), users can specify a persistent larger max heap size for very large backup sets
  • 007 fixed a bug that broke CrashPlan if the Java folder moved (if you changed version)
  • 006 installation now fails without User Home service enabled, fixed Daylight Saving Time support, automated replacing the ARM symlink which is destroyed by DSM upgrades, stopped assuming the primary storage volume is /volume1, reset ownership on /var/lib/crashplan and the Friends backup location after installs and upgrades
  • 005 added warning to restart daemon after 1st run, and improved upgrade process again
  • 004 updated to CrashPlan 3.2.1 and improved package upgrade process, forced binding to each startup
  • 003 fixed ownership of /volume1/crashplan folder
  • 002 updated to CrashPlan 3.2
  • 001 30/Jan/12 – initial public release
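The PID-file status check mentioned in the 0028 changelog entry (replacing Code42's grep-on-ps approach) can be sketched generically. This is an illustration of the technique using a stand-in daemon, not the package's exact code:

```shell
PID_FILE=/tmp/example-daemon.pid

# Launch a stand-in "daemon" and record its PID
sleep 30 &
echo $! > "$PID_FILE"

# Status check: the PID file exists and the process still accepts signal 0
if [ -f "$PID_FILE" ] && kill -0 "$(cat "$PID_FILE")" 2>/dev/null; then
  STATUS=running
else
  rm -f "$PID_FILE"   # stale PID file: clean it up
  STATUS=stopped
fi
echo "$STATUS"

# Tidy up the stand-in daemon
kill "$(cat "$PID_FILE")" 2>/dev/null
rm -f "$PID_FILE"
```

Unlike grepping `ps` output, this cannot match an unrelated process with a similar command line; the only failure mode is PID reuse, which the stale-file cleanup keeps rare in practice.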

5,823 thoughts on “CrashPlan packages for Synology NAS”

  1. Martin Kleinman

    I also moved away from Crashplan towards iDrive. Going to test that for a while and if backup from my Synology works as expected I will move my other server to iDrive as well.

    No more Crashplan here for me.

    1. Singularity

      iDrive was awful… I have about 350 GB and it constantly disconnected. I’d have to manually restart it and it would start the whole process over, so it would never finish. I hope you have better luck but the app sure seems like a super hackjob port.

    2. Gert Serneels

      Hi, I’m also testing out IDrive at the moment and everything works perfectly!
      It uses very low resources, a lot less than Crashplan did: about 450k of memory and 10% CPU on my DS715, and that’s while I’m backing up large video files!

      It’s kindly priced, $69.50 for 1TB, and the first year you can get 75% off.
      The only drawback is if you need over 1TB of space: you have to upgrade to the 10TB plan, which costs a lot more at $499.50.

      You always have access to your files, via the web or via the app.
      And they support Synology!
      Anybody else with good or bad experiences with IDrive?

      1. Dirk Gaudaen

        I also moved to IDrive a few months ago. I also found out that the load on my Synology was much less than Crashplan. However, their package also suffers from a lot of bugs. I saw the package break several times after Synology updated DSM. This means waiting until an update is published.
        Currently, in the latest version, I detected several problems with the scheduled archiving option.
        I must say that their support is helpful in solving the problems. They have actually been debugging a few hours remotely on my system to find out what went wrong. The result will be a new patch to be released within the next weeks.
        A package directly supported by Synology would of course be the best.

  2. Sébastien T.

    As a non-intel Synology user (DS413) I have also decided to stop Crashplan and to give a try to Backblaze via Cloud Sync.

    All in all, it will be a relief because I was already so tired to struggle with Crashplan on every single update…

    Thank you a lot Patters for all your effort to maintain this package.

  3. Christopher

    I’m now having an odd issue where the Crashplan client doesn’t launch. It seems to be stalling on the start up screen. Anyone else having this issue?

    Crashplan is running on the Synology just fine.

    1. Alon Gonen

      I ran into the same problem: the package is no longer available under Community, along with a bunch of other packages. Is this a new Synology policy?

  4. Peter Kuiters

    Quick question, I cannot see the package on the page. Which information should I provide before I can ask the question “what am I doing wrong?”. :-)

  5. Peter Kuiters

    Hi, I think I have an issue. I tried to un-install and re-install. Part one worked fine, but Crashplan does not show up for installation in the package center. I am running version DSM 6.0.2-8451 Update 1

    Any thoughts?

    1. Ralf

      Have the same issue and can’t find a solution… even reinstalled everything etc. Seems like a bug with this version of DSM?… Is there a way to download the package and add it manually?

  6. TD

    Thanks Singularity

    I have renamed the upgrade folder to upgrade.tmp and created a file called upgrade, which should prevent the client from trying to upgrade.

    Let’s see how long this keeps me going for. It will be interesting to see if somehow they “force” your archive server side to only work with 4.8.0 as patters says.

  7. Yves Soers

    Today I received a new update which works on 1 NAS but not on another one.

    I have version 4.8.0-0042
    I used to have version 4.8.0 GUID 671960287316279396
    Today new version arrived 4.8.0 GUID 1435813200480

    Log says upgrade installed.
    Crashplan stopped version 4.8.0 GUID 671960287316279396
    synology extracting upgrade from ../1435813200480_316.jar

    Tried to start Crashplan and got the message “Could not start package”.
    Tried it again, no error message now, but package doesn’t start anymore.

    On my second NAS package started without any problem retrying it after the message “could not start package”. However it still shows that the ‘old’ version is running.

  8. Per


    I downgraded my DS715 (non-Intel CPU) to 4.7.0 and replaced the /var/packages/CrashPlan/target/upgrade directory with a single read-only file. This appears to prevent the auto-update from happening.

    Cookbook for the directory:
    cd /var/packages/CrashPlan/target
    mv upgrade upgrade.tmp
    touch upgrade
    chmod 444 upgrade

    This at least is a stop gap measure until Crashplan disconnects you if you’re not on 4.8…

    1. GaryS

      Thanks Per. That’s helpful.

      To all:
      Please post any alternatives to CrashPlan you have had success with.

      1. ccanzone

        Hello Everyone,
        I’m going to test Amazon Cloud Drive. It costs about the same as CrashPlan (U$ 60 / year) and it’s unlimited too. Since one of my major applications on Synology is Plex Media Server, and Plex is launching a cloud service (that uses Amazon Cloud) I’m going to give it a try. Backups can be easily done through CloudSync.

      2. Kevin

        Additionally, Synology has the HyperBackup package that supports Amazon Cloud. It is in a beta version, and I’ve just started setting it up, but it might work better than CloudSync

      3. Ian H

        I’ve just tried BackBlaze which seems good enough.
        – No mobile app to review my files.
        – pay for what you use, rather than unlimited. I’m only using about 350 GB, which is still cheaper than what I paid for CrashPlan per year.
        + faster than crashplan

    2. Dimi

      Do you think this will work on 4.8? Mine is upgraded successfully to 4.8 but I really want to stop this from screwing itself up every time!

    3. Min

      That’s really helpful. Thank you.

      Is there any way to prevent the desktop client from upgrading in the future as well? Mine is still on 4.7.0 atm but I imagine it will soon be forced to upgrade to 4.8.0 and disconnect from the headless client.

      1. TD


        I had to downgrade Crashplan on my DS413 from 4.8.0 to 4.7.0 to get this working again. While trying to do this I had updated the desktop client manually to 4.8.0 and I have left it like that. Seems to have no problem connecting to the 4.7.0 backend service.

  9. Thor Egil Leirtrø

    Everything stopped after being hit by a new update from CP again yesterday.
    I tried uninstalling and then reinstalling the patters package – with system default java.
    Had to fix the port number and GUID in the .ui_info – and re-enter the crypto key. Now it’s synchronizing file info – fingers crossed.

    DS412+, 2GB RAM, DSM 6.0.2-8451 Update 1

    1. Thor Egil Leirtrø

      The thing went into endless restart loop. Found it was caused by USR_MAX_HEAP having been set back to default. Fixed the value (1536M for 2GB RAM) – operation now seems to be back to normal.

  10. cesar dlg

    I’m on a Synology DS1513+. I can see the files I want to back up using the headless client but it won’t back them up. Despite what I have checked, it only says 1 file, 0MB.

  11. Jon Etkins

    Annnnd, they broke it again. All was going great after installing the latest upgrade package, then this:

    I 10/05/16 05:11PM Starting backup from Jon’s PC: 81 files (67.30MB) to back up
    I 10/05/16 05:12PM Completed backup from Jon’s PC in < 1 minute: 79 files (66.90MB) backed up, 39.80MB received @ 11Mbps
    I 10/05/16 05:12PM Downloading a new version of CrashPlan.
    I 10/05/16 05:13PM Download of upgrade complete – version 1435813200480.
    I 10/05/16 05:13PM CrashPlan has downloaded an update and will restart momentarily to apply the update.
    I 10/05/16 05:13PM Installing upgrade – version 1435813200480
    I 10/05/16 05:13PM Upgrade installed – version 1435813200480
    I 10/05/16 05:14PM CrashPlan stopped, version 4.8.0, GUID 684266357451653131
    I 10/05/16 05:14PM [Default] Stopped backup to CrashPlan Central in 1.2 hours: 0 files (938.50MB) backed up, 174.10MB encrypted and sent @ 243Kbps (Effective rate: 764.1Kbps)
    I 10/05/16 05:14PM – Reason for stopping backup: The backup destination was disconnected
    I 10/05/16 05:14PM Not ready for backup from Jon’s PC. Reason: The destination is not available

    And it now fails to restart again, with the same lack of information in the log files that indicates it's probably failing to find Java again. Guess I'll try reinstalling once more, and I'll try using Patters' own Java this time since he's fixed the previous bug that was preventing it from working on x64 boxes.

    1. Kipik

      This fix works:
      Copy cpio to /bin (this is probably necessary after DSM’s firmware is upgraded):
      sudo cp /var/packages/CrashPlan/target/bin/cpio /bin/cpio
      Uninstall CrashPlan using DSM Package Manager
      Make sure you have Java8 from Synology installed. If not, install it.
      Reinstall CrashPlan and select “Default System Java Version” in the installer options
      Let it run for a while, until it stops (may be necessary to repeat start/stop). If you check the logs via Package Manager it will have tried to upgrade.
      Now SSH into the NAS and edit the file /var/packages/CrashPlan/target/install.vars, redirecting to a correct Java package:
      Synology official Java 8 Package:
      Start the package and wait. You should now see it running 4.8.0, via logs.
      After fixing .ui_token you’ll have to login on your client app.
      If you were using a custom Java heap size, don’t forget to change it again.
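The install.vars edit in the steps above can be scripted. This is a sketch under assumptions: the Synology Java8 package path shown is a guess (verify it on your own NAS before using it), and the commands deliberately work on a scratch copy in /tmp so nothing real is modified until you repeat the `sed` against the actual file with sudo:

```shell
# Assumed locations - both are hypothetical examples, check your own system
VARS=/var/packages/CrashPlan/target/install.vars
JAVA8_JRE=/var/packages/Java8/target/j2sdk-image/jre

# Work on a scratch copy; fabricate a representative line if the real file is absent
cp "$VARS" /tmp/install.vars 2>/dev/null \
 || echo 'JAVACOMMON=/volume1/@appstore/CrashPlan/jre-syno/bin/java' > /tmp/install.vars

# Point JAVACOMMON at the system Java 8 binary
sed -i "s|^JAVACOMMON=.*$|JAVACOMMON=${JAVA8_JRE}/bin/java|" /tmp/install.vars

# Confirm the change
grep '^JAVACOMMON=' /tmp/install.vars
```

Once the output looks right, apply the same `sed` to `$VARS` itself (with sudo) before starting the package.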

  12. Gary Cohen

    During the latest upgrade, I too, am not getting the package to run. In the upgrade logs I’m seeing: Could not find JAR file /volume1/@appstore/CrashPlan/bin/../lib/com.backup42.desktop.jar

    (There is no lib folder in the CrashPlan directory)

    And then when running the package, the engine_error.log reports: Error: Could not find or load main class com.backup42.service.CPService

    I’m running a DS415+ with DSM 6.0.2-8451 Update 2.

    1. John

      I had the same problem: it seems that a failed update deleted parts of CP.
      I removed the package, installed again without running, edited the vars file, ran, waited, stopped and ran again. I could now login and continue normally.

  13. Mark

    Hi Patter, thanks for all the help over the years. As a non-Intel Synology user, looks like this is the end of the road for me. I still have a couple of years left on my subscription, so I’m working on setting up my Windows 10 PC to map the Synology network drives. So far it’s going well, I have adopted my Diskstation Crashplan account, mapped the folders as network drives, and am now waiting for all the information to sync up so I can remove the references to the old Diskstation folders.

  14. Hal Sandick

    So let me ask this question. I’m on a DS213+ (non-Intel), CrashPlan 4.7. Based on Synology support information, I assume that CrashPlan will not be turning off support for 4.7 for a while. Therefore, can I keep running for the next few months? I plan to upgrade to a Synology Intel server but would like to put it off until 1/17.



  15. Tom


    I am running the CrashPlan patters package on my Synology 214+ with a non-Intel CPU.
    Can anyone confirm that there will be no more support for non-Intel CPUs, and that those people will have to look for other solutions (with Syno support, like iDrive, …)?

    ps : patters, thank you for all the hard work so far.
    kind regards, Tom

  16. Mike Hardy

    @patters – thank you so much for supporting CrashPlan on Synology for so long. It was a struggle for the users, and for you as well I am sure, but it could always be made to work because of your effort and the support here.

    I just sent you £20 via PayPal donation as sincere gratitude.

    I have also done an initial backup to BackBlaze B2 using the officially supported CloudSync integration from Synology. Now that BackBlaze is no longer beta, and they seem nearly feature-identical w/CrashPlan – including restore scenarios – it was worth a shot. Everything tested out fine and official support is a beautiful thing in the operations world, so I’m uninstalling this package. I deleted my Crashplan account, leaving them a friendly note that they have a good offering but the lack of official support for headless clients and open APIs means it’s time to go.

    Good luck to all of you guys with the next Crashplan auto-pushed version upgrade :)

      1. Rex

        Hi Patters
        I have a problem. I had updated my 214play a few days ago to CrashPlan 4.8 and all went fine. It continued to back up several PCs without any problems for several days. I just updated my 214play to the latest Synology 6 release. I then noticed that CrashPlan was no longer working. I tried to restart it but it just stops every time. I noticed in the logs that it just updated CrashPlan to 4.8.0?? I thought I did that several days ago with your package? Did I update with your package before I should have, because I had not really received the 4.8.0 update from CrashPlan?
        If so how do I re-install your package? Or is there another way to fix it.

        Thanks for your help…

    1. davidjpatrick

      Hi Mike – how did you set up your buckets? I’ve tried uploading using the free 10GB and all went through OK – when downloading, though, it seems the files are corrupt… Have you tried a download, and did you use a private or public bucket?
      Thanks Dave

      1. Dimi

        Wow, a very good point about restore testing which I’m sure is often missed – even with Crashplan I’ve never tried a restore! I will test a restore now and see what happens. My bucket is just one and I have created multiple folders in that bucket for the various shares I have to back up. If you sync to the root of the bucket you won’t be able to add new folders to the sync job. To add new folders you have to create a new sync job and add the new folder. Also I found that the bucket names have to be globally unique, so you can’t have documents or photos or videos because they are taken… Apparently Amazon do it the same way, which I find a bit weird!
        I will run a restore and let you know how it goes.

      2. davidjpatrick

        Ok, it’s because I had encryption turned on! So, you can’t download a file through the Backblaze file browser with encryption on – only the sync app can do this, and only if you sync in both directions, i.e. if you delete a file locally it will be restored to the same location – only when using a bi-directional task. It would appear that there is no restore functionality in the sync app – it does what it says: syncs files and folders, which isn’t really a backup solution, i.e. multiple copies of files / restore to another location etc…
        Seems a bit dangerous to me – if you delete files/buckets from the backblaze website would it delete everything as well locally on the NAS?
        I will keep testing but it really doesn’t seem a good option to upload files without encryption if you want to restore from the web
        Unless there’s a cloud browser app on the NAS but not seen one yet.

      3. sebweaver

        You can change the sync mode to “upload only”. Thus, your local data won’t be harmed if you mess something up on the Backblaze website. You can also decide what happens to local files you have deleted.

        If you want to restore encrypted data from the web, you can use their API or their CLI to download and decrypt data with the private key that Cloud Sync gave you when you set up the encryption. There’s a post about that which points to samples on GitHub:

        It’s not as trivial as with a GUI, but it’s possible, at least.

        I did not try for now.

      4. Dimi

        Right, I now have my doubts about Backblaze. It is indeed cheaper, for me at least, than Crashplan, support is top notch, used it already, it has a supported client for Synology, so none of the Crashplan cra@p.
        BUT this is cloud storage, not backup. It syncs data rather than backing it up. As much as I try to find a way around it, I keep coming back to backup vs storage. I don’t need cloud storage; I have plenty in my Synology and do not need any public ones. However I do need backup! Hyper Backup has some interesting vendors but they are mostly enterprise-level, and pricing is just not given in human-readable format, or not given at all… I also wanted to try Glacier and went to open an account, and got to a point where it was asking for a credit card. Sure, I will give them my credit card, but I want to try it first, see how it works and then commit… so… it pains me to say this but I am considering sticking with Crashplan, with their *****ed-up client and non-existent support, and the great support that we all get from Patters! The search continues for a proper solution – a backup one, not a storage one. I am getting to the point where I would just stick another Synology in my mom’s house and back up that way. Surely in this digital world Crashplan can’t be the only option?!?!?!

      5. Richard

        Maybe amazon cloud drive in combination with hyper backup from Synology is an option for some of us?

      6. Monte

        Yeah, I agree. Crash plan is a true backup compared to others for the cost. It’s great that Patters is helping so much with this. It’s a total joke that these other companies are putting together packages and support that works flawlessly with synology units. Seems like the CrashPlan leadership has no vision at all regarding the potential customers they would gain for supporting the synology units. I bet they could put something together overnight to fix the issue if they wanted to. The problem is that they don’t want to. It’s a joke.

      7. jerome

        Not just Synology users but *all* NAS users! Since it’s clear on their site that CP is “unsupported” on NAS platforms. I find that a bit ironic to begin with… plus, I don’t know of many companies who explicitly go out of their way to list “unsupported platforms” — that aside, I agree it is a great product, but their product management team needs to get it together.

      8. GaryS

        To All: Amazon Cloud Drive and Synology Hyper Backup seem to be working just fine. No problems so far. Synology 211j non-Intel NAS, 300 GB so far. Test restores work so far.

  17. SR

    I uninstalled CrashPlan so I could try a reinstall.
    Only now CrashPlan doesn’t show up anymore in the Community section.
    Other PCLoadLetter packages are still visible.

    Version: DSM 6.0.2-8451 Update 2

    Any thoughts?

    1. Jon Etkins

      Judging by the number of posts wondering why CrashPlan has disappeared from Package Manager – presumably from non-Intel NAS users who have failed to find the answer buried in this thread (and who can blame them, really) – may I suggest that you reinstate a package for non-Intel platforms which simply pops up a message explaining why it’s no longer available?

  18. thaBadfish

    I was alerted over the weekend that our work server had not reached CrashPlan’s server in 5 days, because our Intel Synology package had been updated for 4.8.0. Earlier today, I reinstalled the CrashPlanPro package, and I was able to stop then start it successfully, but it wouldn’t actually do anything. It had been a while since I needed to access CrashPlanPro via the client installed on a computer, and the original computer I used to set it up had since been reformatted. Checking the application downloads on CrashPlan’s website, I saw that my account was still on 4.7.0. Very recently I checked again and it had been updated to 4.8.0, so all I had to do was download that client, set it up via the SSH instructions above, and everything started working again. It obviously had to rescan our entire server, but after rescanning, the backup is running and everything is good. Intel Synology running 4.8.0 CrashPlanPro and Windows 10 x64 running the 4.8.0 client.

  19. robert

    Hi guys,

    After the latest changes I cancelled my CrashPlan subscription; now using Amazon Cloud Drive, Hyper Backup for the NAS, and Duplicati for Windows – works well so far.

    Hope that helps, and thanks @patters for all the good work

  20. B. Goodman

    I was running Patters’ 4.7-0041 and Java 8 1.8 101 0041 just fine until today. Now CrashPlan has stopped running after failing to auto-upgrade. So I tried to accept Patters’ CrashPlan 4.8-0042 upgrade, selecting “dedicated installation of Java 8”, but it wanted Java 8 in the “public” folder (despite Java 8 101 already running).

    So can I just dump Java 8 101 into the public folder again and continue? Or go with the Synology default Java? Or something else? I’m on a DS412+ with default RAM.

    1. Singularity

      @BG – I have a 412+ running 5.2 and I’ve had much better luck using System Java vs the Integrated one. I’m currently running Java Manager and Java 8 102 and it’s working great!

      Good luck!

  21. guitargod

    Hi everyone! CrashPlan worked so well for me on my DS215j… I backed up the NAS to an external drive which I had attached to my PC at work.

    Can anyone recommend an alternative? How could I achieve the same easy way to back up my NAS on a daily basis? I thought of rsync or WebDAV as alternatives, but unfortunately I can’t run an rsync daemon on my network at work… CrashPlan seemed to go over an HTTP port, which worked perfectly :-/

    Thanks in advance!

    1. Marc Weinstock

      I found an old netbook in my closet, upgraded it to Win10, and the only app on the netbook is CrashPlan. Then “net use Z: \\Nas\Drive” to map my shares and point CrashPlan at the NAS shares. This has been working nicely for 3 days now. Added benefit: the NAS now hibernates.

  22. Fred

    Well this is certainly annoying. Looks like I either have to change backup services or upgrade my NAS. I’ve been a really happy Crashplan customer for several years now, so I don’t want to give it up. On the other hand, my DS715 is exactly one year old and I had zero intention of upgrading again.

    Right now I am leaning towards moving to a DS716+II as the best solution for me. Swapping in a new NAS is pretty easy (I’ve upgraded from a DS211j->DS213->DS715) and evaluating new backup solutions sucks. I should be able to sell my DS715 and buy the DS716+II for less than $100 net. Seems better than evaluating/testing new backup providers.

  23. nataylor999

    I’m having some trouble setting the Java heap size. I stop the CrashPlan package, then edit /var/packages/CrashPlan/target/bin/run.conf to change the max heap from 1024 to 2048. However, when I restart the package, the max heap gets reset to 1024 and the run.conf file seems to get overwritten, showing a max heap of 1024 again.

    What am I doing wrong?

    1. patters Post author

      Look at the third bullet point in the Notes section – you need to edit syno_package.vars instead, because run.conf can sometimes be overwritten by Code42.
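
      For what it’s worth, a minimal sketch of that edit – assuming the setting is named USR_MAX_HEAP, as in patters’ packages; check your own syno_package.vars. The demo below works on a local copy of the file; on the NAS the real file is /var/packages/CrashPlan/target/syno_package.vars and editing it needs root (and the package stopped first):

```shell
# Demo against a local copy of syno_package.vars.
CONF=./syno_package.vars
echo 'USR_MAX_HEAP=1024M' > "$CONF"   # simulate the shipped default
# Raise the Java max heap from 1024 MB to 2048 MB:
sed -i 's/^USR_MAX_HEAP=.*/USR_MAX_HEAP=2048M/' "$CONF"
cat "$CONF"
```

      Because the package reads this file at startup, the new heap size should survive restarts, unlike a direct edit of run.conf.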

      1. John

        My syno_package.vars only contains the following after a clean install:

        What is the expected format of this file, if I want to specify more memory for CP?

      2. John

        Never mind – it seems that an update deleted a lot of the CP files and then failed.
        A fresh install provided a fresh syno_package.vars to start with along with a working CP.

  24. Aaron

    Is there any way your upgrade could look at /var/packages/CrashPlan/target/syno_package.vars and not overwrite it if a custom value was set before?

  25. Jon Etkins

    Following other folks’ suggestions, I’m trying out Hyper Backup to Amazon Drive. Setup was a snap, $60/year for unlimited storage (after a 3-month free trial!), and it’s a bazillion times faster than CrashPlan ever was, to the point where it’s actually maxing out my 3Mbps uplink. To get my first terabyte to CrashPlan, I used their (now no longer offered) hard drive upload service, but at this rate I’ll have a full backup on Amazon in just a few days.

    I’ll probably persevere with CrashPlan until my (recently renewed) annual subscription expires or Patters or I give up on it, whichever comes first. After that, I’ll probably go back to CrashPlan’s free service level to back up my PCs to a mapped drive on the NAS, and back up those backups to Amazon Drive from the NAS.

    1. Min

      I’m currently deciding between Amazon Drive and Backblaze. Could anyone help analyse the pros and cons of the two? Much appreciated.

      1. Jon Etkins

        I’m only aware of folks using Backblaze B2 storage as a destination for Cloud Sync, which IMO is NOT a backup solution. Backup provides the ability to retain multiple copies of a file, and to retrieve it even if the original is deleted, whereas Cloud Sync – as the name suggests – merely synchronizes between two storage locations. If you delete a file, it’s deleted from the sync’ed location as well, and if the original gets corrupted, that corruption gets sync’ed too – both of which defeat the purpose of backup.

        However, if there’s a native Backblaze backup client that runs on the NAS, as Crashplan does (or used to, in the case of non-Intel folks), then I’m sure folks would love to hear about it.

      2. Dimi

        You can encrypt data with Cloud Sync and B2; you can also set it to do a one-way sync only, and B2 retains a copy of deleted files for 30 days, but snapshots you have to do manually. I did give B2 a go, but the fact is that this is cloud storage and NOT backup, as pointed out by some already.

        Now trying Amazon Drive, but at 55 GBP a year it’s rather expensive for my 130 GB of data… So for now I think I will have to stick with CrashPlan, or just stick another NAS at my work or my mum’s place and be done with this never-ending backup saga!

      3. Jon Etkins

        Another reason to use a real backup client vs Cloud Sync: encryption. Personally, I don’t want unencrypted versions of my personal files out there in anyone’s cloud.

    2. jamesdeanreeves

      I looked at Backblaze B2 and it just wasn’t an option for my 4TB – that would have been as pricey as Amazon Glacier storage, which I dumped after discovering CrashPlan.

      I’m trying Amazon Drive now, but unlike Jon E’s experience, it looks like the connection to Amazon is being throttled. Testing my max upload speed shows it’s 3.5Mbps, while at the same time I’m only uploading to Amazon Drive at 700-900Kbps. It took me almost 8 hours to upload 15GB last night.

      Looking at the Amazon Drive web interface, it appears you can’t really see the files/dirs you backed up – they all show as something like ’71.bucket.2′, which is totally useless. When you try to download a file off the web interface, it downloads exactly as ’71.bucket.2′ – useless. I’m assuming you have to have the app installed on your computer to read the files as you expect them to appear.

      You can however see the files and download them as you would expect off CrashPlan’s Web interface.

      It seems to me that Amazon Drive still has a lot of growing to do before it gets to CrashPlan’s level with price and features. Yes, it is a GD PITA keeping CP running on a headless NAS, but for now I’ll be sticking with CP.

      As a side note, if you have Amazon Prime, you have unlimited photo storage on Amazon Drive for free.

      1. Jon Etkins

        The reason you can retrieve files from CrashPlan’s web site is that they own both the storage and the tool that put the files there. Amazon Drive is simply cloud storage that can hold any files you care to put there, so if you use a backup tool like Hyper Backup, which compresses (and dedupes?) your backups and indexes multiple versions of your files, then yes, you will have to use the same (or a compatible) tool to restore them.

      2. Alex

        I noticed that Synology offers a desktop app called Hyper Backup Explorer to browse and decrypt data from a Hyper Backup repository. That may be your best bet.

      3. jamesdeanreeves

        Thanks for the info, Alex. I checked out Hyper Backup Explorer and found that you can only view backups that reside on your local network, not on Amazon Drive. To browse a backup, Hyper Backup Explorer needs a *.bkpi file, which is a format proprietary to Synology’s Hyper Backup.

        Jon is correct in that Amazon Drive is only cloud storage and doesn’t care what type of file is put out there. Hyper Backup creates its own proprietary file types for compression, encryption, etc., and Amazon Drive simply stores the data exactly as Hyper Backup would store it in a local backup.

        One thing I noticed with Hyper Backup is that it doesn’t give you the option of restoring individual files through the GUI – only entire folders. It also does not give you the option to restore to a different location, which would be highly useful.

        You can however use Hyper Backup Explorer to accomplish the above. But since you can’t use Hyper Backup Explorer on an Amazon Drive archive, neither it nor Amazon Drive meets my needs.

        So there are definitely trade-offs with both CP and Hyper Backup/Amazon Drive, and each will fit some people’s needs better than the other. With that said, the search for the Backup Holy Grail continues.

      4. Jon Etkins

        “One thing I noticed with Hyper Backup is that it doesn’t give you the option of restoring individual files through the GUI – only entire folders. It also does not give you the option to restore to a different location, which would be highly useful.

        “You can however use Hyper Backup Explorer to accomplish the above. But since you can’t use Hyper Backup Explorer on an Amazon Drive archive, neither it nor Amazon Drive meets my needs.”

        Not quite true, James. You can open the built-in Backup Explorer from the Hyper Backup control panel in DSM (the button next to “Back up now”), and from there you can drill down to individual files and, by right-clicking them, select “Copy to …”, “Restore”, or “Download”. “Copy to …” is poorly named, because what it actually does is restore to a different location, and it’s available for folders as well as files. QED.

      5. jamesdeanreeves

        Thanks, Jon – I missed that button! I was using the “Restore” button in the lower left and selecting “Data”.

      6. Alex

        “I checked out Hyperbackup Explorer and found that you can only view backups that reside on your local network and not on Amazon Drive. To browse a backup, Hyperbackup Explorer needs a *.bkpi file, which is a proprietary format to Synology’s Hyperbackup.”

        You can use one of the third party tools available to map S3 to a local network drive to overcome this problem.

  26. Jon Etkins

    Screen shot from my firewall monitor, showing relative upload bandwidth between CrashPlan (before 10/14) and Hyper Backup to Amazon Drive (from 10/14 on):

  27. Cus Tomer

    Do you see any chance that support for non-Intel CPUs will come back, or do I really have to change my online backup service provider?

  28. nh_ets

    Re-installed to 4.8 and got the client to launch locally (4.7 was working great on my 1513+). But after making the file changes in the terminal, when I launch the client on my Mac it just bounces and then goes away, never successfully launching! What did I do wrong?

  29. frold

    I might be a little slow – but is there any solution for us non-Intel users to get CrashPlan to work?

    I’m on a DS212j, which seems to have a Marvell ARM 88F6281 single-core 1.2 GHz CPU.

    What do I do?

    1. George

      If you do not need to connect to CrashPlan Central for cloud backup (I use my non-Intel NAS for local backup of clients), this approach to preventing CP from upgrading has worked for me so far.

      Posted by Per on Oct 6

      cd /var/packages/CrashPlan/target
      mv upgrade upgrade.tmp
      touch upgrade
      chmod 444 upgrade

      So you could install 4.7 from patters’ archive and then apply this change.
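
      The quoted steps, gathered into one sketch – shown here against a scratch directory so it can be tried safely; on the NAS the real path is /var/packages/CrashPlan/target (run as root, with the package stopped first):

```shell
# Simulate the package directory, then neutralise the upgrade folder.
mkdir -p target/upgrade
mv target/upgrade target/upgrade.tmp   # keep the original directory around
touch target/upgrade                   # a plain file now occupies the name
chmod 444 target/upgrade               # read-only, so the engine cannot replace it
ls -ld target/upgrade target/upgrade.tmp
```

      The idea is simply that the engine expects “upgrade” to be a writable directory; with a read-only file squatting on that name, the auto-upgrade cannot stage its files. Reverse the mv to undo it.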

    2. frold

      Thanks for the reply. Meanwhile I did the following:

      Un-installed CrashPlan 4.7.0-0041 (which used the automatic Java installation).
      Restarted the NAS.

      Installed Java SE Embedded 7 1.7.0_75-0041.
      Restarted the NAS.
      Reinstalled CrashPlan 4.7.0-0041 (this time using the Java 7 I had installed – the second dialogue box).
      Restarted the NAS.
      Did the token value thing you can read about in the package description.

      So far I have been up and running for more than 12 hours.

      We’ll see if I’m forced to upgrade; if so I will try your patch. So far it works without it.

      I’m testing Backblaze using Cloud Sync in DSM – seems to be a good solution as well (half the price of CrashPlan with my 450 GB). But I have an issue where some folders with images aren’t uploaded to Backblaze. So I opened a support ticket with Backblaze, which recommended I contact Synology, so I did. I’m awaiting their answer.

      I don’t feel safe about a solution that only backs up some of my files :S

      1. davidjpatrick

        I think there are known issues with Cloud Sync and Hyper Backup with cloud providers. I’ve been looking at the forums and have tried all sorts: Hyper Backup to Amazon S3 (yes, I’ll have to take the hit on pricing if I go this route), Hyper Backup to Amazon Cloud Drive (paid yearly, about the same price point as CrashPlan – 3 months’ trial), Cloud Sync to Backblaze, etc. I haven’t yet come to a conclusion on any of them. Hyper Backup to Amazon Cloud Drive is still going – it’s slow – but Hyper Backup overcomes file size restrictions by chunking up large files, and you do get versioning. Cloud Sync isn’t a backup solution – it does as it says – and if you mis-configure the sync task you could end up with files being removed from your NAS, i.e. changes could be sync’d from Backblaze back to your NAS; you can do upload-only to overcome this. I’ve now seen a file decryption tool on the Synology website where you can download individual files and decrypt locally. If you encrypt your uploads – which most people are trialling – then you need a mechanism to decrypt if you need to restore files. Seems quite a long-winded route to me.
        I’ve also seen a tool where you can mount cloud drives – NetDrive – 30 days’ trial but quite costly – and in conjunction with Hyper Backup Explorer (for Windows) you can in theory restore to another machine, not just the NAS (albeit at a cost for NetDrive). For me this is OK, as cloud backups are the offsite solution – primary backup is to a local USB disk.
        Have a look at some of the forum posts for both Cloud Sync and Hyper Backup – there’s a lot of info out there, it just takes time to find…
        Another downside to Hyper Backup is it will only perform one task at a time – if you use it to back up locally and to the cloud, one task will be waiting, so the initial sync needs to be broken up into multiple tasks if you have a lot of data. Hyper Backup to USB and CrashPlan to the cloud was the best solution – it worked, was stable, and never ever failed due to file sizes, file types, etc.
        My search is still ongoing to find a native app – I do have an Intel Linux box in my garage with mounts to the NAS, with CrashPlan still happily working away, but my ideal would be to turn that off at some stage.

      2. frold

        Regarding Backblaze, it seems to have one advantage: when you restore files it keeps the creation date of each file, as you download the files as a zip.

        I have tons of images and videos of my kids and family (450 GB), and they are all sorted by file creation date. If I lose my HD and have to restore the files, I would like to keep the creation dates. Backblaze does so!

        Any other solutions that do this?

  30. steve

    Hi guys. Based on patters’ info, can I confirm that the newer CrashPlan is no longer supported on ARM-based platforms such as my DS414? Frankly, I’m very tired of CrashPlan breaking all the time after updates, so I’m done with CrashPlan.

    1. TD

      I am on a DS413, so yes, it is no longer supported for me. For now I have gone back to 4.7 of CrashPlan, which is working fine. I have also renamed the “upgrade” folder to upgrade.tmp and created a file called upgrade, which prevents CrashPlan from being able to upgrade.

      I don’t know how long this will last, but if it keeps me going for a few months or even a year I will be happy while I evaluate new solutions or move to a newer device.

  32. Matt

    I uninstalled the package and now when I try to reinstall, I get the error “This package is published by an unknown publisher.” I have the PCLoadLetter certificate installed and the Trust Level set to “Synology Inc. and trusted publishers”, so I’m not sure what’s going on. Has anyone else run into this problem or know of a solution? For security reasons, I don’t want to set the Trust Level to “Any publisher”.

  33. Per

    I checked the alternatives to CP but I couldn’t find any with unlimited GB that also never deletes orphaned files.

    So I decided to go with a new 216+II and now I’m back online on version 4.8.

    @patters, made a small donation again to say thanks for all your work.

