CrashPlan packages for Synology NAS

UPDATE – The instructions and notes on this page apply to all three versions of the package hosted on my repo: CrashPlan, CrashPlan PRO, and CrashPlan PROe.

CrashPlan is a popular online backup solution which supports continuous syncing. With it your NAS becomes even more resilient – it could even get stolen or destroyed and you would still have your data. Whilst you can pay a small monthly charge for a storage allocation in the Cloud, one neat feature CrashPlan offers is for individuals to collaboratively back up their important data to each other – for free! You could install CrashPlan on your laptop and have it continuously backing up your documents to your NAS, even whilst away from home.


CrashPlan is a Java application, and difficult to install on a NAS. Way back in January 2012 I decided to simplify it into a Synology package, since I had already created a few others. When I started I used information from Kenneth Larsen’s blog post, the Vincesoft blog article for installing on ARM processor Iomega NAS units, and this handy PDF document which is a digest of all of them. I used the PowerPC binaries Christophe had compiled on his chreggy.fr blog, so thanks go to him. I wanted to make sure the package didn’t require the NAS to be bootstrapped, so I bundled any dependent binaries. Back in 2012 I didn’t know how to cross compile properly so I had to use versions I had found, but over the years I have had to compile my own versions of many of these binaries, especially as I added support for Synology’s huge proliferation of different CPU architectures.

UPDATE – For version 3.2 I also had to identify and then figure out how to compile Tim Macinta’s fast MD5 library, to fix the supplied libmd5.so on ARM systems (CrashPlan only distributes libraries for x86). I’m documenting that process here in case more libs are required in future versions. I identified it from the error message in log/engine_error.log and by running objdump -x libmd5.so. I could see that the same Java_com_twmacinta_util_MD5_Transform_1native function mentioned in the error was present in the x86 lib but not in my compiled libmd5.so from W3C Libwww. I took the headers from an install of OpenJDK on a regular Ubuntu desktop. I then used the Linux x86 source from the download bundle on Tim’s website – the closest match – and compiled it directly on the syno using the command line from a comment in another version of that source:
gcc -O3 -shared -I/tmp/jdk_headers/include /tmp/fast-md5/src/lib/arch/linux_x86/MD5.c -o libmd5.so
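
For illustration, a check along the following lines shows whether a given build of the library exports that JNI symbol (the file paths here are placeholders; point them at the x86 library that ships with CrashPlan and at your own compiled copy):

objdump -x /path/to/crashplan/libmd5.so | grep Java_com_twmacinta_util_MD5_Transform_1native
objdump -x /path/to/compiled/libmd5.so | grep Java_com_twmacinta_util_MD5_Transform_1native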

Licence compliance is another challenge – Code 42 Software’s EULA prohibits redistribution of their work. I had to make the syno package download CrashPlan for Linux (after the end user agrees to their EULA), and then write my own script to extract this archive and mimic their installer, since their installer is interactive.
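
For the curious, the manual equivalent of what the package scripts below automate is roughly the following sketch (the download URL, version number and folder names are the ones used in installer.sh at the time of writing; crashplan-target is just an arbitrary working directory, and your cpio must support the --no-preserve-owner option):

wget http://download2.code42.com/installs/linux/install/CrashPlan/CrashPlan_4.8.0_Linux.tgz
tar xzf CrashPlan_4.8.0_Linux.tgz
#the application files are packed inside a gzipped cpio archive within the extracted folder
mkdir crashplan-target && cd crashplan-target
cat ../crashplan-install/CrashPlan_*.cpi | gzip -d -c - | cpio -i --no-preserve-owner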

 

Synology Package Installation

  • In Synology DSM’s Package Center, click Settings and add my package repository:
    Add Package Repository
  • The repository will push its certificate automatically to the NAS, which is used to validate package integrity. Set the Trust Level to Synology Inc. and trusted publishers:
    Trust Level
  • Now browse the Community section in Package Center to install CrashPlan:
    Community-packages
    The repository only displays packages which are compatible with your specific model of NAS. If you don’t see CrashPlan in the list, then either your NAS model or your DSM version is not supported at this time. DSM 5.0 is the minimum supported version for this package.
  • Since CrashPlan is a Java application, it needs a Java Runtime Environment (JRE) to function. I recommend selecting the option to have the package install a dedicated Java 8 runtime. For licensing reasons I cannot include Java with this package, so you will need to agree to the licence terms and download it yourself from Oracle’s website. The package expects to find this .tar.gz file in a shared folder called ‘public’. If you try to install the package without it, the error message will indicate precisely which Java file you need for your system type, and it will provide a TinyURL link to the appropriate Oracle download page.
  • If you have a multi-bay NAS, use the Shared Folder control panel to create the shared folder called public (it must be all lower case). On single-bay models this is created by default. Give it Read/Write privileges for Everyone.
  • If you have trouble getting the Java archive recognised, try downloading it with a different web browser. Some browsers try to help by uncompressing the file, or by renaming it without warning. I have tried to code around most of these behaviours, but use Firefox if all else fails, as it seems to be the only browser that doesn’t interfere with the file (a quick way to check the file is shown after this list). I also suggest that you leave the Java file and the public folder in place once you have installed the package, so that you won’t need to fetch them again to install future updates to the CrashPlan package.
  • CrashPlan is installed in headless mode – backup engine only. This is configured by a desktop client, but operates independently of it.
  • The first time you start the CrashPlan package you will need to stop it and restart it before you can connect the client. This is because a config file that is only created on first run needs to be edited by one of my scripts. The engine is then configured to listen on all interfaces on the default port 4243.
  • If you previously installed CrashPlan manually using the Synology Wiki, you can find uninstall instructions here.
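
As mentioned in the Java step above, here is a quick way to check from an SSH session what the browser actually saved. The filename shown is the ARM ‘hflt’ build that the installer expects; substitute whichever Java build your model needs, and adjust /volume1 if your public share lives on a different volume:

cd /volume1/public
#an untouched download should still be gzip compressed...
gzip -t ejdk-8u101-linux-armv6-vfp-hflt.tar.gz && echo "still gzip compressed"
#...and should list its contents cleanly as a tar archive
tar tzf ejdk-8u101-linux-armv6-vfp-hflt.tar.gz > /dev/null && echo "archive looks OK"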
 

CrashPlan Client Installation

  • Once the CrashPlan engine is running on the NAS, you can manage it by installing CrashPlan on another computer, and by configuring it to connect to the NAS instance of the CrashPlan Engine.
  • Make sure that you install the version of the CrashPlan client that matches the version running on the NAS. If the NAS version gets upgraded later, you will need to update your client computer too.
  • By default the client is configured to connect to the CrashPlan engine running on the local computer. You will need to edit the file conf/ui.properties in the CrashPlan folder on that computer so that this line:
    #serviceHost=127.0.0.1
    is uncommented (by removing the hash symbol) and set to the IP address of your NAS, e.g.:
    serviceHost=192.168.1.210
    Mac OS X users can edit this file from the Terminal using:
    sudo nano /Applications/CrashPlan.app/Contents/Resources/Java/conf/ui.properties
    (press Ctrl-X, then Y, to save changes and exit)
  • Starting with CrashPlan version 4.3.0 you will also need to run this command on your NAS from an SSH session:
    echo `cat /var/lib/crashplan/.ui_info`
    Note those are backticks not quotes. This will give you a port number (4243), followed by an authentication token, followed by the IP binding (0.0.0.0 means the server is listening for connections on all interfaces) e.g.:
    4243,9ac9b642-ba26-4578-b705-124c6efc920b,0.0.0.0
    port,--------------token-----------------,binding

    Copy this token and use it to replace the token in the equivalent config file on the computer that will run the CrashPlan client, located here:
    C:\ProgramData\CrashPlan\.ui_info (Windows)
    “/Library/Application Support/CrashPlan/.ui_info” (Mac OS X installed for all users)
    “~/Library/Application Support/CrashPlan/.ui_info” (Mac OS X installed for single user)
    /var/lib/crashplan/.ui_info (Linux)
    You will not be able to connect the client unless the client token matches the NAS token. On the client you also need to amend the IP address value after the token to match the Synology NAS IP address (this was a new requirement with CrashPlan version 4.4.1); a scripted example of this edit for a Linux client is shown after this list.
    So, using the example above, your computer’s CrashPlan client config file would be edited to:
    4243,9ac9b642-ba26-4578-b705-124c6efc920b,192.168.1.210
    assuming that the Synology NAS has the IP 192.168.1.210, matching the serviceHost example above
    If it still won’t connect, check that the servicePort value is set to 4243 in the following file:
    C:\ProgramData\CrashPlan\conf\ui_(username).properties (Windows)
    “/Library/Application Support/CrashPlan/ui.properties” (Mac OS X installed for all users)
    “~/Library/Application Support/CrashPlan/ui.properties” (Mac OS X installed for single user)
    /usr/local/crashplan/conf/ui.properties (Linux)
  • As a result of the nightmarish complexity of recent product changes, Code42 has now published a support article with more detail on running headless systems, including config file locations on all supported operating systems and for ‘all users’ versus single-user installs.
  • You should disable the CrashPlan service on your computer if you intend only to use the client. In Windows, open the Services section in Computer Management and stop the CrashPlan Backup Service. In the service Properties set the Startup Type to Manual. You can also disable the CrashPlan System Tray notification application by removing it from Task Manager > More Details > Start-up Tab (Windows 8/Windows 10) or the All Users Startup Start Menu folder (Windows 7).
    To accomplish the same on Mac OS X, run the following commands one by one:

    sudo launchctl unload /Library/LaunchDaemons/com.crashplan.engine.plist
    sudo mv /Library/LaunchDaemons/com.crashplan.engine.plist /Library/LaunchDaemons/com.crashplan.engine.plist.bak

    The CrashPlan menu bar application can be disabled in System Preferences > Users & Groups > Current User > Login Items
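
To illustrate the .ui_info edit described earlier in this section, here is a minimal sketch for a Linux client, using the example token from above and assuming the NAS is at 192.168.1.210 (substitute your own token and IP address; on Windows and OS X edit the corresponding file from the list above in a text editor instead):

#back up the existing client file, then overwrite it with the port, the NAS token and the NAS IP address
sudo cp /var/lib/crashplan/.ui_info /var/lib/crashplan/.ui_info.bak
echo "4243,9ac9b642-ba26-4578-b705-124c6efc920b,192.168.1.210" | sudo tee /var/lib/crashplan/.ui_info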

 

Notes

  • The package downloads the CrashPlan installer directly from Code 42 Software, following acceptance of their EULA. I am complying with their wish that no one redistributes it.
  • Real-time backup does not work on PowerPC systems for some unknown reason, despite many attempts to cross compile libjna, and attempts to use binaries taken from various Debian distros (methods that work for the other supported CPU architectures).
  • The engine daemon script checks the amount of system RAM and scales the Java heap size appropriately (up to a default maximum of 1024MB on systems with more than 1GB of RAM). If you are backing up a large backup set you can override this persistently by editing /var/packages/CrashPlan/target/syno_package.vars (an example is shown after this list). If you are considering buying a NAS purely to use CrashPlan and intend to back up more than a few hundred GB, I strongly advise buying one of the models with upgradeable RAM, because memory is very limited on the cheaper models. Years ago I found that a 512MB heap was insufficient to back up more than 2TB of files on a Windows server: the backup engine kept restarting every few minutes until I increased the heap to 1024MB. Many users of the package have found that they have to increase the heap size or CrashPlan will halt its activity. This can be mitigated by dividing your backup into several smaller backup sets which are scheduled to be protected at different times. Note that from package version 0041, using the dedicated JRE on a 64bit Intel NAS allows a heap size greater than 4GB, since that JRE is 64bit (this requires DSM 6.0 in most cases).
  • The default location for saving friends’ backups is set to /volume1/crashplan/backupArchives (where /volume1 is your primary storage volume) to eliminate the chance of them being destroyed accidentally by uninstalling the package.
  • If you need to manage CrashPlan from a remote location, I suggest you do so using SSH tunnelling as per this support document.
  • The package supports upgrading to future versions while preserving the machine identity, logs, login details, and cache. Upgrades can now take place without requiring a login from the client afterwards.
  • If you remove the package completely and re-install it later, you can re-attach to previous backups. When you log in to the Desktop Client with your existing account after a re-install, you can select “adopt computer” to merge the records, and preserve your existing backups. I haven’t tested whether this also re-attaches links to friends’ CrashPlan computers and backup sets, though the latter does seem possible in the Friends section of the GUI. It’s probably a good idea to test that this survives a package reinstall before you start relying on it. Sometimes, particularly with CrashPlan PRO I think, the adopt option is not offered. In this case you can log into CrashPlan Central and retrieve your computer’s GUID. On the CrashPlan client, double-click on the logo in the top right and you’ll enter a command line mode. You can use the GUID command to change the system’s GUID to the one you just retrieved from your account.
  • The log which is displayed in the package’s Log tab is actually the activity history. If you are trying to troubleshoot an issue you will need to use an SSH session to inspect these log files:
    /var/packages/CrashPlan/target/log/engine_output.log
    /var/packages/CrashPlan/target/log/engine_error.log
    /var/packages/CrashPlan/target/log/app.log
  • When CrashPlan downloads and attempts to run an automatic update, the script will most likely fail and stop the package. This is typically caused by syntax differences with the Synology versions of certain Linux shell commands (like rm, mv, or ps). The startup script will attempt to apply the published upgrade the next time the package is started.
  • Although CrashPlan’s activity can be scheduled within the application, in order to save RAM some users may wish to restrict running the CrashPlan engine to specific times of day using the Task Scheduler in DSM Control Panel:
    Schedule service start
    This is particularly useful on ARM systems because CrashPlan currently prevents hibernation while it is running (unresolved issue, reported to Code 42). Note that regardless of real-time backup, by default CrashPlan will scan the whole backup selection for changes at 3:00am. Include this time within your Task Scheduler time window or else CrashPlan will not capture file changes which occurred while it was inactive:
    Schedule Service Start

  • If you decide to sign up for one of CrashPlan’s paid backup services as a result of my work on this, please consider donating using the PayPal button on the right of this page.
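
To illustrate the heap override mentioned in the Notes above, this is what /var/packages/CrashPlan/target/syno_package.vars might look like after editing (the 2048M figure is just an example; the start script only honours the value when it is larger than the one it would have calculated itself, and you need to stop and start the package for it to take effect):

#uncomment to expand Java max heap size beyond prescribed value (will survive upgrades)
#you probably only want more than the recommended 1024M if you're backing up extremely large volumes of files
USR_MAX_HEAP=2048M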
 

Package scripts

For information, here are the package scripts so you can see what it’s going to do. You can get more information about how packages work by reading the Synology 3rd Party Developer Guide.

installer.sh

#!/bin/sh

#--------CRASHPLAN installer script
#--------package maintained at pcloadletter.co.uk


DOWNLOAD_PATH="http://download2.code42.com/installs/linux/install/${SYNOPKG_PKGNAME}"
CP_EXTRACTED_FOLDER="crashplan-install"
OLD_JNA_NEEDED="false"
[ "${SYNOPKG_PKGNAME}" == "CrashPlan" ] && DOWNLOAD_FILE="CrashPlan_4.8.0_Linux.tgz"
[ "${SYNOPKG_PKGNAME}" == "CrashPlanPRO" ] && DOWNLOAD_FILE="CrashPlanPRO_4.8.0_Linux.tgz"
if [ "${SYNOPKG_PKGNAME}" == "CrashPlanPROe" ]; then
  CP_EXTRACTED_FOLDER="${SYNOPKG_PKGNAME}-install"
  OLD_JNA_NEEDED="true"
  [ "${WIZARD_VER_480}" == "true" ] && { CPPROE_VER="4.8.0"; CP_EXTRACTED_FOLDER="crashplan-install"; OLD_JNA_NEEDED="false"; }
  [ "${WIZARD_VER_470}" == "true" ] && { CPPROE_VER="4.7.0"; CP_EXTRACTED_FOLDER="crashplan-install"; OLD_JNA_NEEDED="false"; }
  [ "${WIZARD_VER_460}" == "true" ] && { CPPROE_VER="4.6.0"; CP_EXTRACTED_FOLDER="crashplan-install"; OLD_JNA_NEEDED="false"; }
  [ "${WIZARD_VER_452}" == "true" ] && { CPPROE_VER="4.5.2"; CP_EXTRACTED_FOLDER="crashplan-install"; OLD_JNA_NEEDED="false"; }
  [ "${WIZARD_VER_450}" == "true" ] && { CPPROE_VER="4.5.0"; CP_EXTRACTED_FOLDER="crashplan-install"; OLD_JNA_NEEDED="false"; }
  [ "${WIZARD_VER_441}" == "true" ] && { CPPROE_VER="4.4.1"; CP_EXTRACTED_FOLDER="crashplan-install"; OLD_JNA_NEEDED="false"; }
  [ "${WIZARD_VER_430}" == "true" ] && CPPROE_VER="4.3.0"
  [ "${WIZARD_VER_420}" == "true" ] && CPPROE_VER="4.2.0"
  [ "${WIZARD_VER_370}" == "true" ] && CPPROE_VER="3.7.0"
  [ "${WIZARD_VER_364}" == "true" ] && CPPROE_VER="3.6.4"
  [ "${WIZARD_VER_363}" == "true" ] && CPPROE_VER="3.6.3"
  [ "${WIZARD_VER_3614}" == "true" ] && CPPROE_VER="3.6.1.4"
  [ "${WIZARD_VER_353}" == "true" ] && CPPROE_VER="3.5.3"
  [ "${WIZARD_VER_341}" == "true" ] && CPPROE_VER="3.4.1"
  [ "${WIZARD_VER_33}" == "true" ] && CPPROE_VER="3.3"
  DOWNLOAD_FILE="CrashPlanPROe_${CPPROE_VER}_Linux.tgz"
fi
DOWNLOAD_URL="${DOWNLOAD_PATH}/${DOWNLOAD_FILE}"
CPI_FILE="${SYNOPKG_PKGNAME}_*.cpi"
OPTDIR="${SYNOPKG_PKGDEST}"
VARS_FILE="${OPTDIR}/install.vars"
SYNO_CPU_ARCH="`uname -m`"
[ "${SYNO_CPU_ARCH}" == "x86_64" ] && SYNO_CPU_ARCH="i686"
[ "${SYNO_CPU_ARCH}" == "armv5tel" ] && SYNO_CPU_ARCH="armel"
[ "${SYNOPKG_DSM_ARCH}" == "armada375" ] && SYNO_CPU_ARCH="armv7l"
[ "${SYNOPKG_DSM_ARCH}" == "armada38x" ] && SYNO_CPU_ARCH="armhf"
[ "${SYNOPKG_DSM_ARCH}" == "comcerto2k" ] && SYNO_CPU_ARCH="armhf"
[ "${SYNOPKG_DSM_ARCH}" == "alpine" ] && SYNO_CPU_ARCH="armhf"
[ "${SYNOPKG_DSM_ARCH}" == "alpine4k" ] && SYNO_CPU_ARCH="armhf"
[ "${SYNOPKG_DSM_ARCH}" == "monaco" ] && SYNO_CPU_ARCH="armhf"
NATIVE_BINS_URL="http://packages.pcloadletter.co.uk/downloads/crashplan-native-${SYNO_CPU_ARCH}.tar.xz"   
NATIVE_BINS_FILE="`echo ${NATIVE_BINS_URL} | sed -r "s%^.*/(.*)%\1%"`"
OLD_JNA_URL="http://packages.pcloadletter.co.uk/downloads/crashplan-native-old-${SYNO_CPU_ARCH}.tar.xz"   
OLD_JNA_FILE="`echo ${OLD_JNA_URL} | sed -r "s%^.*/(.*)%\1%"`"
INSTALL_FILES="${DOWNLOAD_URL} ${NATIVE_BINS_URL}"
[ "${OLD_JNA_NEEDED}" == "true" ] && INSTALL_FILES="${INSTALL_FILES} ${OLD_JNA_URL}"
TEMP_FOLDER="`find / -maxdepth 2 -path '/volume?/@tmp' | head -n 1`"
#the Manifest folder is where friends' backup data is stored
#we set it outside the app folder so it persists after a package uninstall
MANIFEST_FOLDER="/`echo $TEMP_FOLDER | cut -f2 -d'/'`/crashplan"
LOG_FILE="${SYNOPKG_PKGDEST}/log/history.log.0"
UPGRADE_FILES="syno_package.vars conf/my.service.xml conf/service.login conf/service.model"
UPGRADE_FOLDERS="log cache"
PUBLIC_FOLDER="`synoshare --get public | sed -r "/Path/!d;s/^.*\[(.*)\].*$/\1/"`"
#dedicated JRE section
if [ "${WIZARD_JRE_CP}" == "true" ]; then
  DOWNLOAD_URL="http://tinyurl.com/javaembed"
  EXTRACTED_FOLDER="ejdk1.8.0_101"
  #detect systems capable of running 64bit JRE which can address more than 4GB of RAM
  [ "${SYNOPKG_DSM_ARCH}" == "x64" ] && SYNO_CPU_ARCH="x64"
  [ "`uname -m`" == "x86_64" ] && [ ${SYNOPKG_DSM_VERSION_MAJOR} -ge 6 ] && SYNO_CPU_ARCH="x64"
  if [ "${SYNO_CPU_ARCH}" == "armel" ]; then
    JAVA_BINARY="ejdk-8u101-linux-arm-sflt.tar.gz"
    JAVA_BUILD="ARMv5/ARMv6/ARMv7 Linux - SoftFP ABI, Little Endian 2"
  elif [ "${SYNO_CPU_ARCH}" == "armv7l" ]; then
    JAVA_BINARY="ejdk-8u101-linux-arm-sflt.tar.gz"
    JAVA_BUILD="ARMv5/ARMv6/ARMv7 Linux - SoftFP ABI, Little Endian 2"
  elif [ "${SYNO_CPU_ARCH}" == "armhf" ]; then
    JAVA_BINARY="ejdk-8u101-linux-armv6-vfp-hflt.tar.gz"
    JAVA_BUILD="ARMv6/ARMv7 Linux - VFP, HardFP ABI, Little Endian 1"
  elif [ "${SYNO_CPU_ARCH}" == "ppc" ]; then
    #Oracle have discontinued Java 8 for PowerPC after update 6
    JAVA_BINARY="ejdk-8u6-fcs-b23-linux-ppc-e500v2-12_jun_2014.tar.gz"
    JAVA_BUILD="Power Architecture Linux - Headless - e500v2 with double-precision SPE Floating Point Unit"
    EXTRACTED_FOLDER="ejdk1.8.0_06"
    DOWNLOAD_URL="http://tinyurl.com/java8ppc"
  elif [ "${SYNO_CPU_ARCH}" == "i686" ]; then
    JAVA_BINARY="ejdk-8u101-linux-i586.tar.gz"
    JAVA_BUILD="x86 Linux Small Footprint - Headless"
  elif [ "${SYNO_CPU_ARCH}" == "x64" ]; then
    JAVA_BINARY="jre-8u101-linux-x64.tar.gz"
    JAVA_BUILD="Linux x64"
    EXTRACTED_FOLDER="jre1.8.0_101"
    DOWNLOAD_URL="http://tinyurl.com/java8x64"
  fi
fi
JAVA_BINARY=`echo ${JAVA_BINARY} | cut -f1 -d'.'`
source /etc/profile


pre_checks ()
{
  #These checks are called from preinst and from preupgrade functions to prevent failures resulting in a partially upgraded package
  if [ "${WIZARD_JRE_CP}" == "true" ]; then
    synoshare --get public > /dev/null || {
      echo "A shared folder called 'public' could not be found - note this name is case-sensitive. " >> $SYNOPKG_TEMP_LOGFILE
      echo "Please create this using the Shared Folder DSM Control Panel and try again." >> $SYNOPKG_TEMP_LOGFILE
      exit 1
    }

    JAVA_BINARY_FOUND=
    [ -f ${PUBLIC_FOLDER}/${JAVA_BINARY}.tar.gz ] && JAVA_BINARY_FOUND=true
    [ -f ${PUBLIC_FOLDER}/${JAVA_BINARY}.tar ] && JAVA_BINARY_FOUND=true
    [ -f ${PUBLIC_FOLDER}/${JAVA_BINARY}.tar.tar ] && JAVA_BINARY_FOUND=true
    [ -f ${PUBLIC_FOLDER}/${JAVA_BINARY}.gz ] && JAVA_BINARY_FOUND=true
     
    if [ -z ${JAVA_BINARY_FOUND} ]; then
      echo "Java binary bundle not found. " >> $SYNOPKG_TEMP_LOGFILE
      echo "I was expecting the file ${PUBLIC_FOLDER}/${JAVA_BINARY}.tar.gz. " >> $SYNOPKG_TEMP_LOGFILE
      echo "Please agree to the Oracle licence at ${DOWNLOAD_URL}, then download the '${JAVA_BUILD}' package" >> $SYNOPKG_TEMP_LOGFILE
      echo "and place it in the 'public' shared folder on your NAS. This download cannot be automated even if " >> $SYNOPKG_TEMP_LOGFILE
      echo "displaying a package EULA could potentially cover the legal aspect, because files hosted on Oracle's " >> $SYNOPKG_TEMP_LOGFILE
      echo "server are protected by a session cookie requiring a JavaScript enabled browser." >> $SYNOPKG_TEMP_LOGFILE
      exit 1
    fi
  else
    if [ -z ${JAVA_HOME} ]; then
      echo "Java is not installed or not properly configured. JAVA_HOME is not defined. " >> $SYNOPKG_TEMP_LOGFILE
      echo "Download and install the Java Synology package from http://wp.me/pVshC-z5" >> $SYNOPKG_TEMP_LOGFILE
      exit 1
    fi

    if [ ! -f ${JAVA_HOME}/bin/java ]; then
      echo "Java is not installed or not properly configured. The Java binary could not be located. " >> $SYNOPKG_TEMP_LOGFILE
      echo "Download and install the Java Synology package from http://wp.me/pVshC-z5" >> $SYNOPKG_TEMP_LOGFILE
      exit 1
    fi

    if [ "${WIZARD_JRE_SYS}" == "true" ]; then
      JAVA_VER=`java -version 2>&1 | sed -r "/^.* version/!d;s/^.* version \"[0-9]\.([0-9]).*$/\1/"`
      if [ ${JAVA_VER} -lt 8 ]; then
        echo "This version of CrashPlan requires Java 8 or newer. Please update your Java package. "
        exit 1
      fi
    fi
  fi
}


preinst ()
{
  pre_checks
  cd ${TEMP_FOLDER}
  for WGET_URL in ${INSTALL_FILES}
  do
    WGET_FILENAME="`echo ${WGET_URL} | sed -r "s%^.*/(.*)%\1%"`"
    [ -f ${TEMP_FOLDER}/${WGET_FILENAME} ] && rm ${TEMP_FOLDER}/${WGET_FILENAME}
    wget ${WGET_URL}
    if [[ $? != 0 ]]; then
      if [ -d ${PUBLIC_FOLDER} ] && [ -f ${PUBLIC_FOLDER}/${WGET_FILENAME} ]; then
        cp ${PUBLIC_FOLDER}/${WGET_FILENAME} ${TEMP_FOLDER}
      else     
        echo "There was a problem downloading ${WGET_FILENAME} from the official download link, " >> $SYNOPKG_TEMP_LOGFILE
        echo "which was \"${WGET_URL}\" " >> $SYNOPKG_TEMP_LOGFILE
        echo "Alternatively, you may download this file manually and place it in the 'public' shared folder. " >> $SYNOPKG_TEMP_LOGFILE
        exit 1
      fi
    fi
  done
 
  exit 0
}


postinst ()
{
  if [ "${WIZARD_JRE_CP}" == "true" ]; then
    #extract Java (Web browsers love to interfere with .tar.gz files)
    cd ${PUBLIC_FOLDER}
    if [ -f ${JAVA_BINARY}.tar.gz ]; then
      #Firefox seems to be the only browser that leaves it alone
      tar xzf ${JAVA_BINARY}.tar.gz
    elif [ -f ${JAVA_BINARY}.gz ]; then
      #Chrome
      tar xzf ${JAVA_BINARY}.gz
    elif [ -f ${JAVA_BINARY}.tar ]; then
      #Safari
      tar xf ${JAVA_BINARY}.tar
    elif [ -f ${JAVA_BINARY}.tar.tar ]; then
      #Internet Explorer
      tar xzf ${JAVA_BINARY}.tar.tar
    fi
    mv ${EXTRACTED_FOLDER} ${SYNOPKG_PKGDEST}/jre-syno
    JRE_PATH="`find ${OPTDIR}/jre-syno/ -name jre`"
    [ -z ${JRE_PATH} ] && JRE_PATH=${OPTDIR}/jre-syno
    #change owner of folder tree
    chown -R root:root ${SYNOPKG_PKGDEST}
  fi
   
  #extract CPU-specific additional binaries
  mkdir ${SYNOPKG_PKGDEST}/bin
  cd ${SYNOPKG_PKGDEST}/bin
  tar xJf ${TEMP_FOLDER}/${NATIVE_BINS_FILE} && rm ${TEMP_FOLDER}/${NATIVE_BINS_FILE}
  [ "${OLD_JNA_NEEDED}" == "true" ] && tar xJf ${TEMP_FOLDER}/${OLD_JNA_FILE} && rm ${TEMP_FOLDER}/${OLD_JNA_FILE}

  #extract main archive
  cd ${TEMP_FOLDER}
  tar xzf ${TEMP_FOLDER}/${DOWNLOAD_FILE} && rm ${TEMP_FOLDER}/${DOWNLOAD_FILE} 
  
  #extract cpio archive
  cd ${SYNOPKG_PKGDEST}
  cat "${TEMP_FOLDER}/${CP_EXTRACTED_FOLDER}"/${CPI_FILE} | gzip -d -c - | ${SYNOPKG_PKGDEST}/bin/cpio -i --no-preserve-owner
  
  echo "#uncomment to expand Java max heap size beyond prescribed value (will survive upgrades)" > ${SYNOPKG_PKGDEST}/syno_package.vars
  echo "#you probably only want more than the recommended 1024M if you're backing up extremely large volumes of files" >> ${SYNOPKG_PKGDEST}/syno_package.vars
  echo "#USR_MAX_HEAP=1024M" >> ${SYNOPKG_PKGDEST}/syno_package.vars
  echo >> ${SYNOPKG_PKGDEST}/syno_package.vars

  cp ${TEMP_FOLDER}/${CP_EXTRACTED_FOLDER}/scripts/CrashPlanEngine ${OPTDIR}/bin
  cp ${TEMP_FOLDER}/${CP_EXTRACTED_FOLDER}/scripts/run.conf ${OPTDIR}/bin
  mkdir -p ${MANIFEST_FOLDER}/backupArchives    
  
  #save install variables which Crashplan expects its own installer script to create
  echo TARGETDIR=${SYNOPKG_PKGDEST} > ${VARS_FILE}
  echo BINSDIR=/bin >> ${VARS_FILE}
  echo MANIFESTDIR=${MANIFEST_FOLDER}/backupArchives >> ${VARS_FILE}
  #leave these ones out which should help upgrades from Code42 to work (based on examining an upgrade script)
  #echo INITDIR=/etc/init.d >> ${VARS_FILE}
  #echo RUNLVLDIR=/usr/syno/etc/rc.d >> ${VARS_FILE}
  echo INSTALLDATE=`date +%Y%m%d` >> ${VARS_FILE}
  [ "${WIZARD_JRE_CP}" == "true" ] && echo JAVACOMMON=${JRE_PATH}/bin/java >> ${VARS_FILE}
  [ "${WIZARD_JRE_SYS}" == "true" ] && echo JAVACOMMON=\${JAVA_HOME}/bin/java >> ${VARS_FILE}
  cat ${TEMP_FOLDER}/${CP_EXTRACTED_FOLDER}/install.defaults >> ${VARS_FILE}
  
  #remove temp files
  rm -r ${TEMP_FOLDER}/${CP_EXTRACTED_FOLDER}
  
  #add firewall config
  /usr/syno/bin/servicetool --install-configure-file --package /var/packages/${SYNOPKG_PKGNAME}/scripts/${SYNOPKG_PKGNAME}.sc > /dev/null
  
  #amend CrashPlanPROe client version
  [ "${SYNOPKG_PKGNAME}" == "CrashPlanPROe" ] && sed -i -r "s/^version=\".*(-.*$)/version=\"${CPPROE_VER}\1/" /var/packages/${SYNOPKG_PKGNAME}/INFO

  exit 0
}


preuninst ()
{
  `dirname $0`/start-stop-status stop

  exit 0
}


postuninst ()
{
  if [ -f ${SYNOPKG_PKGDEST}/syno_package.vars ]; then
    source ${SYNOPKG_PKGDEST}/syno_package.vars
  fi
  [ -e ${OPTDIR}/lib/libffi.so.5 ] && rm ${OPTDIR}/lib/libffi.so.5

  #delete symlink if it no longer resolves - PowerPC only
  if [ ! -e /lib/libffi.so.5 ]; then
    [ -L /lib/libffi.so.5 ] && rm /lib/libffi.so.5
  fi

  #remove firewall config
  if [ "${SYNOPKG_PKG_STATUS}" == "UNINSTALL" ]; then
    /usr/syno/bin/servicetool --remove-configure-file --package ${SYNOPKG_PKGNAME}.sc > /dev/null
  fi

 exit 0
}


preupgrade ()
{
  `dirname $0`/start-stop-status stop
  pre_checks
  #if identity exists back up config
  if [ -f /var/lib/crashplan/.identity ]; then
    mkdir -p ${SYNOPKG_PKGDEST}/../${SYNOPKG_PKGNAME}_data_mig/conf
    for FILE_TO_MIGRATE in ${UPGRADE_FILES}; do
      if [ -f ${OPTDIR}/${FILE_TO_MIGRATE} ]; then
        cp ${OPTDIR}/${FILE_TO_MIGRATE} ${SYNOPKG_PKGDEST}/../${SYNOPKG_PKGNAME}_data_mig/${FILE_TO_MIGRATE}
      fi
    done
    for FOLDER_TO_MIGRATE in ${UPGRADE_FOLDERS}; do
      if [ -d ${OPTDIR}/${FOLDER_TO_MIGRATE} ]; then
        mv ${OPTDIR}/${FOLDER_TO_MIGRATE} ${SYNOPKG_PKGDEST}/../${SYNOPKG_PKGNAME}_data_mig
      fi
    done
  fi

  exit 0
}


postupgrade ()
{
  #use the migrated identity and config data from the previous version
  if [ -f ${SYNOPKG_PKGDEST}/../${SYNOPKG_PKGNAME}_data_mig/conf/my.service.xml ]; then
    for FILE_TO_MIGRATE in ${UPGRADE_FILES}; do
      if [ -f ${SYNOPKG_PKGDEST}/../${SYNOPKG_PKGNAME}_data_mig/${FILE_TO_MIGRATE} ]; then
        mv ${SYNOPKG_PKGDEST}/../${SYNOPKG_PKGNAME}_data_mig/${FILE_TO_MIGRATE} ${OPTDIR}/${FILE_TO_MIGRATE}
      fi
    done
    for FOLDER_TO_MIGRATE in ${UPGRADE_FOLDERS}; do
    if [ -d ${SYNOPKG_PKGDEST}/../${SYNOPKG_PKGNAME}_data_mig/${FOLDER_TO_MIGRATE} ]; then
      mv ${SYNOPKG_PKGDEST}/../${SYNOPKG_PKGNAME}_data_mig/${FOLDER_TO_MIGRATE} ${OPTDIR}
    fi
    done
    rmdir ${SYNOPKG_PKGDEST}/../${SYNOPKG_PKGNAME}_data_mig/conf
    rmdir ${SYNOPKG_PKGDEST}/../${SYNOPKG_PKGNAME}_data_mig
    
    #make CrashPlan log entry
    TIMESTAMP="`date "+%D %I:%M%p"`"
    echo "I ${TIMESTAMP} Synology Package Center updated ${SYNOPKG_PKGNAME} to version ${SYNOPKG_PKGVER}" >> ${LOG_FILE}
  fi
  
  exit 0
}
 

start-stop-status.sh

#!/bin/sh

#--------CRASHPLAN start-stop-status script
#--------package maintained at pcloadletter.co.uk


TEMP_FOLDER="`find / -maxdepth 2 -path '/volume?/@tmp' | head -n 1`"
MANIFEST_FOLDER="/`echo $TEMP_FOLDER | cut -f2 -d'/'`/crashplan" 
ENGINE_CFG="run.conf"
PKG_FOLDER="`dirname $0 | cut -f1-4 -d'/'`"
DNAME="`dirname $0 | cut -f4 -d'/'`"
OPTDIR="${PKG_FOLDER}/target"
PID_FILE="${OPTDIR}/${DNAME}.pid"
DLOG="${OPTDIR}/log/history.log.0"
CFG_PARAM="SRV_JAVA_OPTS"
JAVA_MIN_HEAP=`grep "^${CFG_PARAM}=" "${OPTDIR}/bin/${ENGINE_CFG}" | sed -r "s/^.*-Xms([0-9]+)[Mm] .*$/\1/"` 
SYNO_CPU_ARCH="`uname -m`"
TIMESTAMP="`date "+%D %I:%M%p"`"
FULL_CP="${OPTDIR}/lib/com.backup42.desktop.jar:${OPTDIR}/lang"
source ${OPTDIR}/install.vars
source /etc/profile
source /root/.profile


start_daemon ()
{
  #check persistent variables from syno_package.vars
  USR_MAX_HEAP=0
  if [ -f ${OPTDIR}/syno_package.vars ]; then
    source ${OPTDIR}/syno_package.vars
  fi
  USR_MAX_HEAP=`echo $USR_MAX_HEAP | sed -e "s/[mM]//"`

  #do we need to restore the identity file - has a DSM upgrade scrubbed /var/lib/crashplan?
  if [ ! -e /var/lib/crashplan ]; then
    mkdir /var/lib/crashplan
    [ -e ${OPTDIR}/conf/var-backup/.identity ] && cp ${OPTDIR}/conf/var-backup/.identity /var/lib/crashplan/
  fi

  #fix up some of the binary paths and fix some command syntax for busybox 
  #moved this to start-stop-status.sh from installer.sh because Code42 push updates and these
  #new scripts will need this treatment too
  find ${OPTDIR}/ -name "*.sh" | while IFS="" read -r FILE_TO_EDIT; do
    if [ -e ${FILE_TO_EDIT} ]; then
      #this list of substitutions will probably need expanding as new CrashPlan updates are released
      sed -i "s%^#!/bin/bash%#!/bin/sh%" "${FILE_TO_EDIT}"
      sed -i -r "s%(^\s*)(/bin/cpio |cpio ) %\1/${OPTDIR}/bin/cpio %" "${FILE_TO_EDIT}"
      sed -i -r "s%(^\s*)(/bin/ps|ps) [^w][^\|]*\|%\1/bin/ps w \|%" "${FILE_TO_EDIT}"
      sed -i -r "s%\`ps [^w][^\|]*\|%\`ps w \|%" "${FILE_TO_EDIT}"
      sed -i -r "s%^ps [^w][^\|]*\|%ps w \|%" "${FILE_TO_EDIT}"
      sed -i "s/rm -fv/rm -f/" "${FILE_TO_EDIT}"
      sed -i "s/mv -fv/mv -f/" "${FILE_TO_EDIT}"
    fi
  done

  #use this daemon init script rather than the unreliable Code42 stock one which greps the ps output
  sed -i "s%^ENGINE_SCRIPT=.*$%ENGINE_SCRIPT=$0%" ${OPTDIR}/bin/restartLinux.sh

  #any downloaded upgrade script will usually have failed despite the above changes
  #so ignore the script and explicitly extract the new java code using the chrisnelson.ca method 
  #thanks to Jeff Bingham for tweaks 
  UPGRADE_JAR=`find ${OPTDIR}/upgrade -maxdepth 1 -name "*.jar" | tail -1`
  if [ -n "${UPGRADE_JAR}" ]; then
    rm ${OPTDIR}/*.pid > /dev/null
 
    #make CrashPlan log entry
    echo "I ${TIMESTAMP} Synology extracting upgrade from ${UPGRADE_JAR}" >> ${DLOG}

    UPGRADE_VER=`echo ${SCRIPT_HOME} | sed -r "s/^.*\/([0-9_]+)\.[0-9]+/\1/"`
    #DSM 6.0 no longer includes unzip, use 7z instead
    unzip -o ${OPTDIR}/upgrade/${UPGRADE_VER}.jar "*.jar" -d ${OPTDIR}/lib/ || 7z e -y ${OPTDIR}/upgrade/${UPGRADE_VER}.jar "*.jar" -o${OPTDIR}/lib/ > /dev/null
    unzip -o ${OPTDIR}/upgrade/${UPGRADE_VER}.jar "lang/*" -d ${OPTDIR} || 7z e -y ${OPTDIR}/upgrade/${UPGRADE_VER}.jar "lang/*" -o${OPTDIR} > /dev/null
    mv ${UPGRADE_JAR} ${TEMP_FOLDER}/ > /dev/null
    exec $0
  fi

  #updates may also overwrite our native binaries
  [ -e ${OPTDIR}/bin/libffi.so.5 ] && cp -f ${OPTDIR}/bin/libffi.so.5 ${OPTDIR}/lib/
  [ -e ${OPTDIR}/bin/libjtux.so ] && cp -f ${OPTDIR}/bin/libjtux.so ${OPTDIR}/
  [ -e ${OPTDIR}/bin/jna-3.2.5.jar ] && cp -f ${OPTDIR}/bin/jna-3.2.5.jar ${OPTDIR}/lib/
  if [ -e ${OPTDIR}/bin/jna.jar ] && [ -e ${OPTDIR}/lib/jna.jar ]; then
    cp -f ${OPTDIR}/bin/jna.jar ${OPTDIR}/lib/
  fi

  #create or repair libffi.so.5 symlink if a DSM upgrade has removed it - PowerPC only
  if [ -e ${OPTDIR}/lib/libffi.so.5 ]; then
    if [ ! -e /lib/libffi.so.5 ]; then
      #if it doesn't exist, but is still a link then it's a broken link and should be deleted first
      [ -L /lib/libffi.so.5 ] && rm /lib/libffi.so.5
      ln -s ${OPTDIR}/lib/libffi.so.5 /lib/libffi.so.5
    fi
  fi

  #set appropriate Java max heap size
  RAM=$((`free | grep Mem: | sed -e "s/^ *Mem: *\([0-9]*\).*$/\1/"`/1024))
  if [ $RAM -le 128 ]; then
    JAVA_MAX_HEAP=80
  elif [ $RAM -le 256 ]; then
    JAVA_MAX_HEAP=192
  elif [ $RAM -le 512 ]; then
    JAVA_MAX_HEAP=384
  elif [ $RAM -le 1024 ]; then
    JAVA_MAX_HEAP=512
  elif [ $RAM -gt 1024 ]; then
    JAVA_MAX_HEAP=1024
  fi
  if [ $USR_MAX_HEAP -gt $JAVA_MAX_HEAP ]; then
    JAVA_MAX_HEAP=${USR_MAX_HEAP}
  fi   
  if [ $JAVA_MAX_HEAP -lt $JAVA_MIN_HEAP ]; then
    #can't have a max heap lower than min heap (ARM low RAM systems)
    JAVA_MAX_HEAP=$JAVA_MIN_HEAP
  fi
  sed -i -r "s/(^${CFG_PARAM}=.*) -Xmx[0-9]+[mM] (.*$)/\1 -Xmx${JAVA_MAX_HEAP}m \2/" "${OPTDIR}/bin/${ENGINE_CFG}"
  
  #disable the use of the x86-optimized external Fast MD5 library if running on ARM and PPC CPUs
  #seems to be the default behaviour now but that may change again
  [ "${SYNO_CPU_ARCH}" == "x86_64" ] && SYNO_CPU_ARCH="i686"
  if [ "${SYNO_CPU_ARCH}" != "i686" ]; then
    grep "^${CFG_PARAM}=.*c42\.native\.md5\.enabled" "${OPTDIR}/bin/${ENGINE_CFG}" > /dev/null \
     || sed -i -r "s/(^${CFG_PARAM}=\".*)\"$/\1 -Dc42.native.md5.enabled=false\"/" "${OPTDIR}/bin/${ENGINE_CFG}"
  fi

  #move the Java temp directory from the default of /tmp
  grep "^${CFG_PARAM}=.*Djava\.io\.tmpdir" "${OPTDIR}/bin/${ENGINE_CFG}" > /dev/null \
   || sed -i -r "s%(^${CFG_PARAM}=\".*)\"$%\1 -Djava.io.tmpdir=${TEMP_FOLDER}\"%" "${OPTDIR}/bin/${ENGINE_CFG}"

  #now edit the XML config file, which only exists after first run
  if [ -f ${OPTDIR}/conf/my.service.xml ]; then

    #allow direct connections from CrashPlan Desktop client on remote systems
    #you must edit the value of serviceHost in conf/ui.properties on the client you connect with
    #users report that this value is sometimes reset so now it's set every service startup 
    sed -i "s/<serviceHost>127\.0\.0\.1<\/serviceHost>/<serviceHost>0\.0\.0\.0<\/serviceHost>/" "${OPTDIR}/conf/my.service.xml"
    #default changed in CrashPlan 4.3
    sed -i "s/<serviceHost>localhost<\/serviceHost>/<serviceHost>0\.0\.0\.0<\/serviceHost>/" "${OPTDIR}/conf/my.service.xml"
    #since CrashPlan 4.4 another config file to allow remote console connections
    sed -i "s/127\.0\.0\.1/0\.0\.0\.0/" /var/lib/crashplan/.ui_info
     
    #this change is made only once in case you want to customize the friends' backup location
    if [ "${MANIFEST_PATH_SET}" != "True" ]; then

      #keep friends' backup data outside the application folder to make accidental deletion less likely 
      sed -i "s%<manifestPath>.*</manifestPath>%<manifestPath>${MANIFEST_FOLDER}/backupArchives/</manifestPath>%" "${OPTDIR}/conf/my.service.xml"
      echo "MANIFEST_PATH_SET=True" >> ${OPTDIR}/syno_package.vars
    fi

    #since CrashPlan version 3.5.3 the value javaMemoryHeapMax also needs setting to match that used in bin/run.conf
    sed -i -r "s%(<javaMemoryHeapMax>)[0-9]+[mM](</javaMemoryHeapMax>)%\1${JAVA_MAX_HEAP}m\2%" "${OPTDIR}/conf/my.service.xml"

    #make sure CrashPlan is not binding to the IPv6 stack
    grep "\-Djava\.net\.preferIPv4Stack=true" "${OPTDIR}/bin/${ENGINE_CFG}" > /dev/null \
     || sed -i -r "s/(^${CFG_PARAM}=\".*)\"$/\1 -Djava.net.preferIPv4Stack=true\"/" "${OPTDIR}/bin/${ENGINE_CFG}"
   else
    echo "Check the package log to ensure the package has started successfully, then stop and restart the package to allow desktop client connections." > "${SYNOPKG_TEMP_LOGFILE}"
  fi

  #increase the system-wide maximum number of open files from Synology default of 24466
  [ `cat /proc/sys/fs/file-max` -lt 65536 ] && echo "65536" > /proc/sys/fs/file-max

  #raise the maximum open file count from the Synology default of 1024 - thanks Casper K. for figuring this out
  #http://support.code42.com/Administrator/3.6_And_4.0/Troubleshooting/Too_Many_Open_Files
  ulimit -n 65536

  #ensure that Code 42 have not amended install.vars to force the use of their own (Intel) JRE
  if [ -e ${OPTDIR}/jre-syno ]; then
    JRE_PATH="`find ${OPTDIR}/jre-syno/ -name jre`"
    [ -z ${JRE_PATH} ] && JRE_PATH=${OPTDIR}/jre-syno
    sed -i -r "s|^(JAVACOMMON=).*$|\1\${JRE_PATH}/bin/java|" ${OPTDIR}/install.vars
    
    #if missing, set timezone and locale for dedicated JRE   
    if [ -z ${TZ} ]; then
      SYNO_TZ=`cat /etc/synoinfo.conf | grep timezone | cut -f2 -d'"'`
      #fix for DST time in DSM 5.2 thanks to MinimServer Syno package author
      [ -e /usr/share/zoneinfo/Timezone/synotztable.json ] \
       && SYNO_TZ=`jq ".${SYNO_TZ} | .nameInTZDB" /usr/share/zoneinfo/Timezone/synotztable.json | sed -e "s/\"//g"` \
       || SYNO_TZ=`grep "^${SYNO_TZ}" /usr/share/zoneinfo/Timezone/tzname | sed -e "s/^.*= //"`
      export TZ=${SYNO_TZ}
    fi
    [ -z ${LANG} ] && export LANG=en_US.utf8
    export CLASSPATH=.:${OPTDIR}/jre-syno/lib

  else
    sed -i -r "s|^(JAVACOMMON=).*$|\1\${JAVA_HOME}/bin/java|" ${OPTDIR}/install.vars
  fi

  source ${OPTDIR}/bin/run.conf
  source ${OPTDIR}/install.vars
  cd ${OPTDIR}
  $JAVACOMMON $SRV_JAVA_OPTS -classpath $FULL_CP com.backup42.service.CPService > ${OPTDIR}/log/engine_output.log 2> ${OPTDIR}/log/engine_error.log &
  if [ $! -gt 0 ]; then
    echo $! > $PID_FILE
    renice 19 $! > /dev/null
    if [ -z "${SYNOPKG_PKGDEST}" ]; then
      #script was manually invoked, need this to show status change in Package Center      
      [ -e ${PKG_FOLDER}/enabled ] || touch ${PKG_FOLDER}/enabled
    fi
  else
    echo "${DNAME} failed to start, check ${OPTDIR}/log/engine_error.log" > "${SYNOPKG_TEMP_LOGFILE}"
    echo "${DNAME} failed to start, check ${OPTDIR}/log/engine_error.log" >&2
    exit 1
  fi
}

stop_daemon ()
{
  echo "I ${TIMESTAMP} Stopping ${DNAME}" >> ${DLOG}
  kill `cat ${PID_FILE}`
  wait_for_status 1 20 || kill -9 `cat ${PID_FILE}`
  rm -f ${PID_FILE}
  if [ -z ${SYNOPKG_PKGDEST} ]; then
    #script was manually invoked, need this to show status change in Package Center
    [ -e ${PKG_FOLDER}/enabled ] && rm ${PKG_FOLDER}/enabled
  fi
  #backup identity file in case DSM upgrade removes it
  [ -e ${OPTDIR}/conf/var-backup ] || mkdir ${OPTDIR}/conf/var-backup 
  cp /var/lib/crashplan/.identity ${OPTDIR}/conf/var-backup/
}

daemon_status ()
{
  if [ -f ${PID_FILE} ] && kill -0 `cat ${PID_FILE}` > /dev/null 2>&1; then
    return
  fi
  rm -f ${PID_FILE}
  return 1
}

wait_for_status ()
{
  counter=$2
  while [ ${counter} -gt 0 ]; do
    daemon_status
    [ $? -eq $1 ] && return
    let counter=counter-1
    sleep 1
  done
  return 1
}


case $1 in
  start)
    if daemon_status; then
      echo ${DNAME} is already running with PID `cat ${PID_FILE}`
      exit 0
    else
      echo Starting ${DNAME} ...
      start_daemon
      exit $?
    fi
  ;;

  stop)
    if daemon_status; then
      echo Stopping ${DNAME} ...
      stop_daemon
      exit $?
    else
      echo ${DNAME} is not running
      exit 0
    fi
  ;;

  restart)
    stop_daemon
    start_daemon
    exit $?
  ;;

  status)
    if daemon_status; then
      echo ${DNAME} is running with PID `cat ${PID_FILE}`
      exit 0
    else
      echo ${DNAME} is not running
      exit 1
    fi
  ;;

  log)
    echo "${DLOG}"
    exit 0
  ;;

  *)
    echo "Usage: $0 {start|stop|status|restart}" >&2
    exit 1
  ;;

esac
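
For reference, the script above can also be run manually from an SSH session (the changelog notes this from version 0028 onwards). The path below assumes the standard DSM package layout; depending on how the package installs it, the filename may or may not carry a .sh extension:

#check whether the engine is running, and stop or start it by hand
/var/packages/CrashPlan/scripts/start-stop-status status
/var/packages/CrashPlan/scripts/start-stop-status stop
/var/packages/CrashPlan/scripts/start-stop-status start
#print the path of the history log shown in the package's Log tab
/var/packages/CrashPlan/scripts/start-stop-status log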
 

install_uifile & upgrade_uifile

[
  {
    "step_title": "Client Version Selection",
    "items": [
      {
        "type": "singleselect",
        "desc": "Please select the CrashPlanPROe client version that is appropriate for your backup destination server:",
        "subitems": [
          {
            "key": "WIZARD_VER_480",
            "desc": "4.8.0",
            "defaultValue": true
          },
          {
            "key": "WIZARD_VER_470",
            "desc": "4.7.0",
            "defaultValue": false
          },
          {
            "key": "WIZARD_VER_460",
            "desc": "4.6.0",
            "defaultValue": false
          },
          {
            "key": "WIZARD_VER_452",
            "desc": "4.5.2",
            "defaultValue": false
          },
          {
            "key": "WIZARD_VER_450",
            "desc": "4.5.0",
            "defaultValue": false
          },
          {
            "key": "WIZARD_VER_441",
            "desc": "4.4.1",
            "defaultValue": false
          },
          {
            "key": "WIZARD_VER_430",
            "desc": "4.3.0",
            "defaultValue": false
          },
          {
            "key": "WIZARD_VER_420",
            "desc": "4.2.0",
            "defaultValue": false
          },
          {
            "key": "WIZARD_VER_370",
            "desc": "3.7.0",
            "defaultValue": false
          },
          {
            "key": "WIZARD_VER_364",
            "desc": "3.6.4",
            "defaultValue": false
          },
          {
            "key": "WIZARD_VER_363",
            "desc": "3.6.3",
            "defaultValue": false
          },
          {
            "key": "WIZARD_VER_3614",
            "desc": "3.6.1.4",
            "defaultValue": false
          },
          {
            "key": "WIZARD_VER_353",
            "desc": "3.5.3",
            "defaultValue": false
          },
          {
            "key": "WIZARD_VER_341",
            "desc": "3.4.1",
            "defaultValue": false
          },
          {
            "key": "WIZARD_VER_33",
            "desc": "3.3",
            "defaultValue": false
          }
        ]
      }
    ]
  },
  {
    "step_title": "Java Runtime Environment Selection",
    "items": [
      {
        "type": "singleselect",
        "desc": "Please select the Java version which you would like CrashPlan to use:",
        "subitems": [
          {
            "key": "WIZARD_JRE_SYS",
            "desc": "Default system Java version",
            "defaultValue": false
          },
          {
            "key": "WIZARD_JRE_CP",
            "desc": "Dedicated installation of Java 8",
            "defaultValue": true
          }
        ]
      }
    ]
  }
]
 

Changelog:

  • 0031 Added TCP 4242 to the firewall services (computer to computer connections)
  • 0042 03/Oct/16 – Updated to CrashPlan 4.8.0, Java 8 is now required, added optional dedicated Java 8 Runtime instead of the default system one including 64bit Java support on 64 bit Intel CPUs to permit memory allocation larger than 4GB. Support for non-Intel platforms withdrawn owing to Code42’s reliance on proprietary native code library libc42archive.so
  • 0041 20/Jul/16 – Improved auto-upgrade compatibility (hopefully), added option to have CrashPlan use a dedicated Java 7 Runtime instead of the default system one, including 64bit Java support on 64 bit Intel CPUs to permit memory allocation larger than 4GB
  • 0040 25/May/16 – Added cpio to the path in the running context of start-stop-status.sh
  • 0039 25/May/16 – Updated to CrashPlan 4.7.0, at each launch forced the use of the system JRE over the CrashPlan bundled Intel one, added Maven build of JNA 4.1.0 for ARMv7 systems consistent with the version bundled with CrashPlan
  • 0038 27/Apr/16 – Updated to CrashPlan 4.6.0, and improved support for Code 42 pushed updates
  • 0037 21/Jan/16 – Updated to CrashPlan 4.5.2
  • 0036 14/Dec/15 – Updated to CrashPlan 4.5.0, separate firewall definitions for management client and for friends backup, added support for DS716+ and DS216play
  • 0035 06/Nov/15 – Fixed the update to 4.4.1_59, new installs now listen for remote connections after second startup (was broken from 4.4), updated client install documentation with more file locations and added a link to a new Code42 support doc
    EITHER completely remove and reinstall the package (which will require a rescan of the entire backup set) OR alternatively please delete all except for one of the failed upgrade numbered subfolders in /var/packages/CrashPlan/target/upgrade before upgrading. There will be one folder for each time CrashPlan tried and failed to start since Code42 pushed the update
  • 0034 04/Oct/15 – Updated to CrashPlan 4.4.1, bundled newer JNA native libraries to match those from Code42, PLEASE READ UPDATED BLOG POST INSTRUCTIONS FOR CLIENT INSTALL this version introduced yet another requirement for the client
  • 0033 12/Aug/15 – Fixed version 0032 client connection issue for fresh installs
  • 0032 12/Jul/15 – Updated to CrashPlan 4.3, PLEASE READ UPDATED BLOG POST INSTRUCTIONS FOR CLIENT INSTALL this version introduced an extra requirement, changed update repair to use the chrisnelson.ca method, forced CrashPlan to prefer IPv4 over IPv6 bindings, removed some legacy version migration scripting, updated main blog post documentation
  • 0031 20/May/15 – Updated to CrashPlan 4.2, cross compiled a newer cpio binary for some architectures which were segfaulting while unpacking main CrashPlan archive, added port 4242 to the firewall definition (friend backups), package is now signed with repository private key
  • 0030 16/Feb/15 – Fixed show-stopping issue with version 0029 for systems with more than one volume
  • 0029 21/Jan/15 – Updated to CrashPlan version 3.7.0, improved detection of temp folder (prevent use of /var/@tmp), added support for Annapurna Alpine AL514 CPU (armhf) in DS2015xs, added support for Marvell Armada 375 CPU (armhf) in DS215j, abandoned practical efforts to try to support Code42’s upgrade scripts, abandoned inotify support (realtime backup) on PowerPC after many failed attempts with self-built and pre-built jtux and jna libraries, back-merged older libffi support for old PowerPC binaries after it was removed in 0028 re-write
  • 0028 22/Oct/14 – Substantial re-write:
    Updated to CrashPlan version 3.6.4
    DSM 5.0 or newer is now required
    libjnidispatch.so taken from Debian JNA 3.2.7 package with dependency on newer libffi.so.6 (included in DSM 5.0)
    jna-3.2.5.jar emptied of irrelevant CPU architecture libs to reduce size
    Increased default max heap size from 512MB to 1GB on systems with more than 1GB RAM
    Intel CPUs no longer need the awkward glibc version-faking shim to enable inotify support (for real-time backup)
    Switched to using root account – no more adding account permissions for backup, package upgrades will no longer break this
    DSM Firewall application definition added
    Tested with DSM Task Scheduler to allow backups between certain times of day only, saving RAM when not in use
    Daemon init script now uses a proper PID file instead of Code42’s unreliable method of using grep on the output of ps
    Daemon init script can be run from the command line
    Removal of bash binary dependency now Code42’s CrashPlanEngine script is no longer used
    Removal of nice binary dependency, using BusyBox equivalent renice
    Unified ARMv5 and ARMv7 external binary package (armle)
    Added support for Mindspeed Comcerto 2000 CPU (comcerto2k – armhf) in DS414j
    Added support for Intel Atom C2538 (avoton) CPU in DS415+
    Added support to choose which version of CrashPlan PROe client to download, since some servers may still require legacy versions
    Switched to .tar.xz compression for native binaries to reduce web hosting footprint
  • 0027 20/Mar/14 – Fixed open file handle limit for very large backup sets (ulimit fix)
  • 0026 16/Feb/14 – Updated all CrashPlan clients to version 3.6.3, improved handling of Java temp files
  • 0025 30/Jan/14 – glibc version shim no longer used on Intel Synology models running DSM 5.0
  • 0024 30/Jan/14 – Updated to CrashPlan PROe 3.6.1.4 and added support for PowerPC 2010 Synology models running DSM 5.0
  • 0023 30/Jan/14 – Added support for Intel Atom Evansport and Armada XP CPUs in new DSx14 products
  • 0022 10/Jun/13 – Updated all CrashPlan client versions to 3.5.3, compiled native binary dependencies to add support for Armada 370 CPU (DS213j), start-stop-status.sh now updates the new javaMemoryHeapMax value in my.service.xml to the value defined in syno_package.vars
  • 0021 01/Mar/13 – Updated CrashPlan to version 3.5.2
  • 0020 21/Jan/13 – Fixes for DSM 4.2
  • 018 Updated CrashPlan PRO to version 3.4.1
  • 017 Updated CrashPlan and CrashPlan PROe to version 3.4.1, and improved in-app update handling
  • 016 Added support for Freescale QorIQ CPUs in some x13 series Synology models, and installer script now downloads native binaries separately to reduce repo hosting bandwidth, PowerQUICC PowerPC processors in previous Synology generations with older glibc versions are not supported
  • 015 Added support for easy scheduling via cron – see updated Notes section
  • 014 DSM 4.1 user profile permissions fix
  • 013 implemented update handling for future automatic updates from Code 42, and incremented CrashPlanPRO client to release version 3.2.1
  • 012 incremented CrashPlanPROe client to release version 3.3
  • 011 minor fix to allow a wildcard on the cpio archive name inside the main installer package (to fix CP PROe client since Code 42 Software had amended the cpio file version to 3.2.1.2)
  • 010 minor bug fix relating to daemon home directory path
  • 009 rewrote the scripts to be even easier to maintain and unified as much as possible with my imminent CrashPlan PROe server package, fixed a timezone bug (tightened regex matching), moved the script-amending logic from installer.sh to start-stop-status.sh with it now applying to all .sh scripts each startup so perhaps updates from Code42 might work in future, if wget fails to fetch the installer from Code42 the installer will look for the file in the public shared folder
  • 008 merged the 14 package scripts each (7 for ARM, 7 for Intel) for CP, CP PRO, & CP PROe – 42 scripts in total – down to just two! ARM & Intel are now supported by the same package, Intel synos now have working inotify support (Real-Time Backup) thanks to rwojo’s shim to pass the glibc version check, upgrade process now retains login, cache and log data (no more re-scanning), users can specify a persistent larger max heap size for very large backup sets
  • 007 fixed a bug that broke CrashPlan if the Java folder moved (if you changed version)
  • 006 installation now fails without User Home service enabled, fixed Daylight Saving Time support, automated replacing the ARM libffi.so symlink which is destroyed by DSM upgrades, stopped assuming the primary storage volume is /volume1, reset ownership on /var/lib/crashplan and the Friends backup location after installs and upgrades
  • 005 added warning to restart daemon after 1st run, and improved upgrade process again
  • 004 updated to CrashPlan 3.2.1 and improved package upgrade process, forced binding to 0.0.0.0 each startup
  • 003 fixed ownership of /volume1/crashplan folder
  • 002 updated to CrashPlan 3.2
  • 001 30/Jan/12 – initial public release
 
 

5,913 thoughts on “CrashPlan packages for Synology NAS”

  1. jamesdeanreeves

    Well, I stumbled across a nasty little quirk of HyperBackup backing up to Amazon Drive yesterday…

    I lost power yesterday, and even though my UPS kept everything up, if UPS support is enabled on the NAS, DSM puts the NAS into Safe Mode, stopping all services and unmounting the volumes.

    When the power was restored, DSM started all services and mounted the volume. However, the backup I had running via HyperBackup was in a “canceled” state. Starting it again started it at ground zero!

    It had been running for 2 weeks and was a little over 50% complete of 3.4TB… all down the drain!

    There is no option for resuming the backup, so it looks like that sux a big one. If you’ve got small backup sets, it’s not much of an issue. But hundreds of Gig? BIG ISSUE!!!

    It seems that HyperBackup is a little too limited and not robust enough for my needs, and Backblaze is too expensive, so sad to say, I’m going to have to tough it out with CP until there’s a better solution.

    1. Jon Etkins

      I discovered that “feature” during my initial testing, when I deliberately canceled a backup and found that the progress to that point got discarded. Accordingly, I started by selecting only a subset of the directories I wanted to include, ran a backup, added some more, ran another backup, etc., until eventually I had everything backed up. By doing so, I only stood to lose 10-12 hours’ worth of effort and bandwidth in the event of an interruption.

      I also verified that what you lose is *only* the latest data that was being backed up by that task – the existing backups already in the cloud are not affected at all.

  2. Mathew Beall

    Just a quick note re: Amazon Drive – it *does* have a file size limit of 10GB – just so you are aware. I have many .m2ts files that I backup to Crashplan (with no limitation) – but those can’t move up to Amazon Drive. Might not be a problem for most people – but thought I would mention it.

    1. Dimi

      Lovely! It won’t affect me but good to know. I have now cancelled my Amazon Drive service, so after the 3 month free trial I won’t continue.

    2. Jon Etkins

      This may affect CloudSync, but it does not affect HyperBackup, which breaks things up into manageable chunks before it sends them to whatever storage media you have chosen. I have many files much larger than 10GB, and they’re all backing up to Amazon just fine via HyperBackup.

      1. Dimi

        My HyperBackup to Amazon Drive fails regularly with the message that it can’t reach Amazon… I can’t figure out why.
        Also, without any changes to the backup set it takes 8 minutes to back up. Maybe this is normal, but it’s still annoying.

      2. Jon Etkins

        It’s an unattended background process – why does it matter that it takes eight minutes to back up even if there have been no changes. It may be checking for expired versions, or performing some other sort of housekeeping, but why care since it’s only impacting you if you’re sitting there watching it.

      3. Dimi

        Because I only have 130GB of data for now and about to grow that rapidly over the next few months. If the backup window for 130GB of data with 0 changes is 8 min then what would 1TB or more look like with or without changes…
        CP, as much as I hate the product, seems to have a different approach.

      4. Nick

        Crash plan works differently and it can still take time.

        I’ve got 550GB in a HyperBackup job and it doesn’t take long to run when there is nothing new to add.

      5. Jon Etkins

        I now have a total of 1.7TB backed up to Amazon Drive via HyperBackup, and I only started using it three weeks ago – to back up that same amount of data via Crashplan would have taken about a year, I reckon. (I actually used CrashPlan’s “send me a disk drive” option to seed my backups onto their cloud servers, but that’s another story and I hear that they no longer offer that service anyway.)

        Those 1.7 terabytes are broken up into several smaller HyperBackup jobs, each backing up a specific portion of my NAS. The largest – our Homes and a family Shared folder – holds almost 900GB, and last night’s backup, which backed up just over 1GB of new and changed files, took just over five minutes including version rotation.

        Which suggests that the amount of time taken to back up only minor changes seems to be pretty much unaffected by the amount of data already backed up so you can rest easy there, Dimi.

  3. George

    DS214se – ARM based

    Today CP on the NAS tried to download an update to 4.8. I have set CP to prevent the upgrade (as outlined in an earlier post by Per), and that works, but it tries to download the update every hour, logging each attempt. It’s unlikely to stop these attempts, so I might need to move CP off the NAS and use the shared drive approach.

  4. Troy

    Has anyone been able to get 480_223 work? If not, how are you still running with the previous version that Patters released? I’ve been down for about a week now and would really like to get this resolved.

    Reply
    1. Dimi

      No update from patters yet.

      Uninstall CP
      Install CP from patters, don't start the package
      Apply the block update fix mentioned here (chmod 444 update, etc.)
      Start CP
      Wait for sync
      Wait for patters to update package

      Reply
      1. Troy

        Thanks Dimi – I figured that people had to be blocking the update or there would have been a ton more comments saying they were down as well.

  5. Singularity

    All,

    The last bunch of times mine auto-upgraded I just did these quick steps and was back and working within 5 mins!

    1. Stop CrashPlan
    2. ssh in
    3. If you don't have cpio in /bin, do this:
    cd /bin
    ln -s /var/packages/CrashPlan/target/bin/cpio
    4. cd /var/packages/CrashPlan/target/upgrade/LATESTUPGRADEDIR
    5. cat ./upgrade.cpi | gzip -d -c - | cpio -iv
    6. Restart CP and you should be golden!

    I'm using ssh keys to ssh into my NAS as the root user; if you're admin or some other user you'd want to do step 5 as:
    5. sudo sh -c 'cat ./upgrade.cpi | gzip -d -c - | cpio -iv'

    The thing that takes the most time is rm'ing the 43579345789435789 upgrade dirs that are created during the failing upgrade process. After that it's a super quick fix!
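
    For what it's worth, that cleanup can be scripted. A minimal sketch, assuming the default package path and that you want to keep only the newest upgrade dir (review the first listing before running the rm!):

    cd /var/packages/CrashPlan/target/upgrade
    ls -1dt */ | tail -n +2                 # the dirs that would be removed (everything but the newest)
    ls -1dt */ | tail -n +2 | xargs rm -rf  # remove them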

    Good luck!

    Reply
    1. frold

      I get this error:

      admin@DS212j:/var/packages/CrashPlan/target/upgrade/1435813200480_323.1478626570525$ sudo cat ./upgrade.cpi | gzip -d -c – – | cpio -iv
      gzip: –.gz: No such file or directory
      gzip: –.gz: No such file or directory
      cpio: premature end of archive

      Reply
      1. Singularity

        I guess try typing those commands out instead of doing a cut'n'paste; you might have better luck (the blog converts plain hyphens into dashes, which is where those "–.gz" errors come from). And you DO have sudo available, right?

        Also, it looks like it’s a good idea to rename the upgrade script in that dir as well:

        mv upgrade.sh upgrade.sh.old

        That seems to keep it from trying to upgrade over and over..

        Good luck!

  6. rdamazio

    YMMV, but in the process of trying to get the new version to work, I renamed /volume1/@appstore, and “Repair” immediately showed up in the package manager – that worked and even migrated most of my config (except my archive password) over. Working well since.
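
    A rough sketch of a more targeted variant of the same idea (an assumption here, not necessarily what rdamazio did, is that renaming only the CrashPlan folder under @appstore is enough, with the package stopped first); Package Center should then offer "Repair":

    # keep the old files around as a fallback rather than deleting them
    mv /volume1/@appstore/CrashPlan /volume1/@appstore/CrashPlan.bak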

    Reply
  7. cow-brat

    I can't see a CP package in my Synology Package Center. Does anyone know what the problem is, and can I download it manually from anywhere?

    Reply
    1. Matt

      Looks like your Syno architecture is no longer supported, just like mine (DS413).

      It would be great if patters could post a link to manually download the latest version which still works on ARM.

      Reply
      1. Matt

        RE: Jon

        Funny, I cannot find that link.

        Never mind, I switched to Windows Version Backup, which hopefully breaks less often than CP.
        It was good while it lasted, but life goes on.

  8. Chris

    Hi, I had to install CrashPlan again and see it's now missing from your list of applications in the Synology Community Package Center. Do you have plans to add it back?
    Thanks
    Chris

    Reply
  9. Chris Coastie

    Just spotted the end of non-Intel support; looks like it's time to find another provider. Backblaze B2 seems like a good one to try from the comments above.

    Patters – Thanks for all your help over the last year or so :)

    Reply
    1. Tom O'Neill

      Yeah, Patters really kind of buried the lede when he hid the loss of ARM support in the release notes. What a shame! Patters, it was great while it lasted, thanks!

      Reply
      1. Chris Coastie

        I can report that using Backblaze B2 works well. I'm using Synology Hyper Backup to create compressed backups, which I then sync to B2 using the Cloud Sync app. The upload rate from here in the UK is significantly faster (throttled by my internet connection rather than by the service, as tended to happen with CrashPlan), and at $0.005/GB/month it's going to work out a little cheaper (ignoring my 6 months of remaining CP subscription). More importantly, it's all officially supported!! :)

      2. Dimi

        Not sure how I can reply to Chris Coastie's message, so hopefully, Chris Coastie, if you see this please reply.
        What is the reason you do a Hyper Backup and then sync it to B2? I think I know the answer but just want to be 100% clear. If I understand correctly, Hyper Backup gives you a true backup, unlike Cloud Sync to B2, which only mirrors files; and once the entire backup set is synced, B2 holds the latest version of that backup set, which should contain all your backups with versions going back to whenever you started backing up? So if you needed to restore, you would go to the local backup, and if that's destroyed you would download the latest synced copy of the backup set and restore to any point in time. Did I understand correctly? Also, can you post the config of your backup and sync tasks?

  10. Dale

    Just so those of you with non-Intel systems know: if you install CrashPlan on your PC or Mac for the individual user instead of all users, you can also back up network locations, including mapped network drives on the Synology. The downside is that your machine will need to be on when backups are running, but it will allow you to back up both your machine and any other important data on the network while only being considered one machine, which matters depending on which CrashPlan subscription you use.

    Reply
  11. Hal Sandick

    Migrated from the non-Intel DS213+ to an Intel DS216+II. Synology's migration instructions worked well. Used the same drives, no data loss (AFAIK), and everything went very smoothly.

    Crashplan working again.

    hal

    Reply
  12. Duc

    Has anybody upgraded to the DSM 6.1 beta? The release notes say packages will need re-initialization after upgrading to DSM 6.1 Beta. I wonder if that will break the package.

    Reply
  13. WiFied

    Patters – will there be an update soon for those of us who have donated and who have Intel systems? Thanks.

    Reply
    1. ajwillmer

      I am running CP just fine: patters' CP (4.8.0-0042) install, DS1815+, DSM 6.0.2-8451 Update 3, 6144 MB RAM, Synology Java install (1.8.0_111), JAVACOMMON=${JAVA_HOME}/bin/java, USR_MAX_HEAP=3072M

      Reply
      1. WiFied

        Using Synology Java. CP quit working a couple of weeks ago before DSM Update 3 was released. I’m using the same java heap size as you. Been using this for years. Just reinstalled CP on the Synology (stopped it and restarted it a couple of times). Updated my desktop CP client, just in case that was the problem. But still not working. I’ll update the token and see if that helps.

      2. WiFied

        OK – it's working. Just so that everyone is clear on what I did:
        – Update DSM
        – Uninstall and reinstall CP on the Synology (stop/restart the package a couple of times)
        – Download the latest CP client and install it on the desktop PC (the one I had before was the same version as the new one I downloaded, but the file size was different, so I installed the new one)
        – Update the Authentication Token as outlined above in the CrashPlan Client Installation instructions

        To update the Java heap size:
        – With CrashPlan running on the Synology
        – Log in via PuTTY over SSH, and paste:
        sudo vi /var/packages/CrashPlan/target/syno_package.vars
        – Type i (letter i) to insert text at the beginning of the line
        – Type or paste this exact text, changing the number to the multiple of 1024 you want as your Java heap size:
        USR_MAX_HEAP=1024M
        – Press the Enter or Return key
        – Press the ESC key to get back into command mode
        – Type :wq (colon w q) to write and quit
        – Stop and start CrashPlan on the Synology
        – Open the CrashPlan client on the Windows computer
        – Double-click the CrashPlan logo in the top right corner to open the client's command area
        – Type java mx to confirm the change to the Java heap size
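
        For reference, the same edit can be made without vi. A minimal sketch, assuming the same syno_package.vars path and that USR_MAX_HEAP is not already set in that file (stop CrashPlan first, then start it again afterwards):

        # append the heap setting to the package vars file (sudo tee handles the root-owned file)
        echo 'USR_MAX_HEAP=1024M' | sudo tee -a /var/packages/CrashPlan/target/syno_package.vars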

        Thanks for your help, ajwillmer!

  14. Brian Fox

    After the last update I had to disable inbound backups, as the update messed with the archive locations and all my remotes were starting over. Now when I click "attach archive", the window never loads the folders; it just sits there (I left it for hours). I'm using a dedicated Java, not the global one. Has anyone else seen this?

    Reply
  15. Greg

    Hello Patters,
    My CrashPlan is still not working since the last update. I was waiting for your update, but as I don't see it coming, I was wondering if you plan to release one? Otherwise, how can we make it work again and be sure that we won't have to re-upload everything to CrashPlan's servers?
    Thanks

    Reply
  16. Greg

    Actually, I just uninstalled and re-installed CrashPlan on the NAS.
    It's working now; I should have done this much earlier…

    One question though: do I need to block auto-update or is this going to be fine, and what's the point of the Java heap size trick that WiFied talked about before?

    Thanks.

    Reply
  17. perry

    Hi,
    My CP stopped working on Oct. 30.
    I just removed Java 7 and installed Java 8 SE Embedded and restarted CP, but this did not help.
    I cannot upgrade CP to v.4.8.0 because it does not show up in your repository.
    Your other packages are showing, but not CP.
    Any idea why?

    Reply
    1. George

      If you are using a non-Intel Synology model, CP no longer runs on that platform because of changes that CP has made in 4.8. As a result, CP does not show in community packages. Patters has an archive of older versions, so you can download 4.7 if you wish. The archive location is referenced in earlier posts.

      Reply
  18. Eamon

    I'm one of the unfortunates with a non-Intel Synology. I installed the older version on my DS213 and all seemed fine at first; however, CrashPlan automatically updates to v4.8 after a short while. Does anybody know how to block CrashPlan from upgrading beyond 4.7, both on the Synology and on the Windows client that I use to view activity?

    Thanks!

    Reply
    1. Dimi

      Can't find the thread from my phone, but search here and you will find it.

      In general what you need to do is rename the existing update folder to, say, update.tmp, create a new one called update, then do chmod 444 update so CP won't be able to use it, and hence the update will be blocked (roughly as sketched below).
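
      A minimal sketch of those steps, assuming the folder lives under /var/packages/CrashPlan/target (an assumption based on the other paths in this thread) and that the package is stopped first:

      cd /var/packages/CrashPlan/target
      mv update update.tmp     # keep the original folder out of the way
      mkdir update
      chmod 444 update         # CP can no longer write into it, so the update is blocked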

      Reply
