CrashPlan packages for Synology NAS

UPDATE – CrashPlan For Home (green branding) was retired by Code 42 Software on 22/08/2017. See migration notes below to find out how to transfer to CrashPlan for Small Business on Synology at the special discounted rate.

CrashPlan is a popular online backup solution that supports continuous syncing. With it, your NAS becomes even more resilient, particularly against the threat of ransomware.

There are now only two product versions:

  • Small Business: CrashPlan PRO (blue branding). Unlimited cloud backup subscription, $10 per device per month. Reporting via Admin Console. No peer-to-peer backups.
  • Enterprise: CrashPlan PROe (black branding). Cloud backup subscription typically billed by storage usage, also available from third parties.

The instructions and notes on this page apply to both versions of the Synology package.

CrashPlanPRO-Windows

CrashPlan is a Java application which can be difficult to install on a NAS. Way back in January 2012 I decided to simplify it into a Synology package, since I had already created several others. It has been through many versions since that time, as the changelog below shows. Although it used to work on Synology products with ARM and PowerPC CPUs, it unfortunately became Intel-only in October 2016 due to Code 42 Software adding a reliance on some proprietary libraries.

Licence compliance is another challenge – Code 42’s EULA prohibits redistribution. I had to make the Synology package use the regular CrashPlan for Linux download (after the end user agrees to the Code 42 EULA). I then had to write my own script to extract this archive and mimic the Code 42 installer behaviour, but without the interactive prompts of the original.

 

Synology Package Installation

  • In Synology DSM’s Package Center, click Settings and add my package repository:
    Add Package Repository
  • The repository automatically pushes its certificate to the NAS; this certificate is used to validate package integrity. Set the Trust Level to Synology Inc. and trusted publishers:
    Trust Level
  • Now browse the Community section in Package Center to install CrashPlan:
    Community-packages
    The repository only displays packages which are compatible with your specific model of NAS. If you don’t see CrashPlan in the list, then either your NAS model or your DSM version is not supported at this time. DSM 5.0 is the minimum supported version for this package, and an Intel CPU is required.
  • Since CrashPlan is a Java application, it needs a Java Runtime Environment (JRE) to function. I recommend selecting the option to have the package install a dedicated Java 8 runtime. For licensing reasons I cannot include Java with this package, so you will need to agree to the licence terms and download it yourself from Oracle’s website. The package expects to find this .tar.gz file in a shared folder called ‘public’. If you go ahead and try to install the package without it, the error message will indicate precisely which Java file you need for your system type, and it will provide a TinyURL link to the appropriate Oracle download page.
  • To install CrashPlan PRO you will first need to log into the Admin Console and download the Linux App from the App Download section and also place this in the ‘public’ shared folder on your NAS.
  • If you have a multi-bay NAS, use the Shared Folder control panel to create the shared folder called public (it must be all lower case). On single bay models this is created by default. Assign it with Read/Write privileges for everyone.
  • If you have trouble getting the Java or CrashPlan PRO app files recognised by this package, try downloading them with Firefox. It seems to be the only web browser that doesn’t try to uncompress the files or rename them without warning. I also suggest that you leave the Java file in the public folder once you have installed the package, so that you won’t need to fetch it again when installing future updates to the CrashPlan package.
  • CrashPlan is installed in headless mode – backup engine only. It is configured by a desktop client, but operates independently of it.
  • The first time you start the CrashPlan package you will need to stop it and restart it before you can connect the client. This is because a config file that is only created on first run needs to be edited by one of my scripts. The engine is then configured to listen on all interfaces on the default port 4243.
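Before installing, you can sanity-check a downloaded archive from an SSH session. This is a minimal sketch, not part of the package; the path is an example and assumes the ‘public’ share lives on volume 1. A valid download passes gzip’s integrity test, while a file that a browser has silently decompressed (or renamed) will fail it:

```shell
# check_gz: warn if a browser has silently decompressed a .tar.gz download
check_gz() {
  if gzip -t "$1" 2>/dev/null; then
    echo "OK: $1 is still gzip-compressed"
  else
    echo "WARNING: $1 is missing or not gzip data - re-download it with Firefox"
  fi
}

# example path - adjust to wherever your 'public' share is mounted
check_gz /volume1/public/jre-8u151-linux-x64.tar.gz
```

If the check warns, re-download the file with Firefox rather than renaming the mangled copy, since the contents may already have been altered.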
 

CrashPlan Client Installation

  • Once the CrashPlan engine is running on the NAS, you can manage it by installing CrashPlan on another computer and configuring it to connect to the NAS instance of the CrashPlan engine.
  • Make sure that you install the version of the CrashPlan client that matches the version running on the NAS. If the NAS version gets upgraded later, you will need to update your client computer too.
  • The Linux CrashPlan PRO client must be downloaded from the Admin Console and placed in the ‘public’ folder on your NAS in order to successfully install the Synology package.
  • By default the client is configured to connect to the CrashPlan engine running on the local computer. To retrieve the connection details of the NAS engine, run this command on your NAS from an SSH session:
    echo `cat /var/lib/crashplan/.ui_info`
    Note those are backticks not quotes. This will give you a port number (4243), followed by an authentication token, followed by the IP binding (0.0.0.0 means the server is listening for connections on all interfaces) e.g.:
    4243,9ac9b642-ba26-4578-b705-124c6efc920b,0.0.0.0
    port,--------------token-----------------,binding

    Copy this token and use it to replace the token in the equivalent config file on the computer that will run the CrashPlan client – located here:
    C:\ProgramData\CrashPlan\.ui_info (Windows)
    “/Library/Application Support/CrashPlan/.ui_info” (Mac OS X installed for all users)
    “~/Library/Application Support/CrashPlan/.ui_info” (Mac OS X installed for single user)
    /var/lib/crashplan/.ui_info (Linux)
    You will not be able to connect the client unless the client token matches the NAS token. On the client you also need to amend the IP address value after the token to match the Synology NAS IP address.
    So, using the example above and assuming the Synology NAS has the IP 192.168.1.100, your computer’s CrashPlan client config file would be edited to:
    4243,9ac9b642-ba26-4578-b705-124c6efc920b,192.168.1.100
    If it still won’t connect, check that the ServicePort value is set to 4243 in the following files:
    C:\ProgramData\CrashPlan\conf\ui_(username).properties (Windows)
    “/Library/Application Support/CrashPlan/ui.properties” (Mac OS X installed for all users)
    “~/Library/Application Support/CrashPlan/ui.properties” (Mac OS X installed for single user)
    /usr/local/crashplan/conf/ui.properties (Linux)
    /var/lib/crashplan/.ui_info (Synology) – this value does change spontaneously if there’s a port conflict, e.g. if you started two versions of the package concurrently (CrashPlan and CrashPlan PRO)
  • As a result of the nightmarish complexity of recent product changes, Code42 has now published a support article with more detail on running headless systems, including config file locations on all supported operating systems, and for ‘all users’ versus single-user installs.
  • You should disable the CrashPlan service on your computer if you intend only to use the client. In Windows, open the Services section in Computer Management and stop the CrashPlan Backup Service. In the service Properties set the Startup Type to Manual. You can also disable the CrashPlan System Tray notification application by removing it from Task Manager > More Details > Start-up Tab (Windows 8/Windows 10) or the All Users Startup Start Menu folder (Windows 7).
    To accomplish the same on Mac OS X, run the following commands one by one:

    sudo launchctl unload /Library/LaunchDaemons/com.crashplan.engine.plist
    sudo mv /Library/LaunchDaemons/com.crashplan.engine.plist /Library/LaunchDaemons/com.crashplan.engine.plist.bak

    The CrashPlan menu bar application can be disabled in System Preferences > Users & Groups > Current User > Login Items
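The .ui_info edit described above boils down to replacing the final binding field with the NAS IP while keeping the port and token intact. A minimal sketch, working on a hypothetical local copy of the file (on a real Linux client the file would be /var/lib/crashplan/.ui_info, and the token would be the one read from the NAS):

```shell
# hypothetical local copy of the client's .ui_info, for illustration only
UI_INFO="./ui_info.example"
NAS_IP="192.168.1.100"

# format is port,token,binding - exactly as read from the NAS
echo "4243,9ac9b642-ba26-4578-b705-124c6efc920b,0.0.0.0" > "${UI_INFO}"

# keep the port and token (the first two comma-separated fields),
# replace only the binding field with the NAS IP address
sed -i -r "s/^([^,]+,[^,]+,).*$/\1${NAS_IP}/" "${UI_INFO}"
cat "${UI_INFO}"
```

The same substitution can of course be done in a text editor; the point is that only the third field changes.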

 

Migration from CrashPlan For Home to CrashPlan For Small Business (CrashPlan PRO)

  • Leave the regular green branded CrashPlan 4.8.3 Synology package installed.
  • Go through the online migration using the link in the email notification you received from Code 42 on 22/08/2017. This seems to trigger the CrashPlan client to begin an update to 4.9, which will fail. It will also migrate your account onto a CrashPlan PRO server. The web page is likely to stall on the Migrating step, but this doesn’t matter. The process is meant to take you to the store, but it seems to be quite flaky. If you see the store page with a $0.00 amount in the basket, it has correctly referred you for the introductory offer. The $9.99 price thereafter shown on that screen is apparently a mistake; the correct price of $2.50 is, I think, shown on a later screen in the process. Enter your credit card details and check out if you can. If not, continue.
  • Log into the CrashPlan PRO Admin Console as per these instructions, and download the CrashPlan PRO 4.9 client for Linux, and the 4.9 client for your remote console computer. Ignore the red message in the bottom left of the Admin Console about registering, and do not sign up for the free trial. Preferably use Firefox for the Linux version download – most of the other web browsers will try to unpack the .tgz archive, which you do not want to happen.
  • Configure the CrashPlan PRO 4.9 client on your computer to connect to your Syno as per the usual instructions on this blog post.
  • Put the downloaded Linux CrashPlan PRO 4.9 client .tgz file in the ‘public’ shared folder on your NAS. The package will no longer download this automatically as it did in previous versions.
  • From the Community section of DSM Package Center, install the CrashPlan PRO 4.9 package concurrently with your existing CrashPlan 4.8.3 Syno package.
  • This will stop the CrashPlan package and automatically import its configuration. Notice that it will also back up your old CrashPlan .identity file and leave it in the ‘public’ shared folder, just in case something goes wrong.
  • Start the CrashPlan PRO Synology package, and connect your CrashPlan PRO console from your computer.
  • You should see your protected folders as usual. At first mine reported something like “insufficient device licences”, but the next time I started up it changed to “subscription expired”.
  • Uninstall the CrashPlan 4.8.3 Synology package; it is no longer required.
  • At this point, if the store referral didn’t work in the second step, you need to sign into the Admin Console. While signed in, navigate to this link, which I was given by Code 42 support. If it works, you should see a store page with some blue font text and a $0.00 basket value. If it doesn’t work, you will get bounced to the Consumer Next Steps webpage: “Important Changes to CrashPlan for Home” – the one with the video of the CEO explaining the situation. I had to do this a few times before it worked. Once the store referral link worked and I had confirmed my payment details, my CrashPlan PRO client immediately started working. Enjoy!
 

Notes

  • The package uses the intact CrashPlan installer directly from Code 42 Software, following acceptance of its EULA, so it complies with the prohibition on redistribution.
  • The engine daemon script checks the amount of system RAM and scales the Java heap size appropriately (up to the default maximum of 512MB). This can be overridden in a persistent way by editing /var/packages/CrashPlan/target/syno_package.vars, which you may need to do if you are backing up large backup sets. If you are considering buying a NAS purely to use CrashPlan and intend to back up more than a few hundred GB, then I strongly advise buying one of the models with upgradeable RAM. Memory is very limited on the cheaper models. I have found that a 512MB heap was insufficient to back up more than 2TB of files on a Windows server, and that was many years ago. It kept restarting the backup engine every few minutes until I increased the heap to 1024MB. Many users of the package have found that they have to increase the heap size or CrashPlan will halt its activity. This can be mitigated by dividing your backup into several smaller backup sets which are scheduled to be protected at different times. Note that from package version 0041, using the dedicated JRE on a 64bit Intel NAS will allow a heap size greater than 4GB, since the JRE is 64bit (requires DSM 6.0 in most cases).
  • If you need to manage CrashPlan from a remote location, I suggest you do so using SSH tunnelling as per this support document.
  • The package supports upgrading to future versions while preserving the machine identity, logs, login details, and cache. Upgrades can now take place without requiring a login from the client afterwards.
  • If you remove the package completely and re-install it later, you can re-attach to previous backups. When you log in to the Desktop Client with your existing account after a re-install, you can select “adopt computer” to merge the records, and preserve your existing backups. I haven’t tested whether this also re-attaches links to friends’ CrashPlan computers and backup sets, though the latter does seem possible in the Friends section of the GUI. It’s probably a good idea to test that this survives a package reinstall before you start relying on it. Sometimes, particularly with CrashPlan PRO I think, the adopt option is not offered. In this case you can log into CrashPlan Central and retrieve your computer’s GUID. On the CrashPlan client, double-click on the logo in the top right and you’ll enter a command line mode. You can use the GUID command to change the system’s GUID to the one you just retrieved from your account.
  • The log which is displayed in the package’s Log tab is actually the activity history. If you are trying to troubleshoot an issue you will need to use an SSH session to inspect these log files:
    /var/packages/CrashPlan/target/log/engine_output.log
    /var/packages/CrashPlan/target/log/engine_error.log
    /var/packages/CrashPlan/target/log/app.log
  • When CrashPlan downloads and attempts to run an automatic update, the script will most likely fail and stop the package. This is typically caused by syntax differences with the Synology versions of certain Linux shell commands (like rm, mv, or ps). The startup script will attempt to apply the published upgrade the next time the package is started.
  • Although CrashPlan’s activity can be scheduled within the application, in order to save RAM some users may wish to restrict running the CrashPlan engine to specific times of day using the Task Scheduler in DSM Control Panel:
    Schedule service start
    Note that regardless of real-time backup, by default CrashPlan will scan the whole backup selection for changes at 3:00am. Include this time within your Task Scheduler time window or else CrashPlan will not capture file changes which occurred while it was inactive:
    Schedule Service Start

  • If you decide to sign up for one of CrashPlan’s paid backup services as a result of my work on this, please consider donating using the PayPal button on the right of this page.
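The persistent heap override mentioned in the Notes amounts to uncommenting the USR_MAX_HEAP line in syno_package.vars. A minimal sketch, shown here against a local stand-in file (on the NAS the real path is /var/packages/CrashPlan/target/syno_package.vars, and the engine picks up the new value the next time the package is started):

```shell
# local stand-in for /var/packages/CrashPlan/target/syno_package.vars
VARS="./syno_package.vars"
printf '#USR_MAX_HEAP=1024M\n' > "${VARS}"

# enable the override and raise the Java max heap to 1536 MB;
# restart the package afterwards for the change to take effect
sed -i "s/^#USR_MAX_HEAP=.*/USR_MAX_HEAP=1536M/" "${VARS}"
cat "${VARS}"
```

The 1536M figure is only an example; choose a value that fits your installed RAM and the size of your backup selection.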
 

Package scripts

For information, here are the package scripts so you can see what the package is going to do. You can get more information about how packages work by reading the Synology 3rd Party Developer Guide.

installer.sh

#!/bin/sh

#--------CRASHPLAN installer script
#--------package maintained at pcloadletter.co.uk


DOWNLOAD_PATH="http://download2.code42.com/installs/linux/install/${SYNOPKG_PKGNAME}"
CP_EXTRACTED_FOLDER="crashplan-install"
OLD_JNA_NEEDED="false"
[ "${SYNOPKG_PKGNAME}" == "CrashPlan" ] && DOWNLOAD_FILE="CrashPlan_4.8.3_Linux.tgz"
[ "${SYNOPKG_PKGNAME}" == "CrashPlanPRO" ] && DOWNLOAD_FILE="CrashPlanPRO_4.*_Linux.tgz"
if [ "${SYNOPKG_PKGNAME}" == "CrashPlanPROe" ]; then
  CP_EXTRACTED_FOLDER="${SYNOPKG_PKGNAME}-install"
  OLD_JNA_NEEDED="true"
  [ "${WIZARD_VER_483}" == "true" ] && { CPPROE_VER="4.8.3"; CP_EXTRACTED_FOLDER="crashplan-install"; OLD_JNA_NEEDED="false"; }
  [ "${WIZARD_VER_480}" == "true" ] && { CPPROE_VER="4.8.0"; CP_EXTRACTED_FOLDER="crashplan-install"; OLD_JNA_NEEDED="false"; }
  [ "${WIZARD_VER_470}" == "true" ] && { CPPROE_VER="4.7.0"; CP_EXTRACTED_FOLDER="crashplan-install"; OLD_JNA_NEEDED="false"; }
  [ "${WIZARD_VER_460}" == "true" ] && { CPPROE_VER="4.6.0"; CP_EXTRACTED_FOLDER="crashplan-install"; OLD_JNA_NEEDED="false"; }
  [ "${WIZARD_VER_452}" == "true" ] && { CPPROE_VER="4.5.2"; CP_EXTRACTED_FOLDER="crashplan-install"; OLD_JNA_NEEDED="false"; }
  [ "${WIZARD_VER_450}" == "true" ] && { CPPROE_VER="4.5.0"; CP_EXTRACTED_FOLDER="crashplan-install"; OLD_JNA_NEEDED="false"; }
  [ "${WIZARD_VER_441}" == "true" ] && { CPPROE_VER="4.4.1"; CP_EXTRACTED_FOLDER="crashplan-install"; OLD_JNA_NEEDED="false"; }
  [ "${WIZARD_VER_430}" == "true" ] && CPPROE_VER="4.3.0"
  [ "${WIZARD_VER_420}" == "true" ] && CPPROE_VER="4.2.0"
  [ "${WIZARD_VER_370}" == "true" ] && CPPROE_VER="3.7.0"
  [ "${WIZARD_VER_364}" == "true" ] && CPPROE_VER="3.6.4"
  [ "${WIZARD_VER_363}" == "true" ] && CPPROE_VER="3.6.3"
  [ "${WIZARD_VER_3614}" == "true" ] && CPPROE_VER="3.6.1.4"
  [ "${WIZARD_VER_353}" == "true" ] && CPPROE_VER="3.5.3"
  [ "${WIZARD_VER_341}" == "true" ] && CPPROE_VER="3.4.1"
  [ "${WIZARD_VER_33}" == "true" ] && CPPROE_VER="3.3"
  DOWNLOAD_FILE="CrashPlanPROe_${CPPROE_VER}_Linux.tgz"
fi
DOWNLOAD_URL="${DOWNLOAD_PATH}/${DOWNLOAD_FILE}"
CPI_FILE="${SYNOPKG_PKGNAME}_*.cpi"
OPTDIR="${SYNOPKG_PKGDEST}"
VARS_FILE="${OPTDIR}/install.vars"
SYNO_CPU_ARCH="`uname -m`"
[ "${SYNO_CPU_ARCH}" == "x86_64" ] && SYNO_CPU_ARCH="i686"
[ "${SYNO_CPU_ARCH}" == "armv5tel" ] && SYNO_CPU_ARCH="armel"
[ "${SYNOPKG_DSM_ARCH}" == "armada375" ] && SYNO_CPU_ARCH="armv7l"
[ "${SYNOPKG_DSM_ARCH}" == "armada38x" ] && SYNO_CPU_ARCH="armhf"
[ "${SYNOPKG_DSM_ARCH}" == "comcerto2k" ] && SYNO_CPU_ARCH="armhf"
[ "${SYNOPKG_DSM_ARCH}" == "alpine" ] && SYNO_CPU_ARCH="armhf"
[ "${SYNOPKG_DSM_ARCH}" == "alpine4k" ] && SYNO_CPU_ARCH="armhf"
[ "${SYNOPKG_DSM_ARCH}" == "monaco" ] && SYNO_CPU_ARCH="armhf"
[ "${SYNOPKG_DSM_ARCH}" == "rtd1296" ] && SYNO_CPU_ARCH="armhf"
NATIVE_BINS_URL="http://packages.pcloadletter.co.uk/downloads/crashplan-native-${SYNO_CPU_ARCH}.tar.xz"   
NATIVE_BINS_FILE="`echo ${NATIVE_BINS_URL} | sed -r "s%^.*/(.*)%\1%"`"
OLD_JNA_URL="http://packages.pcloadletter.co.uk/downloads/crashplan-native-old-${SYNO_CPU_ARCH}.tar.xz"   
OLD_JNA_FILE="`echo ${OLD_JNA_URL} | sed -r "s%^.*/(.*)%\1%"`"
INSTALL_FILES="${DOWNLOAD_URL} ${NATIVE_BINS_URL}"
[ "${OLD_JNA_NEEDED}" == "true" ] && INSTALL_FILES="${INSTALL_FILES} ${OLD_JNA_URL}"
TEMP_FOLDER="`find / -maxdepth 2 -path '/volume?/@tmp' | head -n 1`"
#the Manifest folder is where friends' backup data is stored
#we set it outside the app folder so it persists after a package uninstall
MANIFEST_FOLDER="/`echo $TEMP_FOLDER | cut -f2 -d'/'`/crashplan"
LOG_FILE="${SYNOPKG_PKGDEST}/log/history.log.0"
UPGRADE_FILES="syno_package.vars conf/my.service.xml conf/service.login conf/service.model"
UPGRADE_FOLDERS="log cache"
PUBLIC_FOLDER="`synoshare --get public | sed -r "/Path/!d;s/^.*\[(.*)\].*$/\1/"`"
#dedicated JRE section
if [ "${WIZARD_JRE_CP}" == "true" ]; then
  DOWNLOAD_URL="http://tinyurl.com/javaembed"
  EXTRACTED_FOLDER="ejdk1.8.0_151"
  #detect systems capable of running 64bit JRE which can address more than 4GB of RAM
  [ "${SYNOPKG_DSM_ARCH}" == "x64" ] && SYNO_CPU_ARCH="x64"
  [ "`uname -m`" == "x86_64" ] && [ ${SYNOPKG_DSM_VERSION_MAJOR} -ge 6 ] && SYNO_CPU_ARCH="x64"
  if [ "${SYNO_CPU_ARCH}" == "armel" ]; then
    JAVA_BINARY="ejdk-8u151-linux-arm-sflt.tar.gz"
    JAVA_BUILD="ARMv5/ARMv6/ARMv7 Linux - SoftFP ABI, Little Endian 2"
  elif [ "${SYNO_CPU_ARCH}" == "armv7l" ]; then
    JAVA_BINARY="ejdk-8u151-linux-arm-sflt.tar.gz"
    JAVA_BUILD="ARMv5/ARMv6/ARMv7 Linux - SoftFP ABI, Little Endian 2"
  elif [ "${SYNO_CPU_ARCH}" == "armhf" ]; then
    JAVA_BINARY="ejdk-8u151-linux-armv6-vfp-hflt.tar.gz"
    JAVA_BUILD="ARMv6/ARMv7 Linux - VFP, HardFP ABI, Little Endian 1"
  elif [ "${SYNO_CPU_ARCH}" == "ppc" ]; then
    #Oracle have discontinued Java 8 for PowerPC after update 6
    JAVA_BINARY="ejdk-8u6-fcs-b23-linux-ppc-e500v2-12_jun_2014.tar.gz"
    JAVA_BUILD="Power Architecture Linux - Headless - e500v2 with double-precision SPE Floating Point Unit"
    EXTRACTED_FOLDER="ejdk1.8.0_06"
    DOWNLOAD_URL="http://tinyurl.com/java8ppc"
  elif [ "${SYNO_CPU_ARCH}" == "i686" ]; then
    JAVA_BINARY="ejdk-8u151-linux-i586.tar.gz"
    JAVA_BUILD="x86 Linux Small Footprint - Headless"
  elif [ "${SYNO_CPU_ARCH}" == "x64" ]; then
    JAVA_BINARY="jre-8u151-linux-x64.tar.gz"
    JAVA_BUILD="Linux x64"
    EXTRACTED_FOLDER="jre1.8.0_151"
    DOWNLOAD_URL="http://tinyurl.com/java8x64"
  fi
fi
JAVA_BINARY=`echo ${JAVA_BINARY} | cut -f1 -d'.'`
source /etc/profile


pre_checks ()
{
  #These checks are called from preinst and from preupgrade functions to prevent failures resulting in a partially upgraded package
  if [ "${WIZARD_JRE_CP}" == "true" ]; then
    synoshare -get public > /dev/null || (
      echo "A shared folder called 'public' could not be found - note this name is case-sensitive. " >> $SYNOPKG_TEMP_LOGFILE
      echo "Please create this using the Shared Folder DSM Control Panel and try again." >> $SYNOPKG_TEMP_LOGFILE
      exit 1
    )

    JAVA_BINARY_FOUND=
    [ -f ${PUBLIC_FOLDER}/${JAVA_BINARY}.tar.gz ] && JAVA_BINARY_FOUND=true
    [ -f ${PUBLIC_FOLDER}/${JAVA_BINARY}.tar ] && JAVA_BINARY_FOUND=true
    [ -f ${PUBLIC_FOLDER}/${JAVA_BINARY}.tar.tar ] && JAVA_BINARY_FOUND=true
    [ -f ${PUBLIC_FOLDER}/${JAVA_BINARY}.gz ] && JAVA_BINARY_FOUND=true
     
    if [ -z ${JAVA_BINARY_FOUND} ]; then
      echo "Java binary bundle not found. " >> $SYNOPKG_TEMP_LOGFILE
      echo "I was expecting the file ${PUBLIC_FOLDER}/${JAVA_BINARY}.tar.gz. " >> $SYNOPKG_TEMP_LOGFILE
      echo "Please agree to the Oracle licence at ${DOWNLOAD_URL}, then download the '${JAVA_BUILD}' package" >> $SYNOPKG_TEMP_LOGFILE
      echo "and place it in the 'public' shared folder on your NAS. This download cannot be automated even if " >> $SYNOPKG_TEMP_LOGFILE
      echo "displaying a package EULA could potentially cover the legal aspect, because files hosted on Oracle's " >> $SYNOPKG_TEMP_LOGFILE
      echo "server are protected by a session cookie requiring a JavaScript enabled browser." >> $SYNOPKG_TEMP_LOGFILE
      exit 1
    fi
  else
    if [ -z ${JAVA_HOME} ]; then
      echo "Java is not installed or not properly configured. JAVA_HOME is not defined. " >> $SYNOPKG_TEMP_LOGFILE
      echo "Download and install the Java Synology package from http://wp.me/pVshC-z5" >> $SYNOPKG_TEMP_LOGFILE
      exit 1
    fi

    if [ ! -f ${JAVA_HOME}/bin/java ]; then
      echo "Java is not installed or not properly configured. The Java binary could not be located. " >> $SYNOPKG_TEMP_LOGFILE
      echo "Download and install the Java Synology package from http://wp.me/pVshC-z5" >> $SYNOPKG_TEMP_LOGFILE
      exit 1
    fi

    if [ "${WIZARD_JRE_SYS}" == "true" ]; then
      JAVA_VER=`java -version 2>&1 | sed -r "/^.* version/!d;s/^.* version \"[0-9]\.([0-9]).*$/\1/"`
      if [ ${JAVA_VER} -lt 8 ]; then
        echo "This version of CrashPlan requires Java 8 or newer. Please update your Java package. "
        exit 1
      fi
    fi
  fi
}


preinst ()
{
  pre_checks
  cd ${TEMP_FOLDER}
  for WGET_URL in ${INSTALL_FILES}
  do
    WGET_FILENAME="`echo ${WGET_URL} | sed -r "s%^.*/(.*)%\1%"`"
    [ -f ${TEMP_FOLDER}/${WGET_FILENAME} ] && rm ${TEMP_FOLDER}/${WGET_FILENAME}
    wget ${WGET_URL}
    if [[ $? != 0 ]]; then
      if [ -d ${PUBLIC_FOLDER} ] && [ -f ${PUBLIC_FOLDER}/${WGET_FILENAME} ]; then
        cp ${PUBLIC_FOLDER}/${WGET_FILENAME} ${TEMP_FOLDER}
      else     
        echo "There was a problem downloading ${WGET_FILENAME} from the official download link, " >> $SYNOPKG_TEMP_LOGFILE
        echo "which was \"${WGET_URL}\" " >> $SYNOPKG_TEMP_LOGFILE
        echo "Alternatively, you may download this file manually and place it in the 'public' shared folder. " >> $SYNOPKG_TEMP_LOGFILE
        exit 1
      fi
    fi
  done
 
  exit 0
}


postinst ()
{
  if [ "${WIZARD_JRE_CP}" == "true" ]; then
    #extract Java (Web browsers love to interfere with .tar.gz files)
    cd ${PUBLIC_FOLDER}
    if [ -f ${JAVA_BINARY}.tar.gz ]; then
      #Firefox seems to be the only browser that leaves it alone
      tar xzf ${JAVA_BINARY}.tar.gz
    elif [ -f ${JAVA_BINARY}.gz ]; then
      #Chrome
      tar xzf ${JAVA_BINARY}.gz
    elif [ -f ${JAVA_BINARY}.tar ]; then
      #Safari
      tar xf ${JAVA_BINARY}.tar
    elif [ -f ${JAVA_BINARY}.tar.tar ]; then
      #Internet Explorer
      tar xzf ${JAVA_BINARY}.tar.tar
    fi
    mv ${EXTRACTED_FOLDER} ${SYNOPKG_PKGDEST}/jre-syno
    JRE_PATH="`find ${OPTDIR}/jre-syno/ -name jre`"
    [ -z ${JRE_PATH} ] && JRE_PATH=${OPTDIR}/jre-syno
    #change owner of folder tree
    chown -R root:root ${SYNOPKG_PKGDEST}
  fi
   
  #extract CPU-specific additional binaries
  mkdir ${SYNOPKG_PKGDEST}/bin
  cd ${SYNOPKG_PKGDEST}/bin
  tar xJf ${TEMP_FOLDER}/${NATIVE_BINS_FILE} && rm ${TEMP_FOLDER}/${NATIVE_BINS_FILE}
  [ "${OLD_JNA_NEEDED}" == "true" ] && tar xJf ${TEMP_FOLDER}/${OLD_JNA_FILE} && rm ${TEMP_FOLDER}/${OLD_JNA_FILE}

  #extract main archive
  cd ${TEMP_FOLDER}
  tar xzf ${TEMP_FOLDER}/${DOWNLOAD_FILE} && rm ${TEMP_FOLDER}/${DOWNLOAD_FILE} 
  
  #extract cpio archive
  cd ${SYNOPKG_PKGDEST}
  cat "${TEMP_FOLDER}/${CP_EXTRACTED_FOLDER}"/${CPI_FILE} | gzip -d -c - | ${SYNOPKG_PKGDEST}/bin/cpio -i --no-preserve-owner
  
  echo "#uncomment to expand Java max heap size beyond prescribed value (will survive upgrades)" > ${SYNOPKG_PKGDEST}/syno_package.vars
  echo "#you probably only want more than the recommended 1024M if you're backing up extremely large volumes of files" >> ${SYNOPKG_PKGDEST}/syno_package.vars
  echo "#USR_MAX_HEAP=1024M" >> ${SYNOPKG_PKGDEST}/syno_package.vars
  echo >> ${SYNOPKG_PKGDEST}/syno_package.vars

  cp ${TEMP_FOLDER}/${CP_EXTRACTED_FOLDER}/scripts/CrashPlanEngine ${OPTDIR}/bin
  cp ${TEMP_FOLDER}/${CP_EXTRACTED_FOLDER}/scripts/run.conf ${OPTDIR}/bin
  mkdir -p ${MANIFEST_FOLDER}/backupArchives    
  
  #save install variables which Crashplan expects its own installer script to create
  echo TARGETDIR=${SYNOPKG_PKGDEST} > ${VARS_FILE}
  echo BINSDIR=/bin >> ${VARS_FILE}
  echo MANIFESTDIR=${MANIFEST_FOLDER}/backupArchives >> ${VARS_FILE}
  #leave these ones out which should help upgrades from Code42 to work (based on examining an upgrade script)
  #echo INITDIR=/etc/init.d >> ${VARS_FILE}
  #echo RUNLVLDIR=/usr/syno/etc/rc.d >> ${VARS_FILE}
  echo INSTALLDATE=`date +%Y%m%d` >> ${VARS_FILE}
  [ "${WIZARD_JRE_CP}" == "true" ] && echo JAVACOMMON=${JRE_PATH}/bin/java >> ${VARS_FILE}
  [ "${WIZARD_JRE_SYS}" == "true" ] && echo JAVACOMMON=\${JAVA_HOME}/bin/java >> ${VARS_FILE}
  cat ${TEMP_FOLDER}/${CP_EXTRACTED_FOLDER}/install.defaults >> ${VARS_FILE}
  
  #remove temp files
  rm -r ${TEMP_FOLDER}/${CP_EXTRACTED_FOLDER}
  
  #add firewall config
  /usr/syno/bin/servicetool --install-configure-file --package /var/packages/${SYNOPKG_PKGNAME}/scripts/${SYNOPKG_PKGNAME}.sc > /dev/null
  
  #amend CrashPlanPROe client version
  [ "${SYNOPKG_PKGNAME}" == "CrashPlanPROe" ] && sed -i -r "s/^version=\".*(-.*$)/version=\"${CPPROE_VER}\1/" /var/packages/${SYNOPKG_PKGNAME}/INFO

  #are we transitioning an existing CrashPlan account to CrashPlan For Small Business?
  if [ "${SYNOPKG_PKGNAME}" == "CrashPlanPRO" ]; then
    if [ -e /var/packages/CrashPlan/scripts/start-stop-status ]; then
      /var/packages/CrashPlan/scripts/start-stop-status stop
      cp /var/lib/crashplan/.identity ${PUBLIC_FOLDER}/crashplan-identity.bak
      cp -R /var/packages/CrashPlan/target/conf/ ${OPTDIR}/
    fi  
  fi

  exit 0
}


preuninst ()
{
  `dirname $0`/start-stop-status stop

  exit 0
}


postuninst ()
{
  if [ -f ${SYNOPKG_PKGDEST}/syno_package.vars ]; then
    source ${SYNOPKG_PKGDEST}/syno_package.vars
  fi
  [ -e ${OPTDIR}/lib/libffi.so.5 ] && rm ${OPTDIR}/lib/libffi.so.5

  #delete symlink if it no longer resolves - PowerPC only
  if [ ! -e /lib/libffi.so.5 ]; then
    [ -L /lib/libffi.so.5 ] && rm /lib/libffi.so.5
  fi

  #remove firewall config
  if [ "${SYNOPKG_PKG_STATUS}" == "UNINSTALL" ]; then
    /usr/syno/bin/servicetool --remove-configure-file --package ${SYNOPKG_PKGNAME}.sc > /dev/null
  fi

 exit 0
}


preupgrade ()
{
  `dirname $0`/start-stop-status stop
  pre_checks
  #if identity exists back up config
  if [ -f /var/lib/crashplan/.identity ]; then
    mkdir -p ${SYNOPKG_PKGDEST}/../${SYNOPKG_PKGNAME}_data_mig/conf
    for FILE_TO_MIGRATE in ${UPGRADE_FILES}; do
      if [ -f ${OPTDIR}/${FILE_TO_MIGRATE} ]; then
        cp ${OPTDIR}/${FILE_TO_MIGRATE} ${SYNOPKG_PKGDEST}/../${SYNOPKG_PKGNAME}_data_mig/${FILE_TO_MIGRATE}
      fi
    done
    for FOLDER_TO_MIGRATE in ${UPGRADE_FOLDERS}; do
      if [ -d ${OPTDIR}/${FOLDER_TO_MIGRATE} ]; then
        mv ${OPTDIR}/${FOLDER_TO_MIGRATE} ${SYNOPKG_PKGDEST}/../${SYNOPKG_PKGNAME}_data_mig
      fi
    done
  fi

  exit 0
}


postupgrade ()
{
  #use the migrated identity and config data from the previous version
  if [ -f ${SYNOPKG_PKGDEST}/../${SYNOPKG_PKGNAME}_data_mig/conf/my.service.xml ]; then
    for FILE_TO_MIGRATE in ${UPGRADE_FILES}; do
      if [ -f ${SYNOPKG_PKGDEST}/../${SYNOPKG_PKGNAME}_data_mig/${FILE_TO_MIGRATE} ]; then
        mv ${SYNOPKG_PKGDEST}/../${SYNOPKG_PKGNAME}_data_mig/${FILE_TO_MIGRATE} ${OPTDIR}/${FILE_TO_MIGRATE}
      fi
    done
    for FOLDER_TO_MIGRATE in ${UPGRADE_FOLDERS}; do
    if [ -d ${SYNOPKG_PKGDEST}/../${SYNOPKG_PKGNAME}_data_mig/${FOLDER_TO_MIGRATE} ]; then
      mv ${SYNOPKG_PKGDEST}/../${SYNOPKG_PKGNAME}_data_mig/${FOLDER_TO_MIGRATE} ${OPTDIR}
    fi
    done
    rmdir ${SYNOPKG_PKGDEST}/../${SYNOPKG_PKGNAME}_data_mig/conf
    rmdir ${SYNOPKG_PKGDEST}/../${SYNOPKG_PKGNAME}_data_mig
    
    #make CrashPlan log entry
    TIMESTAMP="`date "+%D %I:%M%p"`"
    echo "I ${TIMESTAMP} Synology Package Center updated ${SYNOPKG_PKGNAME} to version ${SYNOPKG_PKGVER}" >> ${LOG_FILE}
  fi
  
  exit 0
}
 

start-stop-status.sh

#!/bin/sh

#--------CRASHPLAN start-stop-status script
#--------package maintained at pcloadletter.co.uk


TEMP_FOLDER="`find / -maxdepth 2 -path '/volume?/@tmp' | head -n 1`"
MANIFEST_FOLDER="/`echo $TEMP_FOLDER | cut -f2 -d'/'`/crashplan" 
ENGINE_CFG="run.conf"
PKG_FOLDER="`dirname $0 | cut -f1-4 -d'/'`"
DNAME="`dirname $0 | cut -f4 -d'/'`"
OPTDIR="${PKG_FOLDER}/target"
PID_FILE="${OPTDIR}/${DNAME}.pid"
DLOG="${OPTDIR}/log/history.log.0"
CFG_PARAM="SRV_JAVA_OPTS"
JAVA_MIN_HEAP=`grep "^${CFG_PARAM}=" "${OPTDIR}/bin/${ENGINE_CFG}" | sed -r "s/^.*-Xms([0-9]+)[Mm] .*$/\1/"` 
SYNO_CPU_ARCH="`uname -m`"
TIMESTAMP="`date "+%D %I:%M%p"`"
FULL_CP="${OPTDIR}/lib/com.backup42.desktop.jar:${OPTDIR}/lang"
source ${OPTDIR}/install.vars
source /etc/profile
source /root/.profile


start_daemon ()
{
  #check persistent variables from syno_package.vars
  USR_MAX_HEAP=0
  if [ -f ${OPTDIR}/syno_package.vars ]; then
    source ${OPTDIR}/syno_package.vars
  fi
  USR_MAX_HEAP=`echo $USR_MAX_HEAP | sed -e "s/[mM]//"`

  #do we need to restore the identity file - has a DSM upgrade scrubbed /var/lib/crashplan?
  if [ ! -e /var/lib/crashplan ]; then
    mkdir /var/lib/crashplan
    [ -e ${OPTDIR}/conf/var-backup/.identity ] && cp ${OPTDIR}/conf/var-backup/.identity /var/lib/crashplan/
  fi

  #fix up some of the binary paths and fix some command syntax for busybox 
  #moved this to start-stop-status.sh from installer.sh because Code42 push updates and these
  #new scripts will need this treatment too
  find ${OPTDIR}/ -name "*.sh" | while IFS="" read -r FILE_TO_EDIT; do
    if [ -e ${FILE_TO_EDIT} ]; then
      #this list of substitutions will probably need expanding as new CrashPlan updates are released
      sed -i "s%^#!/bin/bash%#!/bin/sh%" "${FILE_TO_EDIT}"
      sed -i -r "s%(^\s*)(/bin/cpio |cpio ) %\1${OPTDIR}/bin/cpio %" "${FILE_TO_EDIT}"
      sed -i -r "s%(^\s*)(/bin/ps|ps) [^w][^\|]*\|%\1/bin/ps w \|%" "${FILE_TO_EDIT}"
      sed -i -r "s%\`ps [^w][^\|]*\|%\`ps w \|%" "${FILE_TO_EDIT}"
      sed -i -r "s%^ps [^w][^\|]*\|%ps w \|%" "${FILE_TO_EDIT}"
      sed -i "s/rm -fv/rm -f/" "${FILE_TO_EDIT}"
      sed -i "s/mv -fv/mv -f/" "${FILE_TO_EDIT}"
    fi
  done

  #use this daemon init script rather than the unreliable Code42 stock one which greps the ps output
  sed -i "s%^ENGINE_SCRIPT=.*$%ENGINE_SCRIPT=$0%" ${OPTDIR}/bin/restartLinux.sh

  #any downloaded upgrade script will usually have failed despite the above changes
  #so ignore the script and explicitly extract the new java code using the chrisnelson.ca method 
  #thanks to Jeff Bingham for tweaks 
  UPGRADE_JAR=`find ${OPTDIR}/upgrade -maxdepth 1 -name "*.jar" | tail -1`
  if [ -n "${UPGRADE_JAR}" ]; then
    rm -f ${OPTDIR}/*.pid > /dev/null 2>&1
 
    #make CrashPlan log entry
    echo "I ${TIMESTAMP} Synology extracting upgrade from ${UPGRADE_JAR}" >> ${DLOG}

    UPGRADE_VER=`echo ${SCRIPT_HOME} | sed -r "s/^.*\/([0-9_]+)\.[0-9]+/\1/"`
    #DSM 6.0 no longer includes unzip, use 7z instead
    unzip -o ${OPTDIR}/upgrade/${UPGRADE_VER}.jar "*.jar" -d ${OPTDIR}/lib/ || 7z e -y ${OPTDIR}/upgrade/${UPGRADE_VER}.jar "*.jar" -o${OPTDIR}/lib/ > /dev/null
    unzip -o ${OPTDIR}/upgrade/${UPGRADE_VER}.jar "lang/*" -d ${OPTDIR} || 7z e -y ${OPTDIR}/upgrade/${UPGRADE_VER}.jar "lang/*" -o${OPTDIR} > /dev/null
    mv ${UPGRADE_JAR} ${TEMP_FOLDER}/ > /dev/null
    exec $0
  fi

  #updates may also overwrite our native binaries
  [ -e ${OPTDIR}/bin/libffi.so.5 ] && cp -f ${OPTDIR}/bin/libffi.so.5 ${OPTDIR}/lib/
  [ -e ${OPTDIR}/bin/libjtux.so ] && cp -f ${OPTDIR}/bin/libjtux.so ${OPTDIR}/
  [ -e ${OPTDIR}/bin/jna-3.2.5.jar ] && cp -f ${OPTDIR}/bin/jna-3.2.5.jar ${OPTDIR}/lib/
  if [ -e ${OPTDIR}/bin/jna.jar ] && [ -e ${OPTDIR}/lib/jna.jar ]; then
    cp -f ${OPTDIR}/bin/jna.jar ${OPTDIR}/lib/
  fi

  #create or repair libffi.so.5 symlink if a DSM upgrade has removed it - PowerPC only
  if [ -e ${OPTDIR}/lib/libffi.so.5 ]; then
    if [ ! -e /lib/libffi.so.5 ]; then
      #if it doesn't exist, but is still a link then it's a broken link and should be deleted first
      [ -L /lib/libffi.so.5 ] && rm /lib/libffi.so.5
      ln -s ${OPTDIR}/lib/libffi.so.5 /lib/libffi.so.5
    fi
  fi

  #set appropriate Java max heap size
  RAM=$((`free | grep Mem: | sed -e "s/^ *Mem: *\([0-9]*\).*$/\1/"`/1024))
  if [ $RAM -le 128 ]; then
    JAVA_MAX_HEAP=80
  elif [ $RAM -le 256 ]; then
    JAVA_MAX_HEAP=192
  elif [ $RAM -le 512 ]; then
    JAVA_MAX_HEAP=384
  elif [ $RAM -le 1024 ]; then
    JAVA_MAX_HEAP=512
  elif [ $RAM -gt 1024 ]; then
    JAVA_MAX_HEAP=1024
  fi
  if [ $USR_MAX_HEAP -gt $JAVA_MAX_HEAP ]; then
    JAVA_MAX_HEAP=${USR_MAX_HEAP}
  fi   
  if [ $JAVA_MAX_HEAP -lt $JAVA_MIN_HEAP ]; then
    #can't have a max heap lower than min heap (ARM low RAM systems)
    JAVA_MAX_HEAP=$JAVA_MIN_HEAP
  fi
  sed -i -r "s/(^${CFG_PARAM}=.*) -Xmx[0-9]+[mM] (.*$)/\1 -Xmx${JAVA_MAX_HEAP}m \2/" "${OPTDIR}/bin/${ENGINE_CFG}"
  
  #disable the use of the x86-optimized external Fast MD5 library if running on ARM and PPC CPUs
  #seems to be the default behaviour now but that may change again
  [ "${SYNO_CPU_ARCH}" = "x86_64" ] && SYNO_CPU_ARCH="i686"
  if [ "${SYNO_CPU_ARCH}" != "i686" ]; then
    grep "^${CFG_PARAM}=.*c42\.native\.md5\.enabled" "${OPTDIR}/bin/${ENGINE_CFG}" > /dev/null \
     || sed -i -r "s/(^${CFG_PARAM}=\".*)\"$/\1 -Dc42.native.md5.enabled=false\"/" "${OPTDIR}/bin/${ENGINE_CFG}"
  fi

  #move the Java temp directory from the default of /tmp
  grep "^${CFG_PARAM}=.*Djava\.io\.tmpdir" "${OPTDIR}/bin/${ENGINE_CFG}" > /dev/null \
   || sed -i -r "s%(^${CFG_PARAM}=\".*)\"$%\1 -Djava.io.tmpdir=${TEMP_FOLDER}\"%" "${OPTDIR}/bin/${ENGINE_CFG}"

  #now edit the XML config file, which only exists after first run
  if [ -f ${OPTDIR}/conf/my.service.xml ]; then

    #allow direct connections from CrashPlan Desktop client on remote systems
    #you must edit the value of serviceHost in conf/ui.properties on the client you connect with
    #users report that this value is sometimes reset so now it's set every service startup 
    sed -i "s/<serviceHost>127\.0\.0\.1<\/serviceHost>/<serviceHost>0\.0\.0\.0<\/serviceHost>/" "${OPTDIR}/conf/my.service.xml"
    #default changed in CrashPlan 4.3
    sed -i "s/<serviceHost>localhost<\/serviceHost>/<serviceHost>0\.0\.0\.0<\/serviceHost>/" "${OPTDIR}/conf/my.service.xml"
    #since CrashPlan 4.4 another config file to allow remote console connections
    sed -i "s/127\.0\.0\.1/0\.0\.0\.0/" /var/lib/crashplan/.ui_info
     
    #this change is made only once in case you want to customize the friends' backup location
    if [ "${MANIFEST_PATH_SET}" != "True" ]; then

      #keep friends' backup data outside the application folder to make accidental deletion less likely 
      sed -i "s%<manifestPath>.*</manifestPath>%<manifestPath>${MANIFEST_FOLDER}/backupArchives/</manifestPath>%" "${OPTDIR}/conf/my.service.xml"
      echo "MANIFEST_PATH_SET=True" >> ${OPTDIR}/syno_package.vars
    fi

    #since CrashPlan version 3.5.3 the value javaMemoryHeapMax also needs setting to match that used in bin/run.conf
    sed -i -r "s%(<javaMemoryHeapMax>)[0-9]+[mM](</javaMemoryHeapMax>)%\1${JAVA_MAX_HEAP}m\2%" "${OPTDIR}/conf/my.service.xml"

    #make sure CrashPlan is not binding to the IPv6 stack
    grep "\-Djava\.net\.preferIPv4Stack=true" "${OPTDIR}/bin/${ENGINE_CFG}" > /dev/null \
     || sed -i -r "s/(^${CFG_PARAM}=\".*)\"$/\1 -Djava.net.preferIPv4Stack=true\"/" "${OPTDIR}/bin/${ENGINE_CFG}"
  else
    echo "Check the package log to ensure the package has started successfully, then stop and restart the package to allow desktop client connections." > "${SYNOPKG_TEMP_LOGFILE}"
  fi

  #increase the system-wide maximum number of open files from Synology default of 24466
  [ `cat /proc/sys/fs/file-max` -lt 65536 ] && echo "65536" > /proc/sys/fs/file-max

  #raise the maximum open file count from the Synology default of 1024 - thanks Casper K. for figuring this out
  #http://support.code42.com/Administrator/3.6_And_4.0/Troubleshooting/Too_Many_Open_Files
  ulimit -n 65536

  #ensure that Code 42 have not amended install.vars to force the use of their own (Intel) JRE
  if [ -e ${OPTDIR}/jre-syno ]; then
    JRE_PATH="`find ${OPTDIR}/jre-syno/ -name jre`"
    [ -z ${JRE_PATH} ] && JRE_PATH=${OPTDIR}/jre-syno
    sed -i -r "s|^(JAVACOMMON=).*$|\1\${JRE_PATH}/bin/java|" ${OPTDIR}/install.vars
    
    #if missing, set timezone and locale for dedicated JRE   
    if [ -z ${TZ} ]; then
      SYNO_TZ=`cat /etc/synoinfo.conf | grep timezone | cut -f2 -d'"'`
      #fix for DST time in DSM 5.2 thanks to MinimServer Syno package author
      [ -e /usr/share/zoneinfo/Timezone/synotztable.json ] \
       && SYNO_TZ=`jq ".${SYNO_TZ} | .nameInTZDB" /usr/share/zoneinfo/Timezone/synotztable.json | sed -e "s/\"//g"` \
       || SYNO_TZ=`grep "^${SYNO_TZ}" /usr/share/zoneinfo/Timezone/tzname | sed -e "s/^.*= //"`
      export TZ=${SYNO_TZ}
    fi
    [ -z ${LANG} ] && export LANG=en_US.utf8
    export CLASSPATH=.:${OPTDIR}/jre-syno/lib

  else
    sed -i -r "s|^(JAVACOMMON=).*$|\1\${JAVA_HOME}/bin/java|" ${OPTDIR}/install.vars
  fi

  source ${OPTDIR}/bin/run.conf
  source ${OPTDIR}/install.vars
  cd ${OPTDIR}
  $JAVACOMMON $SRV_JAVA_OPTS -classpath $FULL_CP com.backup42.service.CPService > ${OPTDIR}/log/engine_output.log 2> ${OPTDIR}/log/engine_error.log &
  if [ $! -gt 0 ]; then
    echo $! > $PID_FILE
    renice 19 $! > /dev/null
    if [ -z "${SYNOPKG_PKGDEST}" ]; then
      #script was manually invoked, need this to show status change in Package Center      
      [ -e ${PKG_FOLDER}/enabled ] || touch ${PKG_FOLDER}/enabled
    fi
  else
    echo "${DNAME} failed to start, check ${OPTDIR}/log/engine_error.log" > "${SYNOPKG_TEMP_LOGFILE}"
    echo "${DNAME} failed to start, check ${OPTDIR}/log/engine_error.log" >&2
    exit 1
  fi
}

stop_daemon ()
{
  echo "I ${TIMESTAMP} Stopping ${DNAME}" >> ${DLOG}
  kill `cat ${PID_FILE}`
  wait_for_status 1 20 || kill -9 `cat ${PID_FILE}`
  rm -f ${PID_FILE}
  if [ -z ${SYNOPKG_PKGDEST} ]; then
    #script was manually invoked, need this to show status change in Package Center
    [ -e ${PKG_FOLDER}/enabled ] && rm ${PKG_FOLDER}/enabled
  fi
  #backup identity file in case DSM upgrade removes it
  [ -e ${OPTDIR}/conf/var-backup ] || mkdir ${OPTDIR}/conf/var-backup 
  cp /var/lib/crashplan/.identity ${OPTDIR}/conf/var-backup/
}

daemon_status ()
{
  if [ -f ${PID_FILE} ] && kill -0 `cat ${PID_FILE}` > /dev/null 2>&1; then
    return
  fi
  rm -f ${PID_FILE}
  return 1
}

wait_for_status ()
{
  counter=$2
  while [ ${counter} -gt 0 ]; do
    daemon_status
    [ $? -eq $1 ] && return
    let counter=counter-1
    sleep 1
  done
  return 1
}


case $1 in
  start)
    if daemon_status; then
      echo ${DNAME} is already running with PID `cat ${PID_FILE}`
      exit 0
    else
      echo Starting ${DNAME} ...
      start_daemon
      exit $?
    fi
  ;;

  stop)
    if daemon_status; then
      echo Stopping ${DNAME} ...
      stop_daemon
      exit $?
    else
      echo ${DNAME} is not running
      exit 0
    fi
  ;;

  restart)
    stop_daemon
    start_daemon
    exit $?
  ;;

  status)
    if daemon_status; then
      echo ${DNAME} is running with PID `cat ${PID_FILE}`
      exit 0
    else
      echo ${DNAME} is not running
      exit 1
    fi
  ;;

  log)
    echo "${DLOG}"
    exit 0
  ;;

  *)
    echo "Usage: $0 {start|stop|restart|status|log}" >&2
    exit 1
  ;;

esac
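As the changelog notes, this init script uses a proper PID file instead of Code42's unreliable method of grepping the output of `ps`. The core of `daemon_status` is a `kill -0` probe, which sends no signal at all but reports whether the PID is alive. A minimal self-contained sketch of that check (the /tmp path and the 30-second sleep are demo stand-ins for the real daemon):

```shell
#!/bin/sh
# Sketch of the PID-file health check used by daemon_status above.
# A background sleep stands in for the CrashPlan engine process.
PID_FILE="/tmp/demo.pid"
sleep 30 &
echo $! > "${PID_FILE}"
#kill -0 delivers no signal; it only tests whether the PID exists
if [ -f "${PID_FILE}" ] && kill -0 "`cat ${PID_FILE}`" 2> /dev/null; then
  echo "running with PID `cat ${PID_FILE}`"
else
  echo "not running"
fi
#clean up the demo process and PID file
kill "`cat ${PID_FILE}`" 2> /dev/null
rm -f "${PID_FILE}"
```

Because the script derives its paths from `$0`, it can also be invoked manually over SSH, typically as `/var/packages/CrashPlan/scripts/start-stop-status.sh status` (substitute the package name for the PRO/PROe variants).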
 

install_uifile & upgrade_uifile

[
  {
    "step_title": "Client Version Selection",
    "items": [
      {
        "type": "singleselect",
        "desc": "Please select the CrashPlanPROe client version that is appropriate for your backup destination server:",
        "subitems": [
          {
            "key": "WIZARD_VER_483",
            "desc": "4.8.3",
            "defaultValue": true
          },
          {
            "key": "WIZARD_VER_480",
            "desc": "4.8.0",
            "defaultValue": false
          },
          {
            "key": "WIZARD_VER_470",
            "desc": "4.7.0",
            "defaultValue": false
          },
          {
            "key": "WIZARD_VER_460",
            "desc": "4.6.0",
            "defaultValue": false
          },
          {
            "key": "WIZARD_VER_452",
            "desc": "4.5.2",
            "defaultValue": false
          },
          {
            "key": "WIZARD_VER_450",
            "desc": "4.5.0",
            "defaultValue": false
          },
          {
            "key": "WIZARD_VER_441",
            "desc": "4.4.1",
            "defaultValue": false
          },
          {
            "key": "WIZARD_VER_430",
            "desc": "4.3.0",
            "defaultValue": false
          },
          {
            "key": "WIZARD_VER_420",
            "desc": "4.2.0",
            "defaultValue": false
          },
          {
            "key": "WIZARD_VER_370",
            "desc": "3.7.0",
            "defaultValue": false
          },
          {
            "key": "WIZARD_VER_364",
            "desc": "3.6.4",
            "defaultValue": false
          },
          {
            "key": "WIZARD_VER_363",
            "desc": "3.6.3",
            "defaultValue": false
          },
          {
            "key": "WIZARD_VER_3614",
            "desc": "3.6.1.4",
            "defaultValue": false
          },
          {
            "key": "WIZARD_VER_353",
            "desc": "3.5.3",
            "defaultValue": false
          },
          {
            "key": "WIZARD_VER_341",
            "desc": "3.4.1",
            "defaultValue": false
          },
          {
            "key": "WIZARD_VER_33",
            "desc": "3.3",
            "defaultValue": false
          }
        ]
      }
    ]
  },
  {
    "step_title": "Java Runtime Environment Selection",
    "items": [
      {
        "type": "singleselect",
        "desc": "Please select the Java version which you would like CrashPlan to use:",
        "subitems": [
          {
            "key": "WIZARD_JRE_SYS",
            "desc": "Default system Java version",
            "defaultValue": false
          },
          {
            "key": "WIZARD_JRE_CP",
            "desc": "Dedicated installation of Java 8",
            "defaultValue": true
          }
        ]
      }
    ]
  }
]
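When this wizard runs at install or upgrade time, DSM makes each subitem `key` available to the package scripts as an environment variable set to "true" or "false" according to the user's selection. A hedged sketch of how installer.sh can branch on those values (the variable names come from the keys above; the surrounding logic is illustrative, not the package's actual installer):

```shell
#!/bin/sh
# Illustrative branch on wizard selections. DSM exposes each wizard "key"
# as an environment variable holding "true" or "false"; the defaults here
# simulate a user accepting the wizard's pre-selected options.
WIZARD_JRE_CP="${WIZARD_JRE_CP:-true}"
WIZARD_VER_483="${WIZARD_VER_483:-true}"
if [ "${WIZARD_JRE_CP}" = "true" ]; then
  echo "installing dedicated Java 8 runtime"
else
  echo "using system Java"
fi
#map the selected version key to a version string for the download step
if [ "${WIZARD_VER_483}" = "true" ]; then
  CP_VERSION="4.8.3"
fi
echo "selected client version: ${CP_VERSION:-unknown}"
```

Only one subitem in each singleselect group is "true", so checking the keys in turn is enough to recover the selection.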
 

Changelog:

  • 0047 30/Oct/17 – Updated dedicated Java version to 8 update 151, added support for additional Intel CPUs in x18 Synology products.
  • 0046 26/Aug/17 – Updated to CrashPlan PRO 4.9, added support for migration from CrashPlan For Home to CrashPlan For Small Business (CrashPlan PRO). Please read the Migration section on this page for instructions.
  • 0045 02/Aug/17 – Updated to CrashPlan 4.8.3, updated dedicated Java version to 8 update 144
  • 0044 21/Jan/17 – Updated dedicated Java version to 8 update 121
  • 0043 07/Jan/17 – Updated dedicated Java version to 8 update 111, added support for Intel Broadwell and Grantley CPUs
  • 0042 03/Oct/16 – Updated to CrashPlan 4.8.0, Java 8 is now required, added optional dedicated Java 8 Runtime instead of the default system one including 64bit Java support on 64 bit Intel CPUs to permit memory allocation larger than 4GB. Support for non-Intel platforms withdrawn owing to Code42’s reliance on proprietary native code library libc42archive.so
  • 0041 20/Jul/16 – Improved auto-upgrade compatibility (hopefully), added option to have CrashPlan use a dedicated Java 7 Runtime instead of the default system one, including 64bit Java support on 64 bit Intel CPUs to permit memory allocation larger than 4GB
  • 0040 25/May/16 – Added cpio to the path in the running context of start-stop-status.sh
  • 0039 25/May/16 – Updated to CrashPlan 4.7.0, at each launch forced the use of the system JRE over the CrashPlan bundled Intel one, added Maven build of JNA 4.1.0 for ARMv7 systems consistent with the version bundled with CrashPlan
  • 0038 27/Apr/16 – Updated to CrashPlan 4.6.0, and improved support for Code 42 pushed updates
  • 0037 21/Jan/16 – Updated to CrashPlan 4.5.2
  • 0036 14/Dec/15 – Updated to CrashPlan 4.5.0, separate firewall definitions for management client and for friends backup, added support for DS716+ and DS216play
  • 0035 06/Nov/15 – Fixed the update to 4.4.1_59, new installs now listen for remote connections after second startup (was broken from 4.4), updated client install documentation with more file locations and added a link to a new Code42 support doc
    EITHER completely remove and reinstall the package (which will require a rescan of the entire backup set) OR alternatively please delete all except for one of the failed upgrade numbered subfolders in /var/packages/CrashPlan/target/upgrade before upgrading. There will be one folder for each time CrashPlan tried and failed to start since Code42 pushed the update
  • 0034 04/Oct/15 – Updated to CrashPlan 4.4.1, bundled newer JNA native libraries to match those from Code42, PLEASE READ UPDATED BLOG POST INSTRUCTIONS FOR CLIENT INSTALL this version introduced yet another requirement for the client
  • 0033 12/Aug/15 – Fixed version 0032 client connection issue for fresh installs
  • 0032 12/Jul/15 – Updated to CrashPlan 4.3, PLEASE READ UPDATED BLOG POST INSTRUCTIONS FOR CLIENT INSTALL this version introduced an extra requirement, changed update repair to use the chrisnelson.ca method, forced CrashPlan to prefer IPv4 over IPv6 bindings, removed some legacy version migration scripting, updated main blog post documentation
  • 0031 20/May/15 – Updated to CrashPlan 4.2, cross compiled a newer cpio binary for some architectures which were segfaulting while unpacking main CrashPlan archive, added port 4242 to the firewall definition (friend backups), package is now signed with repository private key
  • 0030 16/Feb/15 – Fixed show-stopping issue with version 0029 for systems with more than one volume
  • 0029 21/Jan/15 – Updated to CrashPlan version 3.7.0, improved detection of temp folder (prevent use of /var/@tmp), added support for Annapurna Alpine AL514 CPU (armhf) in DS2015xs, added support for Marvell Armada 375 CPU (armhf) in DS215j, abandoned practical efforts to try to support Code42’s upgrade scripts, abandoned inotify support (realtime backup) on PowerPC after many failed attempts with self-built and pre-built jtux and jna libraries, back-merged older libffi support for old PowerPC binaries after it was removed in 0028 re-write
  • 0028 22/Oct/14 – Substantial re-write:
    Updated to CrashPlan version 3.6.4
    DSM 5.0 or newer is now required
    libjnidispatch.so taken from Debian JNA 3.2.7 package with dependency on newer libffi.so.6 (included in DSM 5.0)
    jna-3.2.5.jar emptied of irrelevant CPU architecture libs to reduce size
    Increased default max heap size from 512MB to 1GB on systems with more than 1GB RAM
    Intel CPUs no longer need the awkward glibc version-faking shim to enable inotify support (for real-time backup)
    Switched to using root account – no more adding account permissions for backup, package upgrades will no longer break this
    DSM Firewall application definition added
    Tested with DSM Task Scheduler to allow backups between certain times of day only, saving RAM when not in use
    Daemon init script now uses a proper PID file instead of Code42’s unreliable method of using grep on the output of ps
    Daemon init script can be run from the command line
    Removal of bash binary dependency now Code42’s CrashPlanEngine script is no longer used
    Removal of nice binary dependency, using BusyBox equivalent renice
    Unified ARMv5 and ARMv7 external binary package (armle)
    Added support for Mindspeed Comcerto 2000 CPU (comcerto2k – armhf) in DS414j
    Added support for Intel Atom C2538 (avoton) CPU in DS415+
    Added support to choose which version of CrashPlan PROe client to download, since some servers may still require legacy versions
    Switched to .tar.xz compression for native binaries to reduce web hosting footprint
  • 0027 20/Mar/14 – Fixed open file handle limit for very large backup sets (ulimit fix)
  • 0026 16/Feb/14 – Updated all CrashPlan clients to version 3.6.3, improved handling of Java temp files
  • 0025 30/Jan/14 – glibc version shim no longer used on Intel Synology models running DSM 5.0
  • 0024 30/Jan/14 – Updated to CrashPlan PROe 3.6.1.4 and added support for PowerPC 2010 Synology models running DSM 5.0
  • 0023 30/Jan/14 – Added support for Intel Atom Evansport and Armada XP CPUs in new DSx14 products
  • 0022 10/Jun/13 – Updated all CrashPlan client versions to 3.5.3, compiled native binary dependencies to add support for Armada 370 CPU (DS213j), start-stop-status.sh now updates the new javaMemoryHeapMax value in my.service.xml to the value defined in syno_package.vars
  • 0021 01/Mar/13 – Updated CrashPlan to version 3.5.2
  • 0020 21/Jan/13 – Fixes for DSM 4.2
  • 018 Updated CrashPlan PRO to version 3.4.1
  • 017 Updated CrashPlan and CrashPlan PROe to version 3.4.1, and improved in-app update handling
  • 016 Added support for Freescale QorIQ CPUs in some x13 series Synology models, and installer script now downloads native binaries separately to reduce repo hosting bandwidth, PowerQUICC PowerPC processors in previous Synology generations with older glibc versions are not supported
  • 015 Added support for easy scheduling via cron – see updated Notes section
  • 014 DSM 4.1 user profile permissions fix
  • 013 implemented update handling for future automatic updates from Code 42, and incremented CrashPlanPRO client to release version 3.2.1
  • 012 incremented CrashPlanPROe client to release version 3.3
  • 011 minor fix to allow a wildcard on the cpio archive name inside the main installer package (to fix CP PROe client since Code 42 Software had amended the cpio file version to 3.2.1.2)
  • 010 minor bug fix relating to daemon home directory path
  • 009 rewrote the scripts to be even easier to maintain and unified as much as possible with my imminent CrashPlan PROe server package, fixed a timezone bug (tightened regex matching), moved the script-amending logic from installer.sh to start-stop-status.sh with it now applying to all .sh scripts each startup so perhaps updates from Code42 might work in future, if wget fails to fetch the installer from Code42 the installer will look for the file in the public shared folder
  • 008 merged the 14 package scripts each (7 for ARM, 7 for Intel) for CP, CP PRO, & CP PROe – 42 scripts in total – down to just two! ARM & Intel are now supported by the same package, Intel synos now have working inotify support (Real-Time Backup) thanks to rwojo’s shim to pass the glibc version check, upgrade process now retains login, cache and log data (no more re-scanning), users can specify a persistent larger max heap size for very large backup sets
  • 007 fixed a bug that broke CrashPlan if the Java folder moved (if you changed version)
  • 006 installation now fails without User Home service enabled, fixed Daylight Saving Time support, automated replacing the ARM libffi.so symlink which is destroyed by DSM upgrades, stopped assuming the primary storage volume is /volume1, reset ownership on /var/lib/crashplan and the Friends backup location after installs and upgrades
  • 005 added warning to restart daemon after 1st run, and improved upgrade process again
  • 004 updated to CrashPlan 3.2.1 and improved package upgrade process, forced binding to 0.0.0.0 each startup
  • 003 fixed ownership of /volume1/crashplan folder
  • 002 updated to CrashPlan 3.2
  • 001 30/Jan/12 – initial public release
 
 

6,692 thoughts on “CrashPlan packages for Synology NAS”

  1. Chris

    Hi!

    After updating to DSM 4.1 final I now get an error message when trying to connect to the NAS via SSH.

    channel 3: open failed: administratively prohibited: open failed

    Reply
    1. PeterWR

      Hey, i have exactly the same issue.

      same message when i connect via SSH
      channel 3: open failed: administratively prohibited: open failed

      Reply
    2. KB

      I also have the same issue.

      Whilst here I do want to thank the package author though for the effort, this is much better than the old way of getting this to work.

      Reply
    3. captnmorg

      Also getting the same error here. DS212J. I attempted to upgrade the Java 7 client; that didn’t work, so I downgraded to Java 6. Also uninstalled/reinstalled the package. No firewall/AV client running on the machine I am trying to SSH in from. The CrashPlan package opens, but Java doesn’t take the memory like it used to.

      Reply
      1. captnmorg

        After some playing around tonight I got it working. Not quite sure what specifically got it working but here is what I did. Hopefully this helps someone!

        1. Remove old PC Load Letter repo from package center on NAS
        2. Add new repo to package center – http://packages.pcloadletter.co.uk
        3. Install Java 7 ARM5/Headless/Little Endian (using repository, placing file in public folder)
        4. Remove and reinstall the Crashplan package on the NAS
        5. Run SSH terminal command for port forwarding – ssh -L 4200:localhost:4243 user@10.1.x.x (from Mac OS Client)

        After this I still received the channel 3 error.

        Next, I stopped CrashPlan on the NAS and closed it on the Mac, commented out the servicePort line in the ui.properties file on the Mac connecting to the NAS, uncommented the serviceHost line, then changed the IP from 127.0.0.1 to the IP address of my NAS.

        After doing this and opening Crashplan it hung for a few minutes and then it prompted me to login. After putting my credentials in and then waiting a few mins, I did not see the channel 3 error in the terminal window. Fingers crossed and a little more waiting I was prompted with a Crashplan login screen. After logging in I was prompted with a blank Crashplan screen like a new installation. A few seconds later the adopt old backup link appeared on top. I was able to adopt the old backup of my NAS and it is now scanning but it looks like it will work ok.

        I am not sure if all of these steps need to be followed or not since I had left the SSH tunnel from the Mac to the NAS open the whole time. I don’t want to kill it to see what happens but you may be able to get by without it since the ui.properties file is now connecting to the IP of the NAS instead of forwarding over the open port.

    4. KB

      I think captnmorg cracked this. The key is to edit ui.properties (on a Mac: /Applications/CrashPlan.app/Contents/Resources/Java/conf/ui.properties).

      1) uncomment serviceHost=
      2) comment out #servicePort=4200 (add the #)

      Previously to connect to the headless client you had to SSH with port forwarding. It appears that the new desktop client simply allows you to connect using the engine IP address.

      I had to login using my crashplan credentials, but once I logged in it looks like it has immediately adopted my old backups and is getting on with it.

      Thanks CaptnMorg!

      Reply
  2. John

    I have an issue following an upgrade to DSM 4.1. I updated the Java SE plugin, but it couldn’t find the tgz, and removed the package. CrashPlan did the same thing because it couldn’t find Java. Now that I have re-installed Java SE & the CrashPlan package, I am unable to connect to the backup engine. It’s running, although the resource monitor shows it is not taking up its usual massive chunk of memory. Any ideas?

    Thanks!

    Reply
  3. pitcher

    Somebody who knows what to do for my security-issue?
    Could it be the java-version? I use the latest DSM 4.1 and java 7 embedded.

    Reply
  4. garulfo2

    I can’t add your repo in my Synology repo list. Could you please update the article with a direct link to download the spk ?
    Thanks for your great work.

    Reply
      1. Christian Anton

        Hi!

        I also have issues adding the URL you mentioned in your text above to my DSM4.x driven DS411j. I guess it’s because the URL just refers to this blog post which is not parseable by DSM.

        I could offer you to host this repository for you. I could give you a simple FTP account from which you could manage the repo, or ssh or whatever. If interested, please contact me.

      2. patters (Post author)

        Thanks for the offer, but I already switched the hosting last night. packages.pcloadletter.co.uk should resolve to 82.147.22.211. The redirect is intentional – my PHP code sends you to this page if you don’t specify POST data in the way a real Synology NAS would. Otherwise it would return a blank page which people would interpret as it being broken.

      3. patters (Post author)

        Hi Christian – actually you may be able to do me a favour in a different way :)
        As I’m not a German speaker I’m having trouble signing up to the German Synology Wiki to change my package repo URL as mentioned here:
        http://www.synology-wiki.de/index.php/Paketzentrum_Quellen

        When you try to make a new account it says something about the registration password being in the wiki but I couldn’t find it. Could you update the URL to packages.pcloadletter.co.uk for me by any chance?

        Thanks!

  5. Spiderv6

    Mine updated on the 31st (to 4.1 plus all the apps). Ran OK on the 1st but now on the 2nd Crashplan is not running and clicking on ‘Run’ doesn’t do anything…..

    Reply
    1. Spiderv6

      Solved my issue. Had to re-install the Java package. No idea why. One day it was there, the next not…..anyway a re-install and then a re-start of Crashplan and all is good.

      Reply
      1. patters (Post author)

        After a DSM upgrade you should always remove and re-install Java to fix the OS localization. Otherwise non-ASCII characters won’t be handled correctly.

  6. pitcher

    Update: It seems Java version 6 is working OK vs Java version 7. What is the difference between these installations?

    Reply
  7. KB

    > Somebody who knows what to do for my security-issue?

    I’m using JRE6 and I get the “channel 3: open failed: administratively prohibited: open failed” error when connecting to the headless client via ssh. Is this what you mean?

    Reply
  8. Daniel Fischer

    Someone please help. Does anyone get this error when you try to backup from computer to NAS?

    For some reason I am getting Destination unavailable – backup location is not accessible when I try to backup to my Synology from another computer.

    I notice that it’s bound at 192.168.0.185 on the box. Typically I connect via 192.168.0.187.

    Could that be related? Is it something else? It sees it on the App from both computers so I’m not sure what to do.

    Reply
  9. arachn1d

    Someone please help.

    When I backup from my laptop to NAS or any other computer to NAS I get:

    “Destination unavailable – backup location is not accessible”

    Any thoughts on why? Does this feature not work with this package?

    Reply
    1. patters (Post author)

      I’ve had a chance to test backing up from CrashPlan on a PC to CrashPlan on my NAS and it worked fine. I used a different email address for each, and I simply used the friend code from the NAS and entered that code on the client to connect it.
      Furthermore, by tethering the client PC via my mobile phone I tested that you don’t even need to forward any ports on your home firewall.

      Reply
  10. Marten

    Tip.

    Crashplan Pro’s cli.sh (see the bottom of the following page): http://support.crashplanpro.com/doku.php/reference/proclient/commands

    works like a charm on my Synology DS1812.

    All you need to do is remove the “-XstartOnFirstThread” switch from the script.

    I can now just run crashplancli pause 60

    What’s the current word on upgrading both crashplan and java? Package manager fairly recently started offering to update both but I’m still a bit in the ‘if it works dont touch it’ mode. Is there anything I would actually gain from upgrading (changelog? security? critical bugfixes)

    Reply
    1. Marten

      Actually “crashplancli pause 60” doesn’t work completely – it pauses for 24 hours. I’m pretty sure it’s receiving the right command string. Anyone manage to get it to pause less than 24?

      Reply
  11. Frank

    I really appreciate your effort to make crashplan work on the DS. My problem: I updated the crashplan server via the automatic updater in the package center of my diskstation. Now crashplan seems to have forgotten all files which had been already uploaded and started a new backup…

    Reply
    1. patters (Post author)

      Are you on DSM 4.1? If so, stop the package and delete /var/services/homes/crashplan/.crashplan/.identity
      Restart CrashPlan, then connect with the client. You should be offered the chance to adopt an existing backup.
      Synology have slightly changed the behaviour of the user manager. Now user home directories aren’t deleted when user accounts are destroyed so I’ll need to update literally all my packages (which I haven’t had time to do yet). Please let me know if this fixes your issue.

      Reply
      1. bundyo

        For me – it didn’t – I still had to symlink libffi.so.5 again, since CrashPlan’s startup script didn’t seem to do it. Now it’s working again.

      2. Frank

        Thanks for your quick reply! I’m on DSM 4.1 and deleted .identity. Unfortunately, I cannot switch to my old backup destination although it is shown. My data is only saved to the new (active) destination. Any suggestions?

      3. patters (Post author)

        Load the client on your computer, then double click on the CrashPlan logo in the top right. A command prompt will show at the top. Notice there’s a command called GUID – use that with the GUID you got from looking at your account in CrashPlan Central.

      4. Frank

        This was obviously the solution :-) Unfortunately, I deleted the online backup while tinkering around :-( Nevertheless, thank you for your great support!

  12. adminadventure

    Hi

    Since installing CrashPlan on my DiskStation 412+ with DSM 4.1, the unit never sleeps/power-saves. The disks are spinning all the time.

    Stopping the CrashPlan application on the unit through Package Manager allows the unit to sleep.

    I set CrashPlan to check for new versions every 12 hours and allowed it to finish the first backup, but the unit still never sleeps with CrashPlan running.

    Has anyone else experienced this and have a work around?

    Reply
      1. cbush57

        I just read through the forum, and it seems you’re still having the polling issue when running just CrashPlan. Can you confirm? Is there a fix yet? I really appreciate all your work on this. You’re the man!

  13. kushal

    After upgrading DSM to 4.1 the package apps disappeared. I tried removing the package link and reinstalling it, but got an “invalid location” error. I went through the thread and found another link for the package from patters, but that didn’t work either.
    Any idea why the links are not working?

    Reply
  14. HvB

    I had the same problem. I installed it on my machine 2 days ago with no problem. Yesterday I tried to add the package source on a friend’s machine and it couldn’t connect; same thing this morning. It seems like the server is no longer up. :(

    Reply
    1. patters (Post author)

      My free webhost suspended the pcloadletter.comlu.com account because they said it “overloaded the MySQL server”. Hardly likely given that the database table contains about 7 lines (one for each package).
      It’s possible that the DNS change to packages.pcloadletter.co.uk hasn’t propagated to your computer yet – I only did it this morning. Can you run:
      nslookup packages.pcloadletter.co.uk
      It should have the IP 146.255.37.1. If it displays some other IP, wait a few hours and try again. Can you let me know either way.

      EDIT – it was working for my NAS that already had this repo added to Package Center. But now I have deleted it I can’t re-add it. I’m seeing the same issue…

      EDIT2 – it has magically fixed itself without me changing anything. I guess there may be some automated processes going on in the hosting environment – but it should work now.

      Reply
      1. HvB

        Tried both options; neither works yet. But at least I know a fix is underway, and I’ll just have to wait a couple more hours for the DNS change to reach my servers.

        Thank you for your time in making this package btw, works great!

      2. Ben Staffin

        The hostname is resolving to the correct IP when I check from my Synology box, but the repository still is not working for me.

        My box is in San Francisco, for reference.

      3. patters (Post author)

        I think there’s some kind of intermittent issue with how the hosting company are doing a subdomain redirect in their systems. Normally the repo web service will redirect you to this actual blog page if you don’t supply POST data to the URL in the way that DSM Package Center does. To troubleshoot, I disabled this behaviour and instead got the PHP to return the whole POST data array. Then I observed this during a period when the repo was not working:
        SYNO> wget http://packages.pcloadletter.co.uk
        --18:57:44-- http://packages.pcloadletter.co.uk/
        => `index.html'
        Resolving packages.pcloadletter.co.uk... 146.255.37.1
        Connecting to packages.pcloadletter.co.uk|146.255.37.1|:80... connected.
        HTTP request sent, awaiting response... 302 Moved Temporarily
        Location: /?b11f2340 [following]
        --18:57:44-- http://packages.pcloadletter.co.uk/?b11f2340
        => `index.html?b11f2340'
        Connecting to packages.pcloadletter.co.uk|146.255.37.1|:80... connected.
        HTTP request sent, awaiting response... 302 Moved Temporarily
        Location: / [following]
        --18:57:44-- http://packages.pcloadletter.co.uk/
        => `index.html'
        Connecting to packages.pcloadletter.co.uk|146.255.37.1|:80... connected.
        HTTP request sent, awaiting response... 200 OK
        Length: unspecified [text/html]

        [ ] 10 --.--K/s

        18:57:44 (594.49 KB/s) - `index.html' saved [10]

        SYNO> cat index.html
        Array
        (
        )

        So where on earth did it get a 302 redirect to http://packages.pcloadletter.co.uk/?b11f2340 from?!

        I have decided to alter the hosting environment to use packages.pcloadletter.co.uk as its primary domain, rather than having that as a subdomain – so hopefully this will fix the issue. It may take up to 24 hours though :(

        If it’s not working tomorrow I’ll cut my losses and abandon this hosting company in favour of a more expensive one that I know works (the one that hosted the mirror that ran out of bandwidth).

      4. patters (Post author)

        Ok. Oh dear. Looks like I did in fact get to the bottom of it. It’s GoDaddy.com’s ‘dirty little secret’ – they seem to do these intermittent 302 redirects systematically to hosted websites, which negatively impacts customers’ search rankings and royally screws up code which relies on POST data. When I suggested to tech support that perhaps the POST data was being lost they claimed not to have any idea how this could happen. However the problem seems to be endemic, but I guess most customers never notice:
        http://productforums.google.com/forum/#!msg/webmasters/M3JYnlFZ-TQ/KdAZp-BYUWwJ
        http://blog.onedevteam.com/2012/01/why-to-avoid-godaddy-web-hosting.html

        It’s well reported and most customers that encounter it seem to just give up dealing with support and leave for another hosting provider. Oh well, that’s wasted a little bit of cash but I guess if you want something done properly you’ve got to pay for it.

        I’ll move the hosting to 5quidhost.co.uk which was working nicely before it hit the bandwidth limit of the free plan.

  15. Mark

    Just upgraded DS411+ii to DSM 4.1, no issues with adoption, etc

    Have rebooted a few times, yet I still can’t see packages to install or re-add

    Reply
      1. Mark

        Not had to reinstall anything;
        CrashPlan, Java, ipkg & packages… even the ipkg cron and its file all stayed and worked…

  16. patters (Post author)

    I just heard back from Code42 support regarding the continuous polling of conf/default.service.xml every 5 minutes, and unfortunately they gave me a ‘we don’t support CrashPlan on ARM architecture, nor on headless systems in general like Synology’ response. They offered a few suggestions, like working through these general Syno hibernation tips:
    http://forum.synology.com/enu/viewtopic.php?f=83&t=6852

    If I troubleshoot with syno_hibernate_debug_tool and then check /var/log/messages I see that this is the only filesystem activity:
    Aug 25 16:08:04 kernel: [621246.760000] [/volume1/@appstore/CrashPlan/conf/default.service.xml] opened by pid 26672 [u:(/volume1/@appstore/java6/jre/bi), comm:(java)]
    Aug 25 16:13:04 kernel: [621546.820000] [/volume1/@appstore/CrashPlan/conf/default.service.xml] opened by pid 26672 [u:(/volume1/@appstore/java6/jre/bi), comm:(java)]
    Aug 25 16:18:04 kernel: [621846.880000] [/volume1/@appstore/CrashPlan/conf/default.service.xml] opened by pid 26672 [u:(/volume1/@appstore/java6/jre/bi), comm:(java)]
    Aug 25 16:23:04 kernel: [622147.340000] [/volume1/@appstore/CrashPlan/conf/default.service.xml] opened by pid 26672 [u:(/volume1/@appstore/java6/jre/bi), comm:(java)]

    I don’t really see what we can do. This is probably originating in their code, and they can’t help. Any of you have any ideas? It’s frustrating because it’s pretty much the only thing left to fix.

    Reply
    1. Arjan

      I want to “solve” this problem by running CrashPlan only a few hours a day. To do so I have to schedule a start and stop of the CrashPlan service. I tried starting the service from the CLI through “/volume1/@appstore/CrashPlan/bin/CrashPlanEngine start”, but if I start the service that way, I cannot connect to it through the desktop client. Even when I stop the service again and start it through Package Center in the DSM GUI, I cannot connect from the desktop client; I have to reboot my entire DS211 to be able to connect again. When I connect the desktop client after a CLI service start (and stop) and a reboot, I get prompted for both my CrashPlan account and archive password. Apparently the CLI service start is incorrect and results in a forgotten account and archive password.

      Does anyone know what the correct way is to start and stop the service from the CLI and cron?

      Reply
      1. Arjan

        I found the solution myself: I was starting the service from the CLI as root. Root did not have access to the configuration data of the specific “crashplan” account, so it could not log in to CP.

    2. HvB

      Really frustrating, Patters. Perhaps I’m being naive here, but I would guess people with NASes care more about proper backups than other users, because they are more aware of the dangers of not having them. So I’m a bit puzzled why Code42 don’t bother to support the platform properly and show no interest in helping you (and us…) out in any way.

      Maybe we can ask Synology to take it up with Code42?

      Reply
  17. matt

    What is the default heap size? I tried 1536m and 1024m and both basically kill the service – it can never start or connect…

    Also, I need to back up 2-3 TB. I don’t care if it’s slow or needs to use swap space – how can I get this done on the DS411j?

    Reply
    1. patters (Post author)

      512MB is the default if you were to install manually. Normally you would only need to increase it once the volume of backed up files starts to grow beyond say a terabyte. Realistically though, I doubt you’re going to be able to use it at all with 128MB RAM.

      Reply
      1. Jason Lee

        Is there a way to force it to use 1024+ MB of swap (not just physical RAM)? I don’t care about the speed, but there has to be a way to make this work. And you can’t upgrade the memory on this damn thing – it’s soldered onboard.

        Any ideas?

  18. Christian Anton

    I got CrashPlan working nicely on my DS411j now, there is just one thing not working yet: I try to back up some host TO the NAS but that fails, saying “backup location is not accessible”. On the NAS, I have chosen a directory for storage of incoming backups (which of course exists) and I have port 4242 forwarded from my router to the NAS. Anyone having had this problem?

    Reply
    1. richard

      Is this on your internal network? Or only tried over WAN?
      Otherwise first try it on the internal network.

      Else try to make a new shared folder and give users full access rights and try again on the internal network.

      Reply
      1. Christian Anton

        I only tried over WAN. The source for backup would be my root server. I now tried removing the old directory on the NAS, create a new one via the Web GUI (shared folder) and give the user “crashplan” full access to it. Then I selected this directory in Crashplan Desktop Client and tried to start the backup from my root server to the NAS. The incoming backup connection appears on the NAS’ client but still prints the message: “The backup location is not accessible”.

    2. Flavio

      If you see the incoming connection on the NAS, it is not a router issue. I know you have given the crashplan user full access, but have you checked the folder in the NAS DSM interface? Are the rights really granted? And has CrashPlan already created any folders there?

      Reply
      1. arachn1d

        The path is set by the installation: /volume1/crashplan/backupArchives

        I put read/write for everyone including guest, and I still get the ‘location is not accessible’ error

      2. Flavio

        Please try uninstalling and reinstalling the package. I’m no expert, but that’s what I would try.

      3. Flavio

        Not sure if this helps, although it is not specific to your issue… anyway:

        How It Works – Restore From Another Computer or Local Folder
        If you are experiencing this issue when restoring from your own destination or from a friend destination, you need to know the ID or the GUID of the destination in order to reset the cache.
        1. Find the Destination’s GUID
        Open the source computer’s app.log file in TextEdit or your favorite text editor. (Log file locations)
        Under DESTINATIONS, copy the destination’s GUID. This is the first item on the line, just before the destination’s name.

        2. Issue the backup.replace command
        Open CrashPlan on the source computer
        Double click the house logo in the upper right
        In the window that pops up, enter this text: backup.replace DESTINATION_GUID
        E.g. backup.replace 424274538329724242
        Press enter/return

  19. matt

    So are there any other ideas for making the DS411j use swap space with CrashPlan? I can’t make this work with a 1024m or 1536m heap size – it still doesn’t work.

    Reply
      1. Nogger

        Is there a chance of a version for the DS213+ (Freescale CPU)? I am about to buy a NAS, and your package is the reason why I chose Synology. My preferred DiskStation is the DS213+.

  20. Tom

    I have the crashplan headless client installed, and I can log in to it from my macbook pro just fine. After selecting the files/folders from the synology to backup and telling it to back it up to Crashplan Central, I get “waiting for connection” (Disconnected, retrying in < 1 minute). I've restarted the service, etc, and waited for 12 hours, and still nothing. The synology can certainly ping crashplan.com with no problem. Any suggestions?

    Reply
    1. patters (Post author)

      Is your firewall allowing the CrashPlan traffic out? Have a look at the documentation at CrashPlan.com (linked from my Notes section above) – it will tell you which ports are needed.

      Reply
    2. Marten

      Also, is your firewall portforwarding traffic in?

      as far as I know this shouldn’t be needed, but I found it helps sometimes. For other machines to be able to connect to your crashplan instance they try to connect to “your ip” port 4242.

      Meaning, on a router/firewall you need to map port 4242 to :4242

      Reply
    3. Tom

      I didn’t set up port forwarding, 4242 is open, and the firewall is letting crashplan out. At some point (about 12 hours after the problem started), it started transferring fine… Weird. Maybe the problem was on crashplan’s end?

      Reply
  21. Marten

    (ugh, wordpress is removing some stuff..)

    you need to map

    RouterExternalPort 4242 to forward all traffic to ServerPort 4242

    Note that, as I said, this shouldn’t strictly be needed for just backing up to CrashPlan Central, but I found CP to be a lot more stable and predictable that way. Take it with a grain of salt – I might be wrong – but it’s worth a try, yes?

    Reply
  22. Daniel Fischer

    It’s taking forever for the service to synchronise files. I’m on a DS412+ and I’m only at 500 GB. It seems like CrashPlan/this is pointless if you have anywhere near 1 TB or more. It seems like there’s just not enough memory on these units. Is that right? Are there any alternatives for off-site backup?

    Should I try increasing heap size?

    Patters:

    Any word on the location is not accessible stuff?

    Reply
      1. Mark

        Location not accessible, for me, is usually caused by security changes on the backup folder or a subfolder.

        Try:
        chmod -R 777 backup_location_path

  23. Ram Rangaswamy (@thrawnis)

    FIXED!

    My problem with CrashPlan no longer working on my Synology DS1812 since the upgrade to DSM 4.1 (couldn’t connect properly with CP software on my computer to the headless NAS, CP Engine would not start properly, etc.) has been resolved!!

    I am a Linux noob, but I tried removing and re-installing CP on the NAS in the hope of solving the problems described above. In the process I noticed ipkg was no longer running/installed, so I reinstalled it using the instructions here: http://forum.synology.com/wiki/index.php/Overview_on_modifying_the_Synology_Server,_bootstrap,_ipkg_etc

    The CP client software now connects properly again to the headless CP engine on the NAS. Unfortunately the CP engine does not start automatically after reboots, so I have to start it manually after each restart.

    Reply
    1. Flavio

      Man, if you have done everything stated on the blog, nobody knows what the issue with your installation is – otherwise someone would be helping you.
      I know this situation is bad, but you will need to debug further yourself, or with a local friend who can come to your home and see what’s going on.

      Reply
  24. Erik

    Today I migrated from a DS409 to a DS412+. I uninstalled both Java and CrashPlan and then reinstalled both packages. The headless CrashPlan is now running on the DS412+, but I cannot connect from my desktop client. I’m using the right IP address in the conf file.

    Any clue of what I’m doing wrong?

    Reply
      1. Kieren

        Hi Erik,

        Can you tell us what you did to fix this? I still have the problem and have reloaded ipkg and the crashplan and java packages without luck…

        Thanks!

      2. Erik

        When CrashPlan was running on my DS409 I could simply change the IP address of the serviceHost in the conf/ui.properties of the desktop client (an iMac) to the IP address of my DiskStation (as mentioned by patters in the installation notes of this page). However, when I migrated to my new DS412+ this didn’t work anymore. I now follow the instructions given by CrashPlan to connect through a SSH tunnel. This works fine although I have to create a tunnel each time I want to connect to the headless client.

        I preferred the old situation. Any suggestions how to use that method are welcome…

      3. Serge

        Hello,

        By default the crashplan server engine is listening on ‘localhost:4243’ (you can check with netstat).
        This means that there is no chance to make a connection on it from another host.

        In order to fix that, you can update the my.service.xml configuration file on the headless server and change 127.0.0.1 to 0.0.0.0.
        After doing that, ‘netstat -a’ will display ‘*.4243’ instead of ‘localhost:4243’, and a client running on another host should be able to reach it.

        Regards,
        Serge
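        As a sketch of that change for a manual Linux install (the conf path is an assumption; Synology package users shouldn’t need this, since the package script applies it on startup):

```shell
# Assumed config location for a manual CrashPlan install on Linux.
CONF=/usr/local/crashplan/conf/my.service.xml

# Rebind the listener from loopback to all interfaces, then restart
# the engine so the change takes effect.
sed -i 's/127\.0\.0\.1/0.0.0.0/' "$CONF"
```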

      4. patters (Post author)

        Take a look at the scripts in my blog post: line 125 in start-stop-status.sh will try to make this change to my.service.xml every single time you start CrashPlan, so I’m not sure how some of you are ending up with this problem. I can only suspect that you’re editing this file manually while logged in as root. If you do that, you must remember to set the ownership back to the crashplan daemon user afterwards:
        chown -R crashplan /volume1/@appstore/CrashPlan

      5. Serge

        Hi patters,

        In fact my post was just to clarify what had to be changed to make the server accept connections from other hosts.
        On my side I use CrashPlan on a GoFlex plug computer, so I don’t use your package/script (though it looks great for Synology users).

        I just found this post while googling for CrashPlan :-)

        Best regards,
        Serge

  25. Pingback: Synology DS1511+ and CrashPlan « rebelpeon.com

  26. Michael

    Hi patters, firstly, thanks for putting in so much effort to maintain this excellent package. It’s so handy to not have to crawl through broken glass to get crashplan on the synology!

    My question is about CrashPlan 3.2.1 on DSM 4.1. I had it running perfectly on 4.0-something-or-other; however, I just upgraded to 4.1 and have faced a host of issues with it.

    Firstly, I have tried countless times uninstalling / rebooting / reinstalling both java and crashplan, and removing the crashplan folder (and thus .identity) in homes. I run a DS1812+.

    The problem is that when I first install crashplan it runs perfectly, and can be accessed from another machine without tunnelling. I can log in and adopt np. As soon as I reboot however Crashplan won’t start. On inspection, engine_output.log throws a warning that it cannot delete service.login or service.model.

    In the end, to get it working I need to chmod 777 the CrashPlan install so that those errors don’t appear, then manually start CrashPlan via ./CrashPlanEngine start. Because I can’t access CrashPlan remotely unless it runs as the crashplan user, I then need to ./CrashPlanEngine stop, chown the CrashPlan folder to the crashplan user, and then start it from DSM.

    As such, I have put all of the above commands into /etc/rc, and it now works on reboot, with the slight exception that I manually need to enter the encryption key every time.

    So long and short of it, is it is a bit messy, and clearly there are some permission issues going on here. Can you offer any insights on a better approach to dealing with it?

    Thanks mate.

    Reply
    1. Arjan

      I had similar issues when I tried to start and stop CrashPlan from cron, until I realised that I was starting it as root. If you start CrashPlan as root, it cannot find the login details, which are stored in the home directory of the crashplan user. Try starting it from the CLI or /etc/rc with the following command:

      sudo -u crashplan /volume1/@appstore/CrashPlan/bin/CrashPlanEngine start

      In my case it still did not work then because the JAVA_HOME environment variable was not loaded.

      I solved that by calling a shell script startCrashplan.sh with the following contents:

      . /etc/profile
      /volume1/@appstore/CrashPlan/bin/CrashPlanEngine start

      I called that script from cron as user crashplan and then it worked. From /etc/rc it would be:
      sudo -u crashplan /volume1/@appstore/CrashPlan/bin/startCrashplan.sh

      I am not a big Linux expert so there might be better ways but this worked for me.
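      Putting the pieces above together, a sketch of the wrapper script (the engine path is the package default; run it as the crashplan user):

```shell
#!/bin/sh
# startCrashplan.sh - run as the crashplan user, e.g.
#   sudo -u crashplan /volume1/@appstore/CrashPlan/bin/startCrashplan.sh

# Load JAVA_HOME and the rest of the login environment, which is
# not set when the script is launched from cron or /etc/rc.
. /etc/profile

/volume1/@appstore/CrashPlan/bin/CrashPlanEngine start
```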

      Reply
      1. Paul

        Hi – I have just followed your instructions to get CrashPlan working on my new DS213. Everything works fine, except for one feature; changing the backup window time.

        Via the CrashPlan GUI I used to be able to select the times during which the backup was going to run. This feature is still available, but it doesn’t seem to work.

        So I tried following your instructions above to turn the engine on/off via cron at selected times. It turns the engine off fine and says that it is also turning the engine back on, yet it fails to do so.

        There are no error messages that I can see – any ideas?

      2. Arjan

        @Paul: I’m replying to my own message because I see no reply button under yours (yet):

        I think you must set the backup window setting in the CP GUI to “always” when you use a cron job to run the CP server at certain times.

        Say you run your CP server via cron between 1AM and 4AM: you want to be sure the CP server will actually back up during that time. If you have set a backup window in the GUI that does not overlap with the 1-4 period (for instance 6PM-8PM), the backup will never run. From 1 to 4 the server will be up but will not back up, because it is waiting for the 6-8 window – and by the time that window arrives, CP has already been stopped again by cron.

        BTW: I did not mention that in my post but I also have cron job to stop the CP server again. My cron files looks like this:

        #minute hour mday month wday who command
        0 1 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31 * * crashplan /volume1/@appstore/CrashPlan/bin/startCrashplan.sh
        0 4 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31 * * crashplan /volume1/@appstore/CrashPlan/bin/stopCrashplan.sh

        As you can see I run the CP service from 1-4AM on every second day. That is enough for me.

        I changed the location of the cron file as well since my original post. It is now in /var/spool/cron/crontabs and the name of the file is crashplan. The reason for this change is that the /etc/crontab is reset by Synology after a reboot.
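        As a sketch of that setup (times and script names are the examples from above; the extra `who` column matches the Synology-style crontab format shown in this comment):

```shell
# Write the schedule somewhere DSM won't reset on reboot
# (per the comment above: /var/spool/cron/crontabs, not /etc/crontab).
cat > /var/spool/cron/crontabs/crashplan <<'EOF'
#minute hour mday month wday who command
0 1 * * * crashplan /volume1/@appstore/CrashPlan/bin/startCrashplan.sh
0 4 * * * crashplan /volume1/@appstore/CrashPlan/bin/stopCrashplan.sh
EOF
```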

      3. Paul

        Apologies – not sure how to put the reply button on the page … it’ll probably happen this time as well !!

        Anyway I’ve managed to solve this.

        It seems the reason the engine wasn’t starting again was something to do with the profile not loading correctly. So I tried using “su” instead… this is how I got it to work (you need to ensure you specify the shell):
        # su - crashplan -s /bin/ash -c "/volume1/@appstore/CrashPlan/bin/CrashPlanEngine start"

        I have the GUI set up as you mentioned as always on.

        Then via the cron, I have 2 scripts. 1 which starts the engine at say 1am, and another script that stops it at 8am.

        So my cron looks like this:
        01 00 * * * root /volume1/Family/Synology/Scripts/start_crashplan.sh
        59 07 * * * root /volume1/Family/Synology/Scripts/stop_crashplan.sh

        Thanks for your help !!

      4. HvB

        I’ve tried to get this running, but have been unsuccessful so far. Would it be possible to add all the required files and scripts to the package, so that people who are challenged by Linux only have to edit the cron file to get cron jobs for CrashPlan working?

      5. HvB

        I really think this would be a perfectly workable solution for the 80% of people who currently do not install your package because they want their HDs to sleep when not in use.

        After that the package is perfect to me and I’ll be using it as a second way to create off-site backups for the office too. I’ll make sure we make a donation to you then, just to show some hard earned appreciation.

      6. patters (Post author)

        I hadn’t looked into this until now, but there’s nothing for me to do – cron is already on the system. You just need to put your own entries into /etc/crontab. Ignore what other commenters may have posted – the commands to start and stop CrashPlan should be as follows (as I have already posted in these comments):
        /var/packages/CrashPlan/scripts/start-stop-status start
        /var/packages/CrashPlan/scripts/start-stop-status stop

        EDIT – Ah, hang on… there are some Package Center specific variables that aren’t set when launching like this. I’ll have to write some kind of method of reading them in from a file if they’re null.

      7. patters (Post author)

        New version 015 released with support for starting from cron. See the notes section for info. When started via cron, CrashPlan will correctly show as running in Package Center though it may take 30 seconds or so to poll the status.

    2. Michael

      Thank you both for your comments. After a recent reboot, CP completely stopped working again and could not be started using DSM, the CLI as root, or the CLI as crashplan (using Paul’s suggestions). The error log just said stopping.

      I have since uninstalled CP and Java, reinstalled, used the same process I described before, and it is running again. I suppose the solution is to never reboot! Obviously there is something going on, but I have neither the experience nor the time to figure it out… :(

      Hopefully patters’ updated install will fix the issue whenever it comes out…

      Reply
  27. Svein A Kristiansen (@svar)

    Hi :)
    I found a problem here.
    If I stop and start the CrashPlan PROe package, it forgets its settings.
    I have to log in again and adopt the client again.
    I bet the same thing happens if I restart the Syno.

    On the old version this just worked – is something wrong in the package now, or am I doing something wrong?

    Reply
    1. patters (Post author)

      Are you stopping and starting it from Package Center? If you start doing it on the command line running as root you can run into these kinds of problems. Also DSM 4.1 isn’t deleting the user home folder when users are deleted, so maybe there’s an old settings file stuck there with the wrong permissions. Try stopping, then running this and restarting:
      chown -R crashplanproe /volume1/@appstore/CrashPlanPROe
      chown -R crashplanproe /var/services/homes/crashplanproe

      Reply
      1. Svein A Kristiansen (@svar)

        Yes, I stopped and started it in Package Center.
        I stopped it, ran your lines, and started it again, and now it remembers the settings. I’ll have to test again later, because when it happened I could not manage to connect to the old backup sets, so now it is taking a new backup of everything even though the old backups are on disc and it backs up to the same destination.

        It’s not that scary for me, because this is kind of a test environment, but I’m soon setting this up for a customer at work and then I need to know how it works :)

        But why did it happen? Is it some old config files left over from the earlier version under DSM 4.0?
        I guess no one can give a good answer to this.
        I just hope I don’t have to type the chown lines every time I need a restart or anything.

        Maybe it will work more nicely on a new Syno with 4.1 from the start.

      2. patters (Post author)

        The reason is that DSM 4.1 changed the behaviour of the synouser tool: now, when you delete an account, it doesn’t remove its home directory. So I’ll have to create a new version of each package. I can’t do that for at least a week or so, I’m afraid.

  28. jwon

    I just want to thank you so much for this package! I have a DS412+ and I installed Java 6 using your guide and the crashplan syno package, and everything works great!

    Thank you so much for making this process simple! I hope you plan to keep on supporting this package!

    Reply
  29. Chris

    Something in the latest package (or CrashPlan?) is resetting the serviceHost value in my.service.xml to 127.0.0.1 every time the package starts. I’ve reinstalled a few times with no change.

    I also see some confusion about the difference between ports 4242 and 4243, so it might be nice having that clarified in the docs.

    1. xris

      Reading things here more closely, I’ve noticed other comments from people having similar issues with the CrashPlan desktop client (maybe not PRO/PROe). I successfully got things working with the port-forwarding technique described at http://support.crashplan.com/doku.php/how_to/configure_a_headless_client, plus the chown of /var/services/… mentioned by Svein. But the serviceHost value in the my.service.xml file is definitely getting overwritten whenever the client starts. I also don’t see the start-stop-status script where the Package Wiki suggests it should be, so it appears there’s something else to deal with in DSM 4.1 besides the homedir issue.

      1. ivom74

        Same problem here – anyone have a solution? I copied my.service.xml over default.service.xml. This works until you restart the service, and then CrashPlan loses its configs.

  30. ivom74

    Version: CrashPlan 3.2-013
    NAS: Synology DS212j, firmware 4.1-2635
    Problem: can’t connect to the CrashPlan UI from another PC.
    According to netstat, the problem is that port 4243 is listening on 127.0.0.1 instead of 0.0.0.0.

    I can’t change my.service.xml – after restarting the CrashPlan service it’s overwritten again.

    — Some things I tried (ignore these; they only let CrashPlan work until it restarts): —
    I’m not very good with Linux, but I tried something out: I overwrote default.service.xml with the changed my.service.xml (with 0.0.0.0 in it).
    That works a little. After restarting the service I can connect to CrashPlan, and even remote backups work great.
    I can even restart the remote end without losing my configs. But when I stop and restart the CrashPlan service using the graphical interface on the Synology, my config reverts to the old one. Even copying the config directory back doesn’t work – I get a new CrashPlan ID and it sees the NAS as a new computer.
    —————————————————————————————————————

    I actually have one simple question which would make it work: how can I change the address for port 4243 from 127.0.0.1 to 0.0.0.0?

    1. ivom74

      That’s a fast update – how do I install it?
      I removed the package and installed the new one; current version 3.2-14.
      my.service.xml still says 127.0.0.1 as serviceHost. I changed it to 0.0.0.0 but it still changes back after restarting the CrashPlan service.

    2. Peter W

      Thanks again for all the work you have done !

      I have installed the new version, uninstalled it then reinstalled it.

      I have just changed ui.properties to servicePort=4200 and tried to connect from my Mac to the Synology:

      ssh -L 4200:localhost:4243 admin@192.168.1.100

      However, I keep getting this error message in the terminal console:

      DiskStation> channel 3: open failed: administratively prohibited: open failed
      channel 3: open failed: administratively prohibited: open failed
      channel 3: open failed: administratively prohibited: open failed
      channel 3: open failed: administratively prohibited: open failed
      channel 3: open failed: administratively prohibited: open failed

      and it finally fails to connect.

      Would you know of any modification I could make to get it working?

      Thanks again for the packages !

      1. patters (Post author)

        Don’t use that method, it’s unnecessarily cumbersome and only really intended if you’re managing from a remote location (i.e. you’re not at home).
        Undo whatever changes you made, then just edit ui.properties on the client (your Mac) to set serviceHost to your NAS IP. Remember that you have to stop and restart the CrashPlan package on the NAS at least once following a clean install, otherwise you won’t be able to connect.
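That client-side edit can be sketched as follows – the ui.properties location varies by OS and install, and 192.168.1.210 is just an example NAS address, so treat both as assumptions (a stand-in copy of the file is edited here for illustration):

```shell
# Stand-in copy of ui.properties; the real file lives in the CrashPlan
# client's conf directory on your Mac/PC (location varies by OS).
cat > ui.properties <<'EOF'
#serviceHost=127.0.0.1
#servicePort=4243
EOF
# Uncomment serviceHost and point it at the NAS LAN IP (example address):
sed -i 's/^#serviceHost=.*/serviceHost=192.168.1.210/' ui.properties
```

After saving the change, restart the desktop client so it connects to the NAS rather than to localhost.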

  31. HvB

    Hi Patters, thanks for your hard work. Any word on the continuous polling issue yet, apart from Code42 not willing to offer you any help?

    1. patters (Post author)

      One of their support guys did look into it a bit further but couldn’t figure out a reason. I was able to test that the continuous polling of default.service.xml doesn’t occur on Intel – it’s only on ARM. And the problem on ARM is that we’re using all sorts of unsupported libraries to get it working.

    2. HvB

      Hi Patters, after the 2636 firmware wouldn’t let the disks in my new 1812+ go to sleep anyway, I now have the 2647 which does allow for sleeping drives again. You say the Crashplan package should allow an Intel-based syno to go to sleep, but I’m afraid to say it doesn’t on mine. I’ve tried to reproduce it several times and it’s always when CP is switched on that the disks won’t spin down. Is there something else for me to adjust or do so that it will work?

      1. patters (Post author)

        Maybe it’s stopping the hibernation for some other reason. You can enable the logging of all filesystem activity to /var/log/messages using this:
        syno_hibernate_debug_tool --enable 1

        Leave it for a while then:
        cat /var/log/messages

        To disable the debugging:
        syno_hibernate_debug_tool --disable

        I had briefly tried this on Intel and I don’t remember seeing anything unusual (unlike on ARM).

  32. ivom74

    It finally works – don’t ask how or why.
    Step 1:
    changed serviceHost in my.service.xml to 0.0.0.0
    Step 2:
    copied my.service.xml to default.service.xml
    Step 3:

    chown -R crashplan /volume1/@appstore/CrashPlan
    chown -R crashplan /var/services/homes/crashplan

    And lastly, of course, point CrashPlan on Windows at the IP address of the Synology.
    I can now restart the Synology NAS and keep the configuration…

    1. patters (Post author)

      Every time you edit these files you potentially make things worse, because you’re not editing them as the daemon user, so they end up with the wrong permissions. Uninstall again (so that this time the updated uninstall process will run), then re-install, and it should be OK.

      1. ivom74

        I did this, but then the service.xml contains 127.0.0.1 instead of 0.0.0.0.
        I first uninstalled the software, even rebooted the NAS, and then installed again.

        I don’t have to edit these files anymore and can reboot the NAS server with no problems. I think it will give some trouble with a CrashPlan update, though.

  33. Sven

    I’m getting “out of memory” problems with CrashPlanPROe 3.3-014 and 015.
    The message in the history.log.0 of the CrashPlanEngine shows:
    oomStack=java.lang.OutOfMemoryError: Java heap space
    or
    OutOfMemoryError occurred…RESTARTING! message=OutOfMemoryError in BackupQueue!

    It is a 1512+ (4.0-2198) with 1GB of memory and about 2.1TB of files to backup (99% already done).
    I changed the “run.conf” in /volume1/@appstore/CrashPlanPROe/bin to 768MB.

    Now it seems to be running. I think adding cheap RAM to the Syno should be done (if applicable), CrashPlan needs it.

    1. patters (Post author)

      Absolutely – this is why I recommend buying an Intel syno if you have more than 100GB of data to back up. The ARM ones are stuck with what little RAM is soldered on the system board. That you got this error with package versions 014 and 015 is incidental. As I mentioned in the notes, 2TB is approximately the point at which you need to grow the heap. 3GB is the max RAM for most of the Intel models – the additional 2GB DIMM will only cost around 15 USD or so:
      http://forum.synology.com/wiki/index.php/User_Reported_Compatible_RAM_modules

      1. Sven

        Hi Patters,
        on a 1511+ with the same problem (3.3-015), my setting in run.conf is always reset to 512MB whenever the CrashPlan service on the server is restarted through the web GUI.

      2. patters (Post author)

        See the Notes section. You need to edit the heap size in /volume1/@appstore/CrashPlan/syno_package.vars
        My start-stop-status script for the package will attempt to define a sensible heap size automatically each startup up to a max of 512MB, unless you update the value in syno_package.vars
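A sketch of that edit, for anyone unsure where the override goes – the variable name USR_MAX_HEAP below is an assumption from memory, so check the key actually present in your installed syno_package.vars (a stand-in copy is edited here for illustration):

```shell
# Stand-in copy; the real file is /volume1/@appstore/CrashPlan/syno_package.vars.
# USR_MAX_HEAP is an assumed variable name - verify it in your installed file.
cat > syno_package.vars <<'EOF'
USR_MAX_HEAP=512M
EOF
# Raise the Java heap ceiling to 1024 MB, then restart the package:
sed -i 's/^USR_MAX_HEAP=.*/USR_MAX_HEAP=1024M/' syno_package.vars
```

The package start script reads this file on each startup, so the override survives restarts (unlike a direct edit of run.conf).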

      3. Eric

        Patters – I have just bought a DS413j which has 512MB of RAM and an ARM CPU. Are you suggesting that I am unlikely to be able to use Crashplan to back up more than around 100GB? If so that would be disappointing to say the least…

        Does this limit apply to individual back up sets, or the combined total of all back up sets?

        Thanks

        Eric

      4. patters (Post author)

        512MB of physical RAM should allow you to back up between approximately 1 and 2 TB, but I think it all depends on how small the files are. From what I have seen, a few bigger files mean a lower RAM requirement; millions of tiny documents seem to require more.

      5. Eric

        Thanks for that update and for all your hard work in getting this working. I have about 500GB backing up right now. What is the kind of performance impact that I would see if I exceeded your recommended limits? Is it speed of backup or would it impact performance and usability of the NAS? If the former then that might be OK, but not if it is the latter.

        Also, when talking about maximum capacity of files, are you referring to the total stored on the NAS or just the capacity of the files included in the backup? So for instance, would I be OK storing 2TB of movie files on the NAS provided I did not include them in the CrashPlan backups?

        Thanks again.

        Eric

      6. Eric

        Thanks again. If it is only the data to be backed up that needs to be taken into account, does your limit apply to individual backup sets or to the total of all backup sets? E.g. if I back up 3TB of data comprising 3 separate backup sets of 1TB each, would I be OK (as each is less than the 1-2TB you mention and only one set can run at any given time), or is it the 3TB total that is the problem?

        And BTW, my upload speeds are very slow (around 250 kbps), even to a friend’s PC where previously I was getting over 1 Mbps. Could this be linked to the NAS capabilities?

  34. Brett

    Hi All,

    Thanks for the package, it’s awesome. Got a more generic crashplan question though. Why is it so slow to upload? I have 15 Mbps upload, and it’s only using 3 Mbps on average. I’ve set it up to use 90% CPU, but it never seems to use more than 20%?

    Any ideas?

    Thanks

    1. Marcus

      Mine is usually around 2 Mbps. Rarely I’ve seen it use 6 Mbps, but I haven’t seen that in a while. Sometimes it slows down to a crawl, and the only thing which seems to get it up to speed again is to reboot the DiskStation box. I just wish I could keep it at 6 Mbps for a while to get the initial upload done…

    2. patters (Post author)

      I think the constraint is more likely to be at the CrashPlan.com end. They have a lot of customers, after all. I see similar performance to that on a work connection which is 100Mb uncontended.

  35. tU

    newbie question: is it possible to use these CrashPlan syno packages to do a backup from NAS1 -> NAS2, and also the reciprocal, from NAS2 -> NAS1?

    I don’t want to backup content from my PCs -> NAS using Crashplan. I already have that angle covered and am very specific about what content is saved onto my NAS – most of my Windows profiles I am HAPPY to lose ;)

    But if I could use this for doing NAS1 <-> NAS2 over the WAN that would be AWESOME.
    I would donate for sure

    I can’t see how it would be done as the Crashplan on the Syno is headless, so I can’t drive it as being the backup “source”.
    Or have I missed something fundamental about how it works?

    Those curious about why I want to do this, well, I’ve been looking for a good way of using 2x Synology boxes to do customisable sync backups of each other over the internet.
    The official Synology supported methods are seriously lacking when it comes time to do a restore eg you can only restore an ENTIRE SHARE from Network Backup, not single-file which is just a nonsense.
    Yes, I have looked into manual scripts using rsync, rdiff-backup etc, but I’d really like an intuitive interface for a tech-luddite user who wants to restore just one file…

    1. patters (Post author)

      Yes you can do this. Use the backup to a friend feature and you just input the ID code from one machine on the other. Works over WAN too with apparently no need for port forwarding, and it should work reciprocally.

      1. Brett

        You don’t even need to do that. If the CrashPlan installs are logged into the same account and the computers are set to allow inbound backups, they will appear as options under “My Computers”, which seems to work like Friends but without having to share a PIN.

  36. Dirac

    Hi Patters, great work on this package. Appreciate anyone who could help me figure out this issue.

    I’m having an issue where every time the CrashPlan package is upgraded (most recently with v.16 but also with v.15) my external USB folder backup destination gives the error “Destination unavailable — backup location is not accessible”. The CrashPlan Central and Friends destinations work fine. It seems to be a permissions or user issue because I can run “chmod -R 777 /volumeUSB1/(guid)/” and the folder backup starts working again. Is something happening to the crashplan user when the package is updated that would cause it not to have access to that local folder after the update?

    Incidentally, after this happened in v.15 I completely uninstalled and reinstalled the package. I was able to adopt the CrashPlan Central and Friends backups, but I still could not access the USB drive folder.

    Thanks!

    1. patters (Post author)

      It’s down to how the package upgrade works. In fact it removes and re-installs the package (though I preserve certain key files before that happens). The user account is destroyed and recreated, so it gets a new UID – I can’t change that behaviour. So it’s a hazard of upgrading, I’m afraid.

  37. Stanislav

    Hi patters, sorry, I don’t get it. After the update to 4.1 CP does not work anymore. Every time I connect with my client to the headless engine I see the interface for about 1 minute, and it always says “Analyzing: volume…” but does not move forward; the client then disconnects after 1 minute with the message “CP has disconnected from the backup engine”.

    I am now running your newest package, and have de-installed Java 6 and re-installed it. I also completely removed your package from the Synology 1512+, re-installed it and adopted the old backup. But nothing works. I only connect to the CP machine via serviceHost=IP address.

    Do you have any idea what the problem could be?

    Many thanks in advance!

    1. patters (Post author)

      Look at the log. If it keeps stopping and starting every minute then that’s a sign that you need to increase the heap size – see the notes section of the blog post.

      1. Stanislav

        I increased it to my max size 3.072 MB (since I have 3 GB) and… it works!!!

        You…are…my…hero!

      2. patters (Post author)

        No problem, and thanks for the donation! I seem to remember that if you look at engine_output.log or maybe engine_error.log when it does that it mentions an out of memory error, so it’s not a total mystery. You probably don’t need 3GB. Even with 1GB you should comfortably be able to handle backups of around 2-3TB. There was a Windows Server I had installed CrashPlan on at work and I had about 4TB synced with a heap of 1,536MB.

  38. Jeremy

    Is User Home strictly necessary for this package? I am not eager to allow User Home – I have a very specific set of allowed users, and have manually created home folders for them.

    1. patters (Post author)

      The packages do check for this I’m afraid, because by default DSM creates all users with a home folder of /root which they then lack permissions to modify. My daemon users each need specific .profile settings so having them share a home directory is not workable. I’m not entirely sure how the User Homes Service works, but I would imagine it only affects the user creation process once enabled so it may be workable for you to enable it only while you install the package then disable it immediately afterwards. Does the CrashPlan user keep the home directory of /var/services/homes/crashplan?

  39. Nogger

    Hi Patters,

    I just saw your update 016… does it mean that your package should work on the DS213+?

    regards
    Nogger

  40. arondeparon

    I installed CrashPlan PRO on my Syno DS212J, but the file scan for approx. 500GB is extremely slow – it takes several days to complete. Is this normal? I have already set CPU usage to 80%, but the performance is terrible.

    1. patters (Post author)

      That’s a lot of data to be backing up for a system with such a small amount of RAM, so I’m not sure there’s much you can do to improve things.

  41. Dirac

    Having trouble getting a DS710+ to hibernate. I’ve read the hibernation posts here and from what I understand, the “hibernation bug” shouldn’t affect the Intel NASs. I’ve narrowed it down to the CrashPlan package (the NAS hibernates properly with the package stopped and no other changes). Looking at the hibernate debug log shows no activity–in fact in one case there was no log entry at all for over 6 hours (hibernate is set to 10 minutes). I logged out of DSM and turned off all connected PCs to make sure nothing else was interfering.

    Is there any other way I can get an idea what is keeping the NAS awake, and if it is CrashPlan, how it is doing so? From all my troubleshooting it appears to be this package, but I don’t know another way to get any more information than that. I appreciate anyone’s suggestions!

    1. Dirac

      patters–I saw on your Java page that you don’t have an Intel NAS, so if there’s anything I can do or provide to help troubleshoot this issue further (if you like), please let me know.

  42. Adam

    Patters,

    Thank you for all your hard work getting CrashPlan to run on the Synology platform! Your work was one of the reasons I opted for Synology over some of the other NAS platforms available.

    I have a DS213 running DSM 4.1-2647. I downloaded JRE7 for ARM 5 per this document and placed it in the newly created public folder. Then, I installed the Java SE for Embedded 7 package from your repo. Finally, I installed the regular CrashPlan package from your repo. After the CrashPlan package was installed, I stopped the CrashPlan and then started it again via Package Center per the instructions. Java SE is showing Stopped status, but I gather that isn’t a problem. CrashPlan is showing Running status.

    When I use the netstat command via the CLI, I can see that my DS is listening on ports 4242 and 4243. When I do a /volume1/@appstore/CrashPlan/bin/CrashPlanEngine status, it reports back that the CrashPlan Engine is running.

    However, when I try to tunnel to the box via an SSH connection in PuTTY, I get the error that the CrashPlan software can’t connect to the backup engine. I have a Linux box with CrashPlan installed that I also connect to via this same method, and it works fine. I edited the ui.properties file to uncomment the serviceHost line, changed the IP address to that of the NAS, bypassed SSH tunnelling in PuTTY altogether, and I still got the same error.

    I am running the version 3.2.1 of the CrashPlan desktop application, if that matters.

    Any idea as to what I might be doing incorrectly? Getting CrashPlan working on the DS is the last major thing I need to accomplish in terms of getting the DS configured. Thanks in advance for your help!

      1. Adam

        Yep, I did. I stopped and restarted the CrashPlan package right after installation. When I stop the package and then use the CLI to check the status of the CrashPlan Engine, it confirms that it is stopped. When I restart the package via Package Manager and then check the status again via the CLI, it shows that it is running with a new PID. For good measure, I restarted the package one or two more times and rebooted the DS entirely, as well.

        I can telnet to the DS via its LAN IP on port 4242 and I see a line that says:
        com.code42.messaging.security.SecurityProviderReadyMessage

        When I telnet to the DS via its LAN IP on port 4243, it establishes the connection without showing any additional text.

        So, clearly the ports are open and the CrashPlan engine is running. I’m just puzzled as to why I can’t connect via the CrashPlan desktop software, either via SSH tunnelling or by bypassing the tunnel completely and uncommenting the “serviceHost” line of ui.properties with the IP changed from 127.0.0.1 to the LAN IP of the DS. It seems like this should be working.

        Do you think it could be a Java incompatibility problem with the version of DSM I’m running (4.1-2647)? I’m assuming if that were the case, though, the CrashPlan Engine wouldn’t start at all.

        Something I find odd is that if I check the status of the CrashPlan engine via CLI, it says it’s running. However, if I check the Processes tab under Resource Monitor in DSM, there is no mention of “CrashPlanEngine” at all. Is that normal?
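A quick way to repeat the reachability half of these checks without telnet is a small bash probe – this assumes bash’s /dev/tcp support on the machine doing the checking, and the NAS address shown is only an example:

```shell
# Probe a TCP port using bash's built-in /dev/tcp redirection (bash-only).
# Succeeds if the port accepts a connection within 2 seconds.
port_open() {
  timeout 2 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}
# Example usage (substitute your own NAS address):
#   port_open 192.168.1.210 4242 && echo "backup engine port open"
#   port_open 192.168.1.210 4243 && echo "UI service port open"
```

Note this only proves something is listening; it says nothing about whether the engine will accept the client, which is a separate failure mode.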

      2. Adam

        Well, I found my problem! I was SSHing into the DS with the admin username. Tried SSHing in as root instead and then when I launched CrashPlan on the desktop, I was prompted to enter my CrashPlan credentials!
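For reference, “administratively prohibited” is the standard OpenSSH symptom of sshd refusing TCP forwarding for that user, which fits the admin-vs-root difference seen here. If tunnelling as a non-root user is wanted, the usual fix is the AllowTcpForwarding directive in sshd_config – the DSM path /etc/ssh/sshd_config and its default contents are assumptions, so verify on your firmware (a stand-in copy is edited here for illustration):

```shell
# Stand-in copy; on DSM the real file is normally /etc/ssh/sshd_config.
printf 'AllowTcpForwarding no\n' > sshd_config.demo
# Permit SSH tunnels for users (standard OpenSSH directive):
sed -i 's/^AllowTcpForwarding.*/AllowTcpForwarding yes/' sshd_config.demo
```

Restart the SSH service afterwards so sshd picks up the change.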

  43. Kenneth

    Hi. I’m trying to use your package on my DS409slim – so far it works OK – I have installed Java and CrashPlan on my DS.

    But I can’t figure out: will CrashPlan run if no computer is on?

      1. Kenneth

        Thanks for fast reply :o)

        I have done as you describe in your notes:
        #serviceHost=127.0.0.1
        is uncommented (by removing the hash symbol) and set to the IP address of your NAS, e.g.:
        serviceHost=192.168.1.210

        Do i need to do more?

        And now the Synology appears as this computer – but it seems that it only scans the Synology when my computer is on.

        I have been trying to follow this link – http://support.crashplan.com/doku.php/how_to/configure_a_headless_client
        But do I need to use SSH? I’m absolutely new to SSH and telnet.

        At the moment I can’t get CrashPlan to start up properly again on my Windows 7 machine – I will try to reinstall.

        many thanks for your help.

  44. Nogger

    Hi Patters,

    tomorrow I will receive my DS213+ and I will try to install your package, but I am not sure which Java-package from Oracle I have to use: “Power Architecture Linux – Headless – e600 core” or “Power Architecture Linux – Headless – e500v2 core”. Do you know, which one I should use?

    Thanks and regards
    Nogger

  45. Duane

    Hello Patters,
    Great product overall… I have a DS412+ and all was well. After the initial install I noticed an update button in Package Center and updated the CrashPlan package. Eventually the Java package also showed an update button. When I updated the Java package, running the CrashPlan service consumed 62% of my RAM. I am at 100% backed up, so there is not much going on. I removed Java and reinstalled JRE 1.6.0_34-012, but it still eats massive memory. BTW it used to eat only about 20-25%.

  46. plonk

    I have a DS412+ with an Intel Atom D2701 processor, running DSM 4.1-2647. I installed ejre-1_6_0_34-fcs-b04-linux-i586-headless-19_july_2012.gz using the Synology package you are providing via your repo. The package claims to be installed correctly. I installed the CrashPlan PRO package, also via your repo. The service is running per the Synology interface.

    I logged into the NAS via SSH and tried to telnet to localhost on port 4243, but I am not getting the string of characters that the CrashPlan wiki article suggests I should see when trying to connect (http://support.crashplan.com/doku.php/how_to/configure_a_headless_client). I am also not getting that response from my Windows machine with the PuTTY tunnel up and running. The response is the same whether from the remote station or locally from the NAS: all I see is a prompt but no strings.

    When I stop CrashPlan PRO on the NAS, using telnet I also get the same response – that a connection could not be made.

    It appears that the CrashPlan PRO package is not running right.

    Do you have any troubleshooting suggestions? I.e. is there a test to validate that Java is correctly installed and responding? Same for CrashPlan PRO?

    Any assistance would be greatly appreciated.

    1. plonk

      Disregard – I managed to get it working. Instead of using an SSH tunnel as suggested by the Synology and CrashPlan wikis, I followed your suggestion and modified the ui.properties file to talk to the server directly. This worked very nicely and my NAS is now backing up.
