CrashPlan packages for Synology NAS

UPDATE – CrashPlan For Home (green branding) was retired by Code 42 Software on 22/08/2017. See migration notes below to find out how to transfer to CrashPlan for Small Business on Synology at the special discounted rate.

CrashPlan is a popular online backup solution which supports continuous, real-time backup. With it your NAS can become even more resilient, particularly against the threat of ransomware.

There are now only two product versions:

  • Small Business: CrashPlan PRO (blue branding). Unlimited cloud backup subscription, $10 per device per month. Reporting via the Admin Console. No peer-to-peer backups.
  • Enterprise: CrashPlan PROe (black branding). Cloud backup subscription typically billed by storage usage, also available from third parties.

The instructions and notes on this page apply to both versions of the Synology package.

CrashPlanPRO-Windows

CrashPlan is a Java application which can be difficult to install on a NAS. Way back in January 2012 I decided to simplify it into a Synology package, since I had already created several others. It has been through many versions since that time, as the changelog below shows. Although it used to work on Synology products with ARM and PowerPC CPUs, it unfortunately became Intel-only in October 2016 due to Code 42 Software adding a reliance on some proprietary libraries.

Licence compliance is another challenge – Code 42’s EULA prohibits redistribution. I had to make the Synology package use the regular CrashPlan for Linux download (after the end user agrees to the Code 42 EULA). I then had to write my own script to extract this archive and mimic the Code 42 installer behaviour, but without the interactive prompts of the original.

 

Synology Package Installation

  • In Synology DSM’s Package Center, click Settings and add my package repository:
    Add Package Repository
  • The repository automatically pushes its certificate to the NAS; this certificate is used to validate package integrity. Set the Trust Level to Synology Inc. and trusted publishers:
    Trust Level
  • Now browse the Community section in Package Center to install CrashPlan:
    Community-packages
    The repository only displays packages which are compatible with your specific model of NAS. If you don’t see CrashPlan in the list, then either your NAS model or your DSM version is not supported at this time. DSM 5.0 is the minimum supported version for this package, and an Intel CPU is required.
  • Since CrashPlan is a Java application, it needs a Java Runtime Environment (JRE) to function. I recommend selecting the option to have the package install a dedicated Java 8 runtime. For licensing reasons I cannot include Java with this package, so you will need to agree to the licence terms and download it yourself from Oracle’s website. The package expects to find this .tar.gz file in a shared folder called ‘public’. If you try to install the package without it, the error message will indicate precisely which Java file you need for your system type, and will provide a TinyURL link to the appropriate Oracle download page.
  • To install CrashPlan PRO you will first need to log into the Admin Console and download the Linux App from the App Download section and also place this in the ‘public’ shared folder on your NAS.
  • If you have a multi-bay NAS, use the Shared Folder control panel to create the shared folder called public (it must be all lower case). On single bay models this is created by default. Assign it with Read/Write privileges for everyone.
  • If you have trouble getting the Java or CrashPlan PRO app files recognised by this package, try downloading them with Firefox. It seems to be the only web browser that doesn’t try to uncompress the files or rename them without warning. I also suggest that you leave the Java file in the public folder once you have installed the package, so that you won’t need to fetch it again to install future updates to the CrashPlan package.
  • CrashPlan is installed in headless mode – backup engine only. It is configured by a desktop client, but operates independently of it.
  • The first time you start the CrashPlan package you will need to stop it and restart it before you can connect the client. This is because a config file that is only created on first run needs to be edited by one of my scripts. The engine is then configured to listen on all interfaces on the default port 4243.
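What the first-run edit amounts to is rebinding the engine's service host from localhost to all interfaces in its config file. A minimal sketch of the idea, demonstrated on a scratch copy rather than the real conf/my.service.xml (the element layout shown here is simplified; the real file contains much more):

```shell
# Illustrative only: rebind the engine from localhost to all interfaces,
# as the package's first-run script does to the engine config
CONF=$(mktemp)
echo '<serviceUIConfig><serviceHost>127.0.0.1</serviceHost></serviceUIConfig>' > "$CONF"
sed -i 's%<serviceHost>127.0.0.1</serviceHost>%<serviceHost>0.0.0.0</serviceHost>%' "$CONF"
HOSTBIND=$(grep -o '<serviceHost>[^<]*' "$CONF")
echo "$HOSTBIND"   # -> <serviceHost>0.0.0.0
rm "$CONF"
```

After the stop/start cycle the engine listens on all interfaces on port 4243, which is what allows a client on another machine to connect.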
 

CrashPlan Client Installation

  • Once the CrashPlan engine is running on the NAS, you can manage it by installing CrashPlan on another computer and configuring it to connect to the NAS instance of the CrashPlan engine.
  • Make sure that you install the version of the CrashPlan client that matches the version running on the NAS. If the NAS version gets upgraded later, you will need to update your client computer too.
  • The Linux CrashPlan PRO client must be downloaded from the Admin Console and placed in the ‘public’ folder on your NAS in order to successfully install the Synology package.
  • By default the client is configured to connect to the CrashPlan engine running on the local computer. Run this command on your NAS from an SSH session:
    echo `cat /var/lib/crashplan/.ui_info`
    Note those are backticks not quotes. This will give you a port number (4243), followed by an authentication token, followed by the IP binding (0.0.0.0 means the server is listening for connections on all interfaces) e.g.:
    4243,9ac9b642-ba26-4578-b705-124c6efc920b,0.0.0.0
    port,--------------token-----------------,binding

    Copy this token and use it to replace the token in the equivalent config file on the computer that you would like to run the CrashPlan client on – located here:
    C:\ProgramData\CrashPlan\.ui_info (Windows)
    “/Library/Application Support/CrashPlan/.ui_info” (Mac OS X installed for all users)
    “~/Library/Application Support/CrashPlan/.ui_info” (Mac OS X installed for single user)
    /var/lib/crashplan/.ui_info (Linux)
    You will not be able to connect the client unless its token matches the NAS token. On the client you also need to amend the IP address value after the token to match the Synology NAS IP address. Using the example above, and assuming the NAS has the IP 192.168.1.100, your computer’s CrashPlan client config file would be edited to:
    4243,9ac9b642-ba26-4578-b705-124c6efc920b,192.168.1.100
    If it still won’t connect, check that the ServicePort value is set to 4243 in the following files:
    C:\ProgramData\CrashPlan\conf\ui_(username).properties (Windows)
    “/Library/Application Support/CrashPlan/ui.properties” (Mac OS X installed for all users)
    “~/Library/Application Support/CrashPlan/ui.properties” (Mac OS X installed for single user)
    /usr/local/crashplan/conf/ui.properties (Linux)
    /var/lib/crashplan/.ui_info (Synology) – this value can change spontaneously if there’s a port conflict, e.g. if you started two versions of the package concurrently (CrashPlan and CrashPlan PRO)
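To reduce transcription errors, the token edit above can be scripted. A sketch using the example token from this post and an assumed NAS address of 192.168.1.100 – in practice you would fetch the NAS-side file contents over SSH and write the result to your client's .ui_info path from the list above:

```shell
# Build the client-side .ui_info line from the NAS copy (example values)
NAS_UI="4243,9ac9b642-ba26-4578-b705-124c6efc920b,0.0.0.0"   # output of: cat /var/lib/crashplan/.ui_info on the NAS
NAS_IP="192.168.1.100"                                        # your NAS address
PORT=$(echo "$NAS_UI" | cut -d, -f1)    # port stays as-is (4243)
TOKEN=$(echo "$NAS_UI" | cut -d, -f2)   # token must match the NAS exactly
CLIENT_UI="${PORT},${TOKEN},${NAS_IP}"  # binding becomes the NAS IP
echo "$CLIENT_UI"   # -> 4243,9ac9b642-ba26-4578-b705-124c6efc920b,192.168.1.100
```

Overwrite your client's .ui_info with that single line (with the client application closed), then launch the client.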
  • As a result of the nightmarish complexity of recent product changes Code42 has now published a support article with more detail on running headless systems including config file locations on all supported operating systems, and for ‘all users’ versus single user installs etc.
  • You should disable the CrashPlan service on your computer if you intend only to use the client. In Windows, open the Services section in Computer Management and stop the CrashPlan Backup Service. In the service Properties set the Startup Type to Manual. You can also disable the CrashPlan System Tray notification application by removing it from Task Manager > More Details > Start-up Tab (Windows 8/Windows 10) or the All Users Startup Start Menu folder (Windows 7).
    To accomplish the same on Mac OS X, run the following commands one by one:

    sudo launchctl unload /Library/LaunchDaemons/com.crashplan.engine.plist
    sudo mv /Library/LaunchDaemons/com.crashplan.engine.plist /Library/LaunchDaemons/com.crashplan.engine.plist.bak

    The CrashPlan menu bar application can be disabled in System Preferences > Users & Groups > Current User > Login Items

 

Migration from CrashPlan For Home to CrashPlan For Small Business (CrashPlan PRO)

  • Leave the regular green branded CrashPlan 4.8.3 Synology package installed.
  • Go through the online migration using the link in the email notification you received from Code 42 on 22/08/2017. This seems to trigger the CrashPlan client to begin an update to 4.9, which will fail. It will also migrate your account onto a CrashPlan PRO server. The web page is likely to stall on the Migrating step, but no matter. The process is meant to take you to the store, but it seems to be quite flaky. If you see the store page with a $0.00 amount in the basket, it has correctly referred you for the introductory offer. The $9.99 price shown on that screen is apparently a mistake; the correct price of $2.50 appears on a later screen in the process. Enter your credit card details and check out if you can. If not, continue.
  • Log into the CrashPlan PRO Admin Console as per these instructions, and download the CrashPlan PRO 4.9 client for Linux, and the 4.9 client for your remote console computer. Ignore the red message in the bottom left of the Admin Console about registering, and do not sign up for the free trial. Preferably use Firefox for the Linux version download – most of the other web browsers will try to unpack the .tgz archive, which you do not want to happen.
  • Configure the CrashPlan PRO 4.9 client on your computer to connect to your Syno as per the usual instructions on this blog post.
  • Put the downloaded Linux CrashPlan PRO 4.9 client .tgz file in the ‘public’ shared folder on your NAS. The package will no longer download this automatically as it did in previous versions.
  • From the Community section of DSM Package Center, install the CrashPlan PRO 4.9 package concurrently with your existing CrashPlan 4.8.3 Syno package.
  • This will stop the CrashPlan package and automatically import its configuration. Notice that it will also back up your old CrashPlan .identity file and leave it in the ‘public’ shared folder, just in case something goes wrong.
  • Start the CrashPlan PRO Synology package, and connect your CrashPlan PRO console from your computer.
  • You should see your protected folders as usual. At first mine reported something like “insufficient device licences”, but the next time I started up it changed to “subscription expired”.
  • Uninstall the CrashPlan 4.8.3 Synology package; it is no longer required.
  • At this point, if the store referral didn’t work in the second step, you need to sign into the Admin Console. While signed in, navigate to this link, which I was given by Code 42 support. If it works, you should see a store page with some blue font text and a $0.00 basket value. If it didn’t work, you will get bounced to the Consumer Next Steps webpage: “Important Changes to CrashPlan for Home” – the one with the video of the CEO explaining the situation. I had to do this a few times before it worked. Once the store referral link worked and I had confirmed my payment details, my CrashPlan PRO client immediately started working. Enjoy!
 

Notes

  • The package uses the intact CrashPlan installer directly from Code 42 Software, following acceptance of its EULA. This complies with the EULA’s prohibition on redistribution.
  • The engine daemon script checks the amount of system RAM and scales the Java heap size appropriately (up to the default maximum of 512MB). If you are backing up large backup sets, this can be overridden persistently by editing /var/packages/CrashPlan/target/syno_package.vars. If you are considering buying a NAS purely to run CrashPlan and intend to back up more than a few hundred GB, I strongly advise buying one of the models with upgradeable RAM; memory is very limited on the cheaper models. Many years ago I found that a 512MB heap was insufficient to back up more than 2TB of files on a Windows server – it kept restarting the backup engine every few minutes until I increased the heap to 1024MB. Many users of the package have found that they have to increase the heap size or CrashPlan will halt its activity. This can be mitigated by dividing your backup into several smaller backup sets which are scheduled to be protected at different times. Note that from package version 0041, using the dedicated JRE on a 64bit Intel NAS will allow a heap size greater than 4GB, since the JRE is 64bit (requires DSM 6.0 in most cases).
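The heap override is a one-line edit. A sketch of the change, demonstrated on a scratch file so nothing is touched by accident – on the NAS you would apply the same sed to /var/packages/CrashPlan/target/syno_package.vars and then restart the package (the 2048M figure is just an example value):

```shell
# Uncomment/replace the USR_MAX_HEAP line to raise the max heap to 2 GB
VARS=$(mktemp)
printf '#USR_MAX_HEAP=1024M\n' > "$VARS"    # commented-out line as created by the installer
sed -i 's/^#\{0,1\}USR_MAX_HEAP=.*/USR_MAX_HEAP=2048M/' "$VARS"
HEAP_LINE=$(grep USR_MAX_HEAP "$VARS")
echo "$HEAP_LINE"   # -> USR_MAX_HEAP=2048M
rm "$VARS"
```

Because syno_package.vars is listed in the package's UPGRADE_FILES, the setting survives package upgrades.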
  • If you need to manage CrashPlan from a remote location, I suggest you do so using SSH tunnelling as per this support document.
  • The package supports upgrading to future versions while preserving the machine identity, logs, login details, and cache. Upgrades can now take place without requiring a login from the client afterwards.
  • If you remove the package completely and re-install it later, you can re-attach to previous backups. When you log in to the Desktop Client with your existing account after a re-install, you can select “adopt computer” to merge the records, and preserve your existing backups. I haven’t tested whether this also re-attaches links to friends’ CrashPlan computers and backup sets, though the latter does seem possible in the Friends section of the GUI. It’s probably a good idea to test that this survives a package reinstall before you start relying on it. Sometimes, particularly with CrashPlan PRO I think, the adopt option is not offered. In this case you can log into CrashPlan Central and retrieve your computer’s GUID. On the CrashPlan client, double-click on the logo in the top right and you’ll enter a command line mode. You can use the GUID command to change the system’s GUID to the one you just retrieved from your account.
  • The log which is displayed in the package’s Log tab is actually the activity history. If you are trying to troubleshoot an issue you will need to use an SSH session to inspect these log files:
    /var/packages/CrashPlan/target/log/engine_output.log
    /var/packages/CrashPlan/target/log/engine_error.log
    /var/packages/CrashPlan/target/log/app.log
  • When CrashPlan downloads and attempts to run an automatic update, the update script will most likely fail and stop the package. This is typically caused by syntax differences with the Synology versions of certain Linux shell commands (like rm, mv, or ps). The startup script will attempt to apply the published upgrade the next time the package is started.
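As an illustration of what those compatibility fixes look like, here is one of the substitutions the startup script applies, demonstrated on a scratch file (the real script rewrites Code 42’s shell scripts in place under the package folder):

```shell
# Synology's busybox rm lacks the -v flag, so GNU-style 'rm -fv' in
# Code 42 scripts must be rewritten to plain 'rm -f'
F=$(mktemp)
echo 'rm -fv /tmp/upgrade.jar' > "$F"
sed -i 's/rm -fv/rm -f/' "$F"
FIXED=$(cat "$F")
echo "$FIXED"   # -> rm -f /tmp/upgrade.jar
rm "$F"
```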
  • Although CrashPlan’s activity can be scheduled within the application, in order to save RAM some users may wish to restrict running the CrashPlan engine to specific times of day using the Task Scheduler in DSM Control Panel:
    Schedule service start
    Note that regardless of real-time backup, by default CrashPlan will scan the whole backup selection for changes at 3:00am. Include this time within your Task Scheduler time window or else CrashPlan will not capture file changes which occurred while it was inactive:
    Schedule Service Start

  • If you decide to sign up for one of CrashPlan’s paid backup services as a result of my work on this, please consider donating using the PayPal button on the right of this page.
 

Package scripts

For information, here are the package scripts so you can see what the package is going to do. You can get more information about how packages work by reading the Synology 3rd Party Developer Guide.
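For orientation, an installed DSM package lives under /var/packages. The layout below is indicative, reconstructed from the paths the scripts use (INFO, scripts/start-stop-status, target a.k.a. OPTDIR); treat the exact contents as approximate:

```
/var/packages/CrashPlan/
├── INFO                  # package metadata (version etc.)
├── scripts/              # lifecycle hooks, including start-stop-status
└── target/               # OPTDIR – the installed application
    ├── bin/              # helper binaries (cpio, CrashPlanEngine, run.conf)
    ├── conf/             # my.service.xml, service.login, service.model
    ├── lib/              # CrashPlan jars
    └── log/              # history.log.0 and engine logs
```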

installer.sh

#!/bin/sh

#--------CRASHPLAN installer script
#--------package maintained at pcloadletter.co.uk


DOWNLOAD_PATH="http://download2.code42.com/installs/linux/install/${SYNOPKG_PKGNAME}"
CP_EXTRACTED_FOLDER="crashplan-install"
OLD_JNA_NEEDED="false"
[ "${SYNOPKG_PKGNAME}" == "CrashPlan" ] && DOWNLOAD_FILE="CrashPlan_4.8.3_Linux.tgz"
[ "${SYNOPKG_PKGNAME}" == "CrashPlanPRO" ] && DOWNLOAD_FILE="CrashPlanPRO_4.*_Linux.tgz"
if [ "${SYNOPKG_PKGNAME}" == "CrashPlanPROe" ]; then
  CP_EXTRACTED_FOLDER="${SYNOPKG_PKGNAME}-install"
  OLD_JNA_NEEDED="true"
  [ "${WIZARD_VER_483}" == "true" ] && { CPPROE_VER="4.8.3"; CP_EXTRACTED_FOLDER="crashplan-install"; OLD_JNA_NEEDED="false"; }
  [ "${WIZARD_VER_480}" == "true" ] && { CPPROE_VER="4.8.0"; CP_EXTRACTED_FOLDER="crashplan-install"; OLD_JNA_NEEDED="false"; }
  [ "${WIZARD_VER_470}" == "true" ] && { CPPROE_VER="4.7.0"; CP_EXTRACTED_FOLDER="crashplan-install"; OLD_JNA_NEEDED="false"; }
  [ "${WIZARD_VER_460}" == "true" ] && { CPPROE_VER="4.6.0"; CP_EXTRACTED_FOLDER="crashplan-install"; OLD_JNA_NEEDED="false"; }
  [ "${WIZARD_VER_452}" == "true" ] && { CPPROE_VER="4.5.2"; CP_EXTRACTED_FOLDER="crashplan-install"; OLD_JNA_NEEDED="false"; }
  [ "${WIZARD_VER_450}" == "true" ] && { CPPROE_VER="4.5.0"; CP_EXTRACTED_FOLDER="crashplan-install"; OLD_JNA_NEEDED="false"; }
  [ "${WIZARD_VER_441}" == "true" ] && { CPPROE_VER="4.4.1"; CP_EXTRACTED_FOLDER="crashplan-install"; OLD_JNA_NEEDED="false"; }
  [ "${WIZARD_VER_430}" == "true" ] && CPPROE_VER="4.3.0"
  [ "${WIZARD_VER_420}" == "true" ] && CPPROE_VER="4.2.0"
  [ "${WIZARD_VER_370}" == "true" ] && CPPROE_VER="3.7.0"
  [ "${WIZARD_VER_364}" == "true" ] && CPPROE_VER="3.6.4"
  [ "${WIZARD_VER_363}" == "true" ] && CPPROE_VER="3.6.3"
  [ "${WIZARD_VER_3614}" == "true" ] && CPPROE_VER="3.6.1.4"
  [ "${WIZARD_VER_353}" == "true" ] && CPPROE_VER="3.5.3"
  [ "${WIZARD_VER_341}" == "true" ] && CPPROE_VER="3.4.1"
  [ "${WIZARD_VER_33}" == "true" ] && CPPROE_VER="3.3"
  DOWNLOAD_FILE="CrashPlanPROe_${CPPROE_VER}_Linux.tgz"
fi
DOWNLOAD_URL="${DOWNLOAD_PATH}/${DOWNLOAD_FILE}"
CPI_FILE="${SYNOPKG_PKGNAME}_*.cpi"
OPTDIR="${SYNOPKG_PKGDEST}"
VARS_FILE="${OPTDIR}/install.vars"
SYNO_CPU_ARCH="`uname -m`"
[ "${SYNO_CPU_ARCH}" == "x86_64" ] && SYNO_CPU_ARCH="i686"
[ "${SYNO_CPU_ARCH}" == "armv5tel" ] && SYNO_CPU_ARCH="armel"
[ "${SYNOPKG_DSM_ARCH}" == "armada375" ] && SYNO_CPU_ARCH="armv7l"
[ "${SYNOPKG_DSM_ARCH}" == "armada38x" ] && SYNO_CPU_ARCH="armhf"
[ "${SYNOPKG_DSM_ARCH}" == "comcerto2k" ] && SYNO_CPU_ARCH="armhf"
[ "${SYNOPKG_DSM_ARCH}" == "alpine" ] && SYNO_CPU_ARCH="armhf"
[ "${SYNOPKG_DSM_ARCH}" == "alpine4k" ] && SYNO_CPU_ARCH="armhf"
[ "${SYNOPKG_DSM_ARCH}" == "monaco" ] && SYNO_CPU_ARCH="armhf"
[ "${SYNOPKG_DSM_ARCH}" == "rtd1296" ] && SYNO_CPU_ARCH="armhf"
NATIVE_BINS_URL="http://packages.pcloadletter.co.uk/downloads/crashplan-native-${SYNO_CPU_ARCH}.tar.xz"   
NATIVE_BINS_FILE="`echo ${NATIVE_BINS_URL} | sed -r "s%^.*/(.*)%\1%"`"
OLD_JNA_URL="http://packages.pcloadletter.co.uk/downloads/crashplan-native-old-${SYNO_CPU_ARCH}.tar.xz"   
OLD_JNA_FILE="`echo ${OLD_JNA_URL} | sed -r "s%^.*/(.*)%\1%"`"
INSTALL_FILES="${DOWNLOAD_URL} ${NATIVE_BINS_URL}"
[ "${OLD_JNA_NEEDED}" == "true" ] && INSTALL_FILES="${INSTALL_FILES} ${OLD_JNA_URL}"
TEMP_FOLDER="`find / -maxdepth 2 -path '/volume?/@tmp' | head -n 1`"
#the Manifest folder is where friends' backup data is stored
#we set it outside the app folder so it persists after a package uninstall
MANIFEST_FOLDER="/`echo $TEMP_FOLDER | cut -f2 -d'/'`/crashplan"
LOG_FILE="${SYNOPKG_PKGDEST}/log/history.log.0"
UPGRADE_FILES="syno_package.vars conf/my.service.xml conf/service.login conf/service.model"
UPGRADE_FOLDERS="log cache"
PUBLIC_FOLDER="`synoshare --get public | sed -r "/Path/!d;s/^.*\[(.*)\].*$/\1/"`"
#dedicated JRE section
if [ "${WIZARD_JRE_CP}" == "true" ]; then
  DOWNLOAD_URL="http://tinyurl.com/javaembed"
  EXTRACTED_FOLDER="ejdk1.8.0_151"
  #detect systems capable of running 64bit JRE which can address more than 4GB of RAM
  [ "${SYNOPKG_DSM_ARCH}" == "x64" ] && SYNO_CPU_ARCH="x64"
  [ "`uname -m`" == "x86_64" ] && [ ${SYNOPKG_DSM_VERSION_MAJOR} -ge 6 ] && SYNO_CPU_ARCH="x64"
  if [ "${SYNO_CPU_ARCH}" == "armel" ]; then
    JAVA_BINARY="ejdk-8u151-linux-arm-sflt.tar.gz"
    JAVA_BUILD="ARMv5/ARMv6/ARMv7 Linux - SoftFP ABI, Little Endian 2"
  elif [ "${SYNO_CPU_ARCH}" == "armv7l" ]; then
    JAVA_BINARY="ejdk-8u151-linux-arm-sflt.tar.gz"
    JAVA_BUILD="ARMv5/ARMv6/ARMv7 Linux - SoftFP ABI, Little Endian 2"
  elif [ "${SYNO_CPU_ARCH}" == "armhf" ]; then
    JAVA_BINARY="ejdk-8u151-linux-armv6-vfp-hflt.tar.gz"
    JAVA_BUILD="ARMv6/ARMv7 Linux - VFP, HardFP ABI, Little Endian 1"
  elif [ "${SYNO_CPU_ARCH}" == "ppc" ]; then
    #Oracle have discontinued Java 8 for PowerPC after update 6
    JAVA_BINARY="ejdk-8u6-fcs-b23-linux-ppc-e500v2-12_jun_2014.tar.gz"
    JAVA_BUILD="Power Architecture Linux - Headless - e500v2 with double-precision SPE Floating Point Unit"
    EXTRACTED_FOLDER="ejdk1.8.0_06"
    DOWNLOAD_URL="http://tinyurl.com/java8ppc"
  elif [ "${SYNO_CPU_ARCH}" == "i686" ]; then
    JAVA_BINARY="ejdk-8u151-linux-i586.tar.gz"
    JAVA_BUILD="x86 Linux Small Footprint - Headless"
  elif [ "${SYNO_CPU_ARCH}" == "x64" ]; then
    JAVA_BINARY="jre-8u151-linux-x64.tar.gz"
    JAVA_BUILD="Linux x64"
    EXTRACTED_FOLDER="jre1.8.0_151"
    DOWNLOAD_URL="http://tinyurl.com/java8x64"
  fi
fi
JAVA_BINARY=`echo ${JAVA_BINARY} | cut -f1 -d'.'`
source /etc/profile


pre_checks ()
{
  #These checks are called from preinst and from preupgrade functions to prevent failures resulting in a partially upgraded package
  if [ "${WIZARD_JRE_CP}" == "true" ]; then
    synoshare -get public > /dev/null || (
      echo "A shared folder called 'public' could not be found - note this name is case-sensitive. " >> $SYNOPKG_TEMP_LOGFILE
      echo "Please create this using the Shared Folder DSM Control Panel and try again." >> $SYNOPKG_TEMP_LOGFILE
      exit 1
    )

    JAVA_BINARY_FOUND=
    [ -f ${PUBLIC_FOLDER}/${JAVA_BINARY}.tar.gz ] && JAVA_BINARY_FOUND=true
    [ -f ${PUBLIC_FOLDER}/${JAVA_BINARY}.tar ] && JAVA_BINARY_FOUND=true
    [ -f ${PUBLIC_FOLDER}/${JAVA_BINARY}.tar.tar ] && JAVA_BINARY_FOUND=true
    [ -f ${PUBLIC_FOLDER}/${JAVA_BINARY}.gz ] && JAVA_BINARY_FOUND=true
     
    if [ -z ${JAVA_BINARY_FOUND} ]; then
      echo "Java binary bundle not found. " >> $SYNOPKG_TEMP_LOGFILE
      echo "I was expecting the file ${PUBLIC_FOLDER}/${JAVA_BINARY}.tar.gz. " >> $SYNOPKG_TEMP_LOGFILE
      echo "Please agree to the Oracle licence at ${DOWNLOAD_URL}, then download the '${JAVA_BUILD}' package" >> $SYNOPKG_TEMP_LOGFILE
      echo "and place it in the 'public' shared folder on your NAS. This download cannot be automated even if " >> $SYNOPKG_TEMP_LOGFILE
      echo "displaying a package EULA could potentially cover the legal aspect, because files hosted on Oracle's " >> $SYNOPKG_TEMP_LOGFILE
      echo "server are protected by a session cookie requiring a JavaScript enabled browser." >> $SYNOPKG_TEMP_LOGFILE
      exit 1
    fi
  else
    if [ -z ${JAVA_HOME} ]; then
      echo "Java is not installed or not properly configured. JAVA_HOME is not defined. " >> $SYNOPKG_TEMP_LOGFILE
      echo "Download and install the Java Synology package from http://wp.me/pVshC-z5" >> $SYNOPKG_TEMP_LOGFILE
      exit 1
    fi

    if [ ! -f ${JAVA_HOME}/bin/java ]; then
      echo "Java is not installed or not properly configured. The Java binary could not be located. " >> $SYNOPKG_TEMP_LOGFILE
      echo "Download and install the Java Synology package from http://wp.me/pVshC-z5" >> $SYNOPKG_TEMP_LOGFILE
      exit 1
    fi

    if [ "${WIZARD_JRE_SYS}" == "true" ]; then
      JAVA_VER=`java -version 2>&1 | sed -r "/^.* version/!d;s/^.* version \"[0-9]\.([0-9]).*$/\1/"`
      if [ ${JAVA_VER} -lt 8 ]; then
        echo "This version of CrashPlan requires Java 8 or newer. Please update your Java package. "
        exit 1
      fi
    fi
  fi
}


preinst ()
{
  pre_checks
  cd ${TEMP_FOLDER}
  for WGET_URL in ${INSTALL_FILES}
  do
    WGET_FILENAME="`echo ${WGET_URL} | sed -r "s%^.*/(.*)%\1%"`"
    [ -f ${TEMP_FOLDER}/${WGET_FILENAME} ] && rm ${TEMP_FOLDER}/${WGET_FILENAME}
    wget ${WGET_URL}
    if [[ $? != 0 ]]; then
      if [ -d ${PUBLIC_FOLDER} ] && [ -f ${PUBLIC_FOLDER}/${WGET_FILENAME} ]; then
        cp ${PUBLIC_FOLDER}/${WGET_FILENAME} ${TEMP_FOLDER}
      else     
        echo "There was a problem downloading ${WGET_FILENAME} from the official download link, " >> $SYNOPKG_TEMP_LOGFILE
        echo "which was \"${WGET_URL}\" " >> $SYNOPKG_TEMP_LOGFILE
        echo "Alternatively, you may download this file manually and place it in the 'public' shared folder. " >> $SYNOPKG_TEMP_LOGFILE
        exit 1
      fi
    fi
  done
 
  exit 0
}


postinst ()
{
  if [ "${WIZARD_JRE_CP}" == "true" ]; then
    #extract Java (Web browsers love to interfere with .tar.gz files)
    cd ${PUBLIC_FOLDER}
    if [ -f ${JAVA_BINARY}.tar.gz ]; then
      #Firefox seems to be the only browser that leaves it alone
      tar xzf ${JAVA_BINARY}.tar.gz
    elif [ -f ${JAVA_BINARY}.gz ]; then
      #Chrome
      tar xzf ${JAVA_BINARY}.gz
    elif [ -f ${JAVA_BINARY}.tar ]; then
      #Safari
      tar xf ${JAVA_BINARY}.tar
    elif [ -f ${JAVA_BINARY}.tar.tar ]; then
      #Internet Explorer
      tar xzf ${JAVA_BINARY}.tar.tar
    fi
    mv ${EXTRACTED_FOLDER} ${SYNOPKG_PKGDEST}/jre-syno
    JRE_PATH="`find ${OPTDIR}/jre-syno/ -name jre`"
    [ -z ${JRE_PATH} ] && JRE_PATH=${OPTDIR}/jre-syno
    #change owner of folder tree
    chown -R root:root ${SYNOPKG_PKGDEST}
  fi
   
  #extract CPU-specific additional binaries
  mkdir ${SYNOPKG_PKGDEST}/bin
  cd ${SYNOPKG_PKGDEST}/bin
  tar xJf ${TEMP_FOLDER}/${NATIVE_BINS_FILE} && rm ${TEMP_FOLDER}/${NATIVE_BINS_FILE}
  [ "${OLD_JNA_NEEDED}" == "true" ] && tar xJf ${TEMP_FOLDER}/${OLD_JNA_FILE} && rm ${TEMP_FOLDER}/${OLD_JNA_FILE}

  #extract main archive
  cd ${TEMP_FOLDER}
  tar xzf ${TEMP_FOLDER}/${DOWNLOAD_FILE} && rm ${TEMP_FOLDER}/${DOWNLOAD_FILE} 
  
  #extract cpio archive
  cd ${SYNOPKG_PKGDEST}
  cat "${TEMP_FOLDER}/${CP_EXTRACTED_FOLDER}"/${CPI_FILE} | gzip -d -c - | ${SYNOPKG_PKGDEST}/bin/cpio -i --no-preserve-owner
  
  echo "#uncomment to expand Java max heap size beyond prescribed value (will survive upgrades)" > ${SYNOPKG_PKGDEST}/syno_package.vars
  echo "#you probably only want more than the recommended 1024M if you're backing up extremely large volumes of files" >> ${SYNOPKG_PKGDEST}/syno_package.vars
  echo "#USR_MAX_HEAP=1024M" >> ${SYNOPKG_PKGDEST}/syno_package.vars
  echo >> ${SYNOPKG_PKGDEST}/syno_package.vars

  cp ${TEMP_FOLDER}/${CP_EXTRACTED_FOLDER}/scripts/CrashPlanEngine ${OPTDIR}/bin
  cp ${TEMP_FOLDER}/${CP_EXTRACTED_FOLDER}/scripts/run.conf ${OPTDIR}/bin
  mkdir -p ${MANIFEST_FOLDER}/backupArchives    
  
  #save install variables which Crashplan expects its own installer script to create
  echo TARGETDIR=${SYNOPKG_PKGDEST} > ${VARS_FILE}
  echo BINSDIR=/bin >> ${VARS_FILE}
  echo MANIFESTDIR=${MANIFEST_FOLDER}/backupArchives >> ${VARS_FILE}
  #leave these ones out which should help upgrades from Code42 to work (based on examining an upgrade script)
  #echo INITDIR=/etc/init.d >> ${VARS_FILE}
  #echo RUNLVLDIR=/usr/syno/etc/rc.d >> ${VARS_FILE}
  echo INSTALLDATE=`date +%Y%m%d` >> ${VARS_FILE}
  [ "${WIZARD_JRE_CP}" == "true" ] && echo JAVACOMMON=${JRE_PATH}/bin/java >> ${VARS_FILE}
  [ "${WIZARD_JRE_SYS}" == "true" ] && echo JAVACOMMON=\${JAVA_HOME}/bin/java >> ${VARS_FILE}
  cat ${TEMP_FOLDER}/${CP_EXTRACTED_FOLDER}/install.defaults >> ${VARS_FILE}
  
  #remove temp files
  rm -r ${TEMP_FOLDER}/${CP_EXTRACTED_FOLDER}
  
  #add firewall config
  /usr/syno/bin/servicetool --install-configure-file --package /var/packages/${SYNOPKG_PKGNAME}/scripts/${SYNOPKG_PKGNAME}.sc > /dev/null
  
  #amend CrashPlanPROe client version
  [ "${SYNOPKG_PKGNAME}" == "CrashPlanPROe" ] && sed -i -r "s/^version=\".*(-.*$)/version=\"${CPPROE_VER}\1/" /var/packages/${SYNOPKG_PKGNAME}/INFO

  #are we transitioning an existing CrashPlan account to CrashPlan For Small Business?
  if [ "${SYNOPKG_PKGNAME}" == "CrashPlanPRO" ]; then
    if [ -e /var/packages/CrashPlan/scripts/start-stop-status ]; then
      /var/packages/CrashPlan/scripts/start-stop-status stop
      cp /var/lib/crashplan/.identity ${PUBLIC_FOLDER}/crashplan-identity.bak
      cp -R /var/packages/CrashPlan/target/conf/ ${OPTDIR}/
    fi  
  fi

  exit 0
}


preuninst ()
{
  `dirname $0`/start-stop-status stop

  exit 0
}


postuninst ()
{
  if [ -f ${SYNOPKG_PKGDEST}/syno_package.vars ]; then
    source ${SYNOPKG_PKGDEST}/syno_package.vars
  fi
  [ -e ${OPTDIR}/lib/libffi.so.5 ] && rm ${OPTDIR}/lib/libffi.so.5

  #delete symlink if it no longer resolves - PowerPC only
  if [ ! -e /lib/libffi.so.5 ]; then
    [ -L /lib/libffi.so.5 ] && rm /lib/libffi.so.5
  fi

  #remove firewall config
  if [ "${SYNOPKG_PKG_STATUS}" == "UNINSTALL" ]; then
    /usr/syno/bin/servicetool --remove-configure-file --package ${SYNOPKG_PKGNAME}.sc > /dev/null
  fi

 exit 0
}


preupgrade ()
{
  `dirname $0`/start-stop-status stop
  pre_checks
  #if identity exists back up config
  if [ -f /var/lib/crashplan/.identity ]; then
    mkdir -p ${SYNOPKG_PKGDEST}/../${SYNOPKG_PKGNAME}_data_mig/conf
    for FILE_TO_MIGRATE in ${UPGRADE_FILES}; do
      if [ -f ${OPTDIR}/${FILE_TO_MIGRATE} ]; then
        cp ${OPTDIR}/${FILE_TO_MIGRATE} ${SYNOPKG_PKGDEST}/../${SYNOPKG_PKGNAME}_data_mig/${FILE_TO_MIGRATE}
      fi
    done
    for FOLDER_TO_MIGRATE in ${UPGRADE_FOLDERS}; do
      if [ -d ${OPTDIR}/${FOLDER_TO_MIGRATE} ]; then
        mv ${OPTDIR}/${FOLDER_TO_MIGRATE} ${SYNOPKG_PKGDEST}/../${SYNOPKG_PKGNAME}_data_mig
      fi
    done
  fi

  exit 0
}


postupgrade ()
{
  #use the migrated identity and config data from the previous version
  if [ -f ${SYNOPKG_PKGDEST}/../${SYNOPKG_PKGNAME}_data_mig/conf/my.service.xml ]; then
    for FILE_TO_MIGRATE in ${UPGRADE_FILES}; do
      if [ -f ${SYNOPKG_PKGDEST}/../${SYNOPKG_PKGNAME}_data_mig/${FILE_TO_MIGRATE} ]; then
        mv ${SYNOPKG_PKGDEST}/../${SYNOPKG_PKGNAME}_data_mig/${FILE_TO_MIGRATE} ${OPTDIR}/${FILE_TO_MIGRATE}
      fi
    done
    for FOLDER_TO_MIGRATE in ${UPGRADE_FOLDERS}; do
    if [ -d ${SYNOPKG_PKGDEST}/../${SYNOPKG_PKGNAME}_data_mig/${FOLDER_TO_MIGRATE} ]; then
      mv ${SYNOPKG_PKGDEST}/../${SYNOPKG_PKGNAME}_data_mig/${FOLDER_TO_MIGRATE} ${OPTDIR}
    fi
    done
    rmdir ${SYNOPKG_PKGDEST}/../${SYNOPKG_PKGNAME}_data_mig/conf
    rmdir ${SYNOPKG_PKGDEST}/../${SYNOPKG_PKGNAME}_data_mig
    
    #make CrashPlan log entry
    TIMESTAMP="`date "+%D %I:%M%p"`"
    echo "I ${TIMESTAMP} Synology Package Center updated ${SYNOPKG_PKGNAME} to version ${SYNOPKG_PKGVER}" >> ${LOG_FILE}
  fi
  
  exit 0
}
 

start-stop-status.sh

#!/bin/sh

#--------CRASHPLAN start-stop-status script
#--------package maintained at pcloadletter.co.uk


TEMP_FOLDER="`find / -maxdepth 2 -path '/volume?/@tmp' | head -n 1`"
MANIFEST_FOLDER="/`echo $TEMP_FOLDER | cut -f2 -d'/'`/crashplan" 
ENGINE_CFG="run.conf"
PKG_FOLDER="`dirname $0 | cut -f1-4 -d'/'`"
DNAME="`dirname $0 | cut -f4 -d'/'`"
OPTDIR="${PKG_FOLDER}/target"
PID_FILE="${OPTDIR}/${DNAME}.pid"
DLOG="${OPTDIR}/log/history.log.0"
CFG_PARAM="SRV_JAVA_OPTS"
JAVA_MIN_HEAP=`grep "^${CFG_PARAM}=" "${OPTDIR}/bin/${ENGINE_CFG}" | sed -r "s/^.*-Xms([0-9]+)[Mm] .*$/\1/"` 
SYNO_CPU_ARCH="`uname -m`"
TIMESTAMP="`date "+%D %I:%M%p"`"
FULL_CP="${OPTDIR}/lib/com.backup42.desktop.jar:${OPTDIR}/lang"
source ${OPTDIR}/install.vars
source /etc/profile
source /root/.profile


start_daemon ()
{
  #check persistent variables from syno_package.vars
  USR_MAX_HEAP=0
  if [ -f ${OPTDIR}/syno_package.vars ]; then
    source ${OPTDIR}/syno_package.vars
  fi
  USR_MAX_HEAP=`echo $USR_MAX_HEAP | sed -e "s/[mM]//"`

  #do we need to restore the identity file - has a DSM upgrade scrubbed /var/lib/crashplan?
  if [ ! -e /var/lib/crashplan ]; then
    mkdir /var/lib/crashplan
    [ -e ${OPTDIR}/conf/var-backup/.identity ] && cp ${OPTDIR}/conf/var-backup/.identity /var/lib/crashplan/
  fi

  #fix up some of the binary paths and fix some command syntax for busybox 
  #moved this to start-stop-status.sh from installer.sh because Code42 push updates and these
  #new scripts will need this treatment too
  find ${OPTDIR}/ -name "*.sh" | while IFS="" read -r FILE_TO_EDIT; do
    if [ -e ${FILE_TO_EDIT} ]; then
      #this list of substitutions will probably need expanding as new CrashPlan updates are released
      sed -i "s%^#!/bin/bash%#!/bin/sh%" "${FILE_TO_EDIT}"
      sed -i -r "s%(^\s*)(/bin/cpio |cpio ) %\1${OPTDIR}/bin/cpio %" "${FILE_TO_EDIT}"
      sed -i -r "s%(^\s*)(/bin/ps|ps) [^w][^\|]*\|%\1/bin/ps w \|%" "${FILE_TO_EDIT}"
      sed -i -r "s%\`ps [^w][^\|]*\|%\`ps w \|%" "${FILE_TO_EDIT}"
      sed -i -r "s%^ps [^w][^\|]*\|%ps w \|%" "${FILE_TO_EDIT}"
      sed -i "s/rm -fv/rm -f/" "${FILE_TO_EDIT}"
      sed -i "s/mv -fv/mv -f/" "${FILE_TO_EDIT}"
    fi
  done

  #use this daemon init script rather than the unreliable Code42 stock one which greps the ps output
  sed -i "s%^ENGINE_SCRIPT=.*$%ENGINE_SCRIPT=$0%" ${OPTDIR}/bin/restartLinux.sh

  #any downloaded upgrade script will usually have failed despite the above changes
  #so ignore the script and explicitly extract the new java code using the chrisnelson.ca method 
  #thanks to Jeff Bingham for tweaks 
  UPGRADE_JAR=`find ${OPTDIR}/upgrade -maxdepth 1 -name "*.jar" | tail -1`
  if [ -n "${UPGRADE_JAR}" ]; then
    rm -f ${OPTDIR}/*.pid > /dev/null 2>&1
 
    #make CrashPlan log entry
    echo "I ${TIMESTAMP} Synology extracting upgrade from ${UPGRADE_JAR}" >> ${DLOG}

    UPGRADE_VER=`echo ${SCRIPT_HOME} | sed -r "s/^.*\/([0-9_]+)\.[0-9]+/\1/"`
    #DSM 6.0 no longer includes unzip, use 7z instead
    unzip -o ${OPTDIR}/upgrade/${UPGRADE_VER}.jar "*.jar" -d ${OPTDIR}/lib/ || 7z e -y ${OPTDIR}/upgrade/${UPGRADE_VER}.jar "*.jar" -o${OPTDIR}/lib/ > /dev/null
    unzip -o ${OPTDIR}/upgrade/${UPGRADE_VER}.jar "lang/*" -d ${OPTDIR} || 7z e -y ${OPTDIR}/upgrade/${UPGRADE_VER}.jar "lang/*" -o${OPTDIR} > /dev/null
    mv ${UPGRADE_JAR} ${TEMP_FOLDER}/ > /dev/null
    exec $0
  fi

  #updates may also overwrite our native binaries
  [ -e ${OPTDIR}/bin/libffi.so.5 ] && cp -f ${OPTDIR}/bin/libffi.so.5 ${OPTDIR}/lib/
  [ -e ${OPTDIR}/bin/libjtux.so ] && cp -f ${OPTDIR}/bin/libjtux.so ${OPTDIR}/
  [ -e ${OPTDIR}/bin/jna-3.2.5.jar ] && cp -f ${OPTDIR}/bin/jna-3.2.5.jar ${OPTDIR}/lib/
  if [ -e ${OPTDIR}/bin/jna.jar ] && [ -e ${OPTDIR}/lib/jna.jar ]; then
    cp -f ${OPTDIR}/bin/jna.jar ${OPTDIR}/lib/
  fi

  #create or repair libffi.so.5 symlink if a DSM upgrade has removed it - PowerPC only
  if [ -e ${OPTDIR}/lib/libffi.so.5 ]; then
    if [ ! -e /lib/libffi.so.5 ]; then
      #if it doesn't exist, but is still a link then it's a broken link and should be deleted first
      [ -L /lib/libffi.so.5 ] && rm /lib/libffi.so.5
      ln -s ${OPTDIR}/lib/libffi.so.5 /lib/libffi.so.5
    fi
  fi

  #set appropriate Java max heap size
  RAM=$((`free | grep Mem: | sed -e "s/^ *Mem: *\([0-9]*\).*$/\1/"`/1024))
  if [ $RAM -le 128 ]; then
    JAVA_MAX_HEAP=80
  elif [ $RAM -le 256 ]; then
    JAVA_MAX_HEAP=192
  elif [ $RAM -le 512 ]; then
    JAVA_MAX_HEAP=384
  elif [ $RAM -le 1024 ]; then
    JAVA_MAX_HEAP=512
  elif [ $RAM -gt 1024 ]; then
    JAVA_MAX_HEAP=1024
  fi
  if [ $USR_MAX_HEAP -gt $JAVA_MAX_HEAP ]; then
    JAVA_MAX_HEAP=${USR_MAX_HEAP}
  fi   
  if [ $JAVA_MAX_HEAP -lt $JAVA_MIN_HEAP ]; then
    #can't have a max heap lower than min heap (ARM low RAM systems)
    JAVA_MAX_HEAP=$JAVA_MIN_HEAP
  fi
  sed -i -r "s/(^${CFG_PARAM}=.*) -Xmx[0-9]+[mM] (.*$)/\1 -Xmx${JAVA_MAX_HEAP}m \2/" "${OPTDIR}/bin/${ENGINE_CFG}"
  
  #disable the use of the x86-optimized external Fast MD5 library if running on ARM and PPC CPUs
  #seems to be the default behaviour now but that may change again
  [ "${SYNO_CPU_ARCH}" == "x86_64" ] && SYNO_CPU_ARCH="i686"
  if [ "${SYNO_CPU_ARCH}" != "i686" ]; then
    grep "^${CFG_PARAM}=.*c42\.native\.md5\.enabled" "${OPTDIR}/bin/${ENGINE_CFG}" > /dev/null \
     || sed -i -r "s/(^${CFG_PARAM}=\".*)\"$/\1 -Dc42.native.md5.enabled=false\"/" "${OPTDIR}/bin/${ENGINE_CFG}"
  fi

  #move the Java temp directory from the default of /tmp
  grep "^${CFG_PARAM}=.*Djava\.io\.tmpdir" "${OPTDIR}/bin/${ENGINE_CFG}" > /dev/null \
   || sed -i -r "s%(^${CFG_PARAM}=\".*)\"$%\1 -Djava.io.tmpdir=${TEMP_FOLDER}\"%" "${OPTDIR}/bin/${ENGINE_CFG}"

  #now edit the XML config file, which only exists after first run
  if [ -f ${OPTDIR}/conf/my.service.xml ]; then

    #allow direct connections from CrashPlan Desktop client on remote systems
    #you must edit the value of serviceHost in conf/ui.properties on the client you connect with
    #users report that this value is sometimes reset so now it's set every service startup 
    sed -i "s/<serviceHost>127\.0\.0\.1<\/serviceHost>/<serviceHost>0\.0\.0\.0<\/serviceHost>/" "${OPTDIR}/conf/my.service.xml"
    #default changed in CrashPlan 4.3
    sed -i "s/<serviceHost>localhost<\/serviceHost>/<serviceHost>0\.0\.0\.0<\/serviceHost>/" "${OPTDIR}/conf/my.service.xml"
    #since CrashPlan 4.4 another config file to allow remote console connections
    sed -i "s/127\.0\.0\.1/0\.0\.0\.0/" /var/lib/crashplan/.ui_info
     
    #this change is made only once in case you want to customize the friends' backup location
    if [ "${MANIFEST_PATH_SET}" != "True" ]; then

      #keep friends' backup data outside the application folder to make accidental deletion less likely 
      sed -i "s%<manifestPath>.*</manifestPath>%<manifestPath>${MANIFEST_FOLDER}/backupArchives/</manifestPath>%" "${OPTDIR}/conf/my.service.xml"
      echo "MANIFEST_PATH_SET=True" >> ${OPTDIR}/syno_package.vars
    fi

    #since CrashPlan version 3.5.3 the value javaMemoryHeapMax also needs setting to match that used in bin/run.conf
    sed -i -r "s%(<javaMemoryHeapMax>)[0-9]+[mM](</javaMemoryHeapMax>)%\1${JAVA_MAX_HEAP}m\2%" "${OPTDIR}/conf/my.service.xml"

    #make sure CrashPlan is not binding to the IPv6 stack
    grep "\-Djava\.net\.preferIPv4Stack=true" "${OPTDIR}/bin/${ENGINE_CFG}" > /dev/null \
     || sed -i -r "s/(^${CFG_PARAM}=\".*)\"$/\1 -Djava.net.preferIPv4Stack=true\"/" "${OPTDIR}/bin/${ENGINE_CFG}"
  else
    echo "Check the package log to ensure the package has started successfully, then stop and restart the package to allow desktop client connections." > "${SYNOPKG_TEMP_LOGFILE}"
  fi

  #increase the system-wide maximum number of open files from Synology default of 24466
  [ `cat /proc/sys/fs/file-max` -lt 65536 ] && echo "65536" > /proc/sys/fs/file-max

  #raise the maximum open file count from the Synology default of 1024 - thanks Casper K. for figuring this out
  #http://support.code42.com/Administrator/3.6_And_4.0/Troubleshooting/Too_Many_Open_Files
  ulimit -n 65536

  #ensure that Code 42 have not amended install.vars to force the use of their own (Intel) JRE
  if [ -e ${OPTDIR}/jre-syno ]; then
    JRE_PATH="`find ${OPTDIR}/jre-syno/ -name jre`"
    [ -z ${JRE_PATH} ] && JRE_PATH=${OPTDIR}/jre-syno
    sed -i -r "s|^(JAVACOMMON=).*$|\1\${JRE_PATH}/bin/java|" ${OPTDIR}/install.vars
    
    #if missing, set timezone and locale for dedicated JRE   
    if [ -z ${TZ} ]; then
      SYNO_TZ=`cat /etc/synoinfo.conf | grep timezone | cut -f2 -d'"'`
      #fix for DST time in DSM 5.2 thanks to MinimServer Syno package author
      [ -e /usr/share/zoneinfo/Timezone/synotztable.json ] \
       && SYNO_TZ=`jq ".${SYNO_TZ} | .nameInTZDB" /usr/share/zoneinfo/Timezone/synotztable.json | sed -e "s/\"//g"` \
       || SYNO_TZ=`grep "^${SYNO_TZ}" /usr/share/zoneinfo/Timezone/tzname | sed -e "s/^.*= //"`
      export TZ=${SYNO_TZ}
    fi
    [ -z ${LANG} ] && export LANG=en_US.utf8
    export CLASSPATH=.:${OPTDIR}/jre-syno/lib

  else
    sed -i -r "s|^(JAVACOMMON=).*$|\1\${JAVA_HOME}/bin/java|" ${OPTDIR}/install.vars
  fi

  source ${OPTDIR}/bin/run.conf
  source ${OPTDIR}/install.vars
  cd ${OPTDIR}
  $JAVACOMMON $SRV_JAVA_OPTS -classpath $FULL_CP com.backup42.service.CPService > ${OPTDIR}/log/engine_output.log 2> ${OPTDIR}/log/engine_error.log &
  if [ $! -gt 0 ]; then
    echo $! > $PID_FILE
    renice 19 $! > /dev/null
    if [ -z "${SYNOPKG_PKGDEST}" ]; then
      #script was manually invoked, need this to show status change in Package Center      
      [ -e ${PKG_FOLDER}/enabled ] || touch ${PKG_FOLDER}/enabled
    fi
  else
    echo "${DNAME} failed to start, check ${OPTDIR}/log/engine_error.log" > "${SYNOPKG_TEMP_LOGFILE}"
    echo "${DNAME} failed to start, check ${OPTDIR}/log/engine_error.log" >&2
    exit 1
  fi
}

stop_daemon ()
{
  echo "I ${TIMESTAMP} Stopping ${DNAME}" >> ${DLOG}
  kill `cat ${PID_FILE}`
  wait_for_status 1 20 || kill -9 `cat ${PID_FILE}`
  rm -f ${PID_FILE}
  if [ -z ${SYNOPKG_PKGDEST} ]; then
    #script was manually invoked, need this to show status change in Package Center
    [ -e ${PKG_FOLDER}/enabled ] && rm ${PKG_FOLDER}/enabled
  fi
  #backup identity file in case DSM upgrade removes it
  [ -e ${OPTDIR}/conf/var-backup ] || mkdir ${OPTDIR}/conf/var-backup 
  cp /var/lib/crashplan/.identity ${OPTDIR}/conf/var-backup/
}

daemon_status ()
{
  if [ -f ${PID_FILE} ] && kill -0 `cat ${PID_FILE}` > /dev/null 2>&1; then
    return
  fi
  rm -f ${PID_FILE}
  return 1
}

wait_for_status ()
{
  counter=$2
  while [ ${counter} -gt 0 ]; do
    daemon_status
    [ $? -eq $1 ] && return
    let counter=counter-1
    sleep 1
  done
  return 1
}


case $1 in
  start)
    if daemon_status; then
      echo ${DNAME} is already running with PID `cat ${PID_FILE}`
      exit 0
    else
      echo Starting ${DNAME} ...
      start_daemon
      exit $?
    fi
  ;;

  stop)
    if daemon_status; then
      echo Stopping ${DNAME} ...
      stop_daemon
      exit $?
    else
      echo ${DNAME} is not running
      exit 0
    fi
  ;;

  restart)
    stop_daemon
    start_daemon
    exit $?
  ;;

  status)
    if daemon_status; then
      echo ${DNAME} is running with PID `cat ${PID_FILE}`
      exit 0
    else
      echo ${DNAME} is not running
      exit 1
    fi
  ;;

  log)
    echo "${DLOG}"
    exit 0
  ;;

  *)
    echo "Usage: $0 {start|stop|restart|status|log}" >&2
    exit 1
  ;;

esac
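To make the heap sizing logic in start_daemon above easier to follow, here is a minimal standalone sketch of the same RAM-to-heap tiering (the `heap_for_ram` function name is mine, for illustration only):

```shell
#!/bin/sh
# Illustrative helper mirroring the heap tiers in start_daemon:
# given total system RAM in MB, print the -Xmx value (in MB) the
# package would select.
heap_for_ram ()
{
  if [ "$1" -le 128 ]; then echo 80
  elif [ "$1" -le 256 ]; then echo 192
  elif [ "$1" -le 512 ]; then echo 384
  elif [ "$1" -le 1024 ]; then echo 512
  else echo 1024
  fi
}

heap_for_ram 256    # prints 192
heap_for_ram 2048   # prints 1024
```

Remember that syno_package.vars can override the result with a larger USR_MAX_HEAP, and the script never allows the max heap below the -Xms value read from bin/run.conf.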
 
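The find/sed fix-up loop near the top of start_daemon can be tried safely on a throwaway file. This sketch demonstrates just the shebang substitution (the temp file is arbitrary; nothing here touches a real CrashPlan install):

```shell
#!/bin/sh
# Demo of the shebang rewrite applied to Code 42's scripts: busybox-based
# DSM systems have no bash, so #!/bin/bash lines are switched to #!/bin/sh.
TMP_SCRIPT=$(mktemp)
printf '#!/bin/bash\necho "hello"\n' > "$TMP_SCRIPT"

# the same substitution used in start-stop-status.sh
sed -i "s%^#!/bin/bash%#!/bin/sh%" "$TMP_SCRIPT"

head -n 1 "$TMP_SCRIPT"   # prints #!/bin/sh
rm -f "$TMP_SCRIPT"
```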

install_uifile & upgrade_uifile

[
  {
    "step_title": "Client Version Selection",
    "items": [
      {
        "type": "singleselect",
        "desc": "Please select the CrashPlanPROe client version that is appropriate for your backup destination server:",
        "subitems": [
          {
            "key": "WIZARD_VER_483",
            "desc": "4.8.3",
            "defaultValue": true
          },
          {
            "key": "WIZARD_VER_480",
            "desc": "4.8.0",
            "defaultValue": false
          },
          {
            "key": "WIZARD_VER_470",
            "desc": "4.7.0",
            "defaultValue": false
          },
          {
            "key": "WIZARD_VER_460",
            "desc": "4.6.0",
            "defaultValue": false
          },
          {
            "key": "WIZARD_VER_452",
            "desc": "4.5.2",
            "defaultValue": false
          },
          {
            "key": "WIZARD_VER_450",
            "desc": "4.5.0",
            "defaultValue": false
          },
          {
            "key": "WIZARD_VER_441",
            "desc": "4.4.1",
            "defaultValue": false
          },
          {
            "key": "WIZARD_VER_430",
            "desc": "4.3.0",
            "defaultValue": false
          },
          {
            "key": "WIZARD_VER_420",
            "desc": "4.2.0",
            "defaultValue": false
          },
          {
            "key": "WIZARD_VER_370",
            "desc": "3.7.0",
            "defaultValue": false
          },
          {
            "key": "WIZARD_VER_364",
            "desc": "3.6.4",
            "defaultValue": false
          },
          {
            "key": "WIZARD_VER_363",
            "desc": "3.6.3",
            "defaultValue": false
          },
          {
            "key": "WIZARD_VER_3614",
            "desc": "3.6.1.4",
            "defaultValue": false
          },
          {
            "key": "WIZARD_VER_353",
            "desc": "3.5.3",
            "defaultValue": false
          },
          {
            "key": "WIZARD_VER_341",
            "desc": "3.4.1",
            "defaultValue": false
          },
          {
            "key": "WIZARD_VER_33",
            "desc": "3.3",
            "defaultValue": false
          }
        ]
      }
    ]
  },
  {
    "step_title": "Java Runtime Environment Selection",
    "items": [
      {
        "type": "singleselect",
        "desc": "Please select the Java version which you would like CrashPlan to use:",
        "subitems": [
          {
            "key": "WIZARD_JRE_SYS",
            "desc": "Default system Java version",
            "defaultValue": false
          },
          {
            "key": "WIZARD_JRE_CP",
            "desc": "Dedicated installation of Java 8",
            "defaultValue": true
          }
        ]
      }
    ]
  }
]
 

Changelog:

  • 0047 30/Oct/17 – Updated dedicated Java version to 8 update 151, added support for additional Intel CPUs in x18 Synology products.
  • 0046 26/Aug/17 – Updated to CrashPlan PRO 4.9, added support for migration from CrashPlan For Home to CrashPlan For Small Business (CrashPlan PRO). Please read the Migration section on this page for instructions.
  • 0045 02/Aug/17 – Updated to CrashPlan 4.8.3, updated dedicated Java version to 8 update 144
  • 0044 21/Jan/17 – Updated dedicated Java version to 8 update 121
  • 0043 07/Jan/17 – Updated dedicated Java version to 8 update 111, added support for Intel Broadwell and Grantley CPUs
  • 0042 03/Oct/16 – Updated to CrashPlan 4.8.0, Java 8 is now required, added optional dedicated Java 8 Runtime instead of the default system one including 64bit Java support on 64 bit Intel CPUs to permit memory allocation larger than 4GB. Support for non-Intel platforms withdrawn owing to Code42’s reliance on proprietary native code library libc42archive.so
  • 0041 20/Jul/16 – Improved auto-upgrade compatibility (hopefully), added option to have CrashPlan use a dedicated Java 7 Runtime instead of the default system one, including 64bit Java support on 64 bit Intel CPUs to permit memory allocation larger than 4GB
  • 0040 25/May/16 – Added cpio to the path in the running context of start-stop-status.sh
  • 0039 25/May/16 – Updated to CrashPlan 4.7.0, at each launch forced the use of the system JRE over the CrashPlan bundled Intel one, added Maven build of JNA 4.1.0 for ARMv7 systems consistent with the version bundled with CrashPlan
  • 0038 27/Apr/16 – Updated to CrashPlan 4.6.0, and improved support for Code 42 pushed updates
  • 0037 21/Jan/16 – Updated to CrashPlan 4.5.2
  • 0036 14/Dec/15 – Updated to CrashPlan 4.5.0, separate firewall definitions for management client and for friends backup, added support for DS716+ and DS216play
  • 0035 06/Nov/15 – Fixed the update to 4.4.1_59, new installs now listen for remote connections after second startup (was broken from 4.4), updated client install documentation with more file locations and added a link to a new Code42 support doc
    EITHER completely remove and reinstall the package (which will require a rescan of the entire backup set) OR alternatively please delete all except for one of the failed upgrade numbered subfolders in /var/packages/CrashPlan/target/upgrade before upgrading. There will be one folder for each time CrashPlan tried and failed to start since Code42 pushed the update
  • 0034 04/Oct/15 – Updated to CrashPlan 4.4.1, bundled newer JNA native libraries to match those from Code42, PLEASE READ UPDATED BLOG POST INSTRUCTIONS FOR CLIENT INSTALL this version introduced yet another requirement for the client
  • 0033 12/Aug/15 – Fixed version 0032 client connection issue for fresh installs
  • 0032 12/Jul/15 – Updated to CrashPlan 4.3, PLEASE READ UPDATED BLOG POST INSTRUCTIONS FOR CLIENT INSTALL this version introduced an extra requirement, changed update repair to use the chrisnelson.ca method, forced CrashPlan to prefer IPv4 over IPv6 bindings, removed some legacy version migration scripting, updated main blog post documentation
  • 0031 20/May/15 – Updated to CrashPlan 4.2, cross compiled a newer cpio binary for some architectures which were segfaulting while unpacking main CrashPlan archive, added port 4242 to the firewall definition (friend backups), package is now signed with repository private key
  • 0030 16/Feb/15 – Fixed show-stopping issue with version 0029 for systems with more than one volume
  • 0029 21/Jan/15 – Updated to CrashPlan version 3.7.0, improved detection of temp folder (prevent use of /var/@tmp), added support for Annapurna Alpine AL514 CPU (armhf) in DS2015xs, added support for Marvell Armada 375 CPU (armhf) in DS215j, abandoned practical efforts to try to support Code42’s upgrade scripts, abandoned inotify support (realtime backup) on PowerPC after many failed attempts with self-built and pre-built jtux and jna libraries, back-merged older libffi support for old PowerPC binaries after it was removed in 0028 re-write
  • 0028 22/Oct/14 – Substantial re-write:
    Updated to CrashPlan version 3.6.4
    DSM 5.0 or newer is now required
    libjnidispatch.so taken from Debian JNA 3.2.7 package with dependency on newer libffi.so.6 (included in DSM 5.0)
    jna-3.2.5.jar emptied of irrelevant CPU architecture libs to reduce size
    Increased default max heap size from 512MB to 1GB on systems with more than 1GB RAM
    Intel CPUs no longer need the awkward glibc version-faking shim to enable inotify support (for real-time backup)
    Switched to using root account – no more adding account permissions for backup, package upgrades will no longer break this
    DSM Firewall application definition added
    Tested with DSM Task Scheduler to allow backups between certain times of day only, saving RAM when not in use
    Daemon init script now uses a proper PID file instead of Code42’s unreliable method of using grep on the output of ps
    Daemon init script can be run from the command line
    Removal of bash binary dependency now Code42’s CrashPlanEngine script is no longer used
    Removal of nice binary dependency, using BusyBox equivalent renice
    Unified ARMv5 and ARMv7 external binary package (armle)
    Added support for Mindspeed Comcerto 2000 CPU (comcerto2k – armhf) in DS414j
    Added support for Intel Atom C2538 (avoton) CPU in DS415+
    Added support to choose which version of CrashPlan PROe client to download, since some servers may still require legacy versions
    Switched to .tar.xz compression for native binaries to reduce web hosting footprint
  • 0027 20/Mar/14 – Fixed open file handle limit for very large backup sets (ulimit fix)
  • 0026 16/Feb/14 – Updated all CrashPlan clients to version 3.6.3, improved handling of Java temp files
  • 0025 30/Jan/14 – glibc version shim no longer used on Intel Synology models running DSM 5.0
  • 0024 30/Jan/14 – Updated to CrashPlan PROe 3.6.1.4 and added support for PowerPC 2010 Synology models running DSM 5.0
  • 0023 30/Jan/14 – Added support for Intel Atom Evansport and Armada XP CPUs in new DSx14 products
  • 0022 10/Jun/13 – Updated all CrashPlan client versions to 3.5.3, compiled native binary dependencies to add support for Armada 370 CPU (DS213j), start-stop-status.sh now updates the new javaMemoryHeapMax value in my.service.xml to the value defined in syno_package.vars
  • 0021 01/Mar/13 – Updated CrashPlan to version 3.5.2
  • 0020 21/Jan/13 – Fixes for DSM 4.2
  • 018 Updated CrashPlan PRO to version 3.4.1
  • 017 Updated CrashPlan and CrashPlan PROe to version 3.4.1, and improved in-app update handling
  • 016 Added support for Freescale QorIQ CPUs in some x13 series Synology models, and installer script now downloads native binaries separately to reduce repo hosting bandwidth, PowerQUICC PowerPC processors in previous Synology generations with older glibc versions are not supported
  • 015 Added support for easy scheduling via cron – see updated Notes section
  • 014 DSM 4.1 user profile permissions fix
  • 013 implemented update handling for future automatic updates from Code 42, and incremented CrashPlanPRO client to release version 3.2.1
  • 012 incremented CrashPlanPROe client to release version 3.3
  • 011 minor fix to allow a wildcard on the cpio archive name inside the main installer package (to fix CP PROe client since Code 42 Software had amended the cpio file version to 3.2.1.2)
  • 010 minor bug fix relating to daemon home directory path
  • 009 rewrote the scripts to be even easier to maintain and unified as much as possible with my imminent CrashPlan PROe server package, fixed a timezone bug (tightened regex matching), moved the script-amending logic from installer.sh to start-stop-status.sh with it now applying to all .sh scripts each startup so perhaps updates from Code42 might work in future, if wget fails to fetch the installer from Code42 the installer will look for the file in the public shared folder
  • 008 merged the 14 package scripts each (7 for ARM, 7 for Intel) for CP, CP PRO, & CP PROe – 42 scripts in total – down to just two! ARM & Intel are now supported by the same package, Intel synos now have working inotify support (Real-Time Backup) thanks to rwojo’s shim to pass the glibc version check, upgrade process now retains login, cache and log data (no more re-scanning), users can specify a persistent larger max heap size for very large backup sets
  • 007 fixed a bug that broke CrashPlan if the Java folder moved (if you changed version)
  • 006 installation now fails without User Home service enabled, fixed Daylight Saving Time support, automated replacing the ARM libffi.so symlink which is destroyed by DSM upgrades, stopped assuming the primary storage volume is /volume1, reset ownership on /var/lib/crashplan and the Friends backup location after installs and upgrades
  • 005 added warning to restart daemon after 1st run, and improved upgrade process again
  • 004 updated to CrashPlan 3.2.1 and improved package upgrade process, forced binding to 0.0.0.0 each startup
  • 003 fixed ownership of /volume1/crashplan folder
  • 002 updated to CrashPlan 3.2
  • 001 30/Jan/12 – initial public release
 
 

6,672 thoughts on “CrashPlan packages for Synology NAS”

  1. Horst

    For a few days now CrashplanPRO (CPSB) has been reporting that it is downloading a new version, but 4.9.0047 is already installed. The upgrade process fails and the service stops. What’s wrong? Any tip is highly appreciated! Thanks, Horst

    Reply
  2. David

    Does anyone know when these changes (e.g. discontinuation of headless support) take effect? My Crashplan Pro service seems to still be working fine.

    Reply
    1. CrashOver1D

      Already “active” for several days for me, since the app keeps trying to update to the 1506661200660_4347 version and failing. Don’t know what to do now :(

      Reply
    2. SJL

      Looks like these changes have happened. I no longer have the familiar desktop program/application. I can no longer access the Synology headless CrashPlan.

      Reply
  3. Al

    My headless client has been offline for about 6 days now and I don’t know what happened. Does anyone have a workaround? I can reinstall the package and it will run, but it appears to pull down an update, and after that it stops working. Any pointers would be appreciated

    Reply
  4. Paul

    It looks as though there’s been an update to the 4.9.0 software which has automatically downloaded to my NAS, and now it crashes. If I re-load and login from my client it says CrashPlan PRO failed to apply an upgrade and will try automatically in one hour.

    Reply
  5. Al L (@mghtymeatshield)

    My package auto-updated today to version 6 where they discontinued headless operation. Patters’ package no longer works on my Synology as a result. I should have blocked the auto-upgrade process. But thankfully my Synology can run Docker and I managed to get jlesage’s Crashplan-Pro docker image to work.

    Reply
  6. X

    Anybody else has issues with the CrashPlan PRO upgrade – I tried unpacking it and updating the lib folder to no effect…

    Reply
  7. Troy

    My CrashPlan PRO for Small Business just stopped working. I looked at the logs and sure enough they tried to push an update. So I uninstalled and reinstalled, and went back to the trusted way we used to block updates, running the three commands we’ve used for the past few years.

    mv upgrade upgrade.tmp
    touch upgrade
    chmod 444 upgrade

    I launched the client side and it asked me to login with my account and then after authenticating properly now shows a Red Exclamation mark in a bubble and says “Upgrading CrashPlan PRO”. Clearly the upgrade is being blocked so it just sits there and won’t go any further.

    It seems we need another way of blocking updates OR every time they decide to push an update we will need Patters to put a new version for us to upgrade to.

    Thoughts?

    Reply
    1. runt

      The problem is, the new version (6.6.0) does not support the method we’ve been using to manage it. It requires an actual GUI on the box. And eventually, the last version that supported remote management will stop working.

      Reply
      1. ccanzone

        Hello Everyone,

        I decided to move to Docker. It was really easy.

        Install Docker package in Package Center
        SSH on Synology
        docker search crashplan
        docker pull jlesage/crashplan-pro
        mkdir -p /volume1/docker/appdata/crashplan-pro (this dir will store the settings, not your data)
        docker run -d \
        --name=crashplan-pro \
        -p 5800:5800 \
        -p 5900:5900 \
        -v /docker/appdata/crashplan-pro:/config:rw \
        -v volume1:/volume1:ro \
        -v volume2:/volume2:ro \
        -v volume3:/volume3:ro \
        -v volume4:/volume4:ro \
        jlesage/crashplan-pro

        (My data is spread around 4 volumes)

        Then access the CrashPlan Pro console through the address: http://myserver:5800 and follow the instructions from https://github.com/jlesage/docker-crashplan-pro#taking-over-existing-backup

        I spent no more than 15 minutes to have it up and running. Took some time to take over, but did not re-upload a single bit.

        Cheers!
        Canzone

      2. ed

        @ccanzone those were close, but not quite right for me.

        You: -v /docker/appdata/crashplan-pro:/config:rw
        Me: -v /volume1/docker/appdata/crashplan-pro:/config:rw
        Yours gives error at run that /docker/- does not exist

        You: -v volume1:/volume1:ro
        Me: -v /volume1:/volume1:ro
        Yours creates empty mount named volume1 with nothing in it. This was apparent from ash shell in the container. Mine I can see all contents of /volume1 in the ash container.

        I haven’t completed the deal yet by importing the backup from crashplan. One curiosity I have is your mounting volume as “ro”. Thinking CP requires write access to a volume. Oh well, maybe I’ll just smoke it with ro, see how it goes.

    2. Evan

      How are you GUI connecting to the old 4.9 version on the NAS? I now have 6.6 on my Windows machines and can’t manage to get that version to connect to 4.9 on the NAS.

      Reply
    3. Paul

      I would like to try this – can you point me in the direction of where I run these commands? Do I need to ssh to my disk station, and if so do I need to run them from a particular directory?

      Many thanks

      Reply
    4. georgekalogeris

      Where do I give these commands?

      mv upgrade upgrade.tmp
      touch upgrade
      chmod 444 upgrade

      if I give them in Putty I get mv: cannot stat ‘upgrade’: No such file or directory

      Reply
      1. Harley

        You need to apply them while in the CrashplanPRO target folder where the Upgrade folder is visible when you do a DIR command.

      2. craig1001

        I guess I’m just going by my own requirements which is less than 1TB for which I’m paying about US$12 but that is of course unlimited. Might have a crack at the Docker solution, though it looks complicated to get right. I wouldn’t miss the dogs breakfast of the invoices Digital River send to UK customers though. Price in US$, VAT in EUR to a customer whose currency is GBP! If they can convert currency……. ?

      3. Thor Egil Leirtrø

        And you need root access to do it. The sudo command will elevate you.
        I ran these commands, but the problem is still that CP stops after an hour when it attempts updating again.
        I then have to start it twice to get it running again. Not a very practical solution.

      4. Harley

        I’m no Linux expert by any means.

        But this works for me. I was stopping every 1 hour with the Upgrade trying to reoccur and then it stopped again.

        After running the commands listed above the Synology CrashPlanPRO is staying at 4.9 and working.

        My interpretation (and anyone else jump in) –
        mv upgrade upgrade.tmp – this command takes away the upgrade folder and renames it *.tmp
        touch upgrade – touch creates an upgrade entry in the directory
        chmod 444 upgrade – this command sets permissions so that owner/group/others are all only able to read the file. They cannot write to it or execute it

        After this executed the Synology no longer tries to Upgrade since it doesn’t exist anymore.

      5. georgekalogeris

        ⦁ sudo mv /var/packages/CrashPlanPRO/target/upgrade /var/packages/CrashPlanPRO/target/upgrade.tmp
        ⦁ sudo touch /var/packages/CrashPlanPRO/target/upgrade
        ⦁ sudo chmod 444 /var/packages/CrashPlanPRO/target/upgrade

    5. patters Post author

      My installation hasn’t had the upgrade pushed to it yet. I’ve changed the upgrade folder permissions to read only on both my syno and my Mac’s CP PRO client folder. Hopefully that will keep it running to the bitter end.

      Reply
  8. Gaskit

    Mine has stopped too. I thought we were supposed to be able to carry on with headless set up using the old version. Is there any way to turn off CP’s auto-update on Patters’ package?

    As an aside, I think all users of Patters package should be letting CP know how unhappy we are. Having been pushed from CP Home to CP Small Business with a commitment the service would continue, they have now removed functionality which looks like it screws us. :-(

    Reply
  9. Troy

    Actually if you just hit enter and try your username password again, it will launch CrashPlan Pro and if you block the updates like I mentioned above it will continue to work even though every hour it will try and update and complain that it cannot update the application. We still need a new version of CrashPlan Pro by Patters with the latest software.

    Reply
    1. ed

      Thanks troy. Don’t recall having to do that lock down update folder before on Home edition, but it works like a champ. CP is running headless again on my Nas and windows client connects fine.

      Reply
  10. fatboyw

    My CrashPlanPro stopped working just like everyone else’s. It has a problem doing the update but it could still run. The problem is that I could not connect the CrashPlan UI client on my PC to the NAS anymore, despite cross-checking that the .ui_info files were correct. I don’t know whether it was working properly.

    I have now stopped CrashPlanPro on the NAS and turned off auto-renewal. I was going to do it closer to the end of the month but since it no longer works, I turned it off now. I am now fully on Amazon Drive via Cloudsync with encryption.

    Thanks again to patters for all his work in the last few years.

    Reply
    1. ed

      Agree, patters has kept this community alive and thriving. Of course, I’ve just held my breath because there isn’t any redundancy should he throw in the towel, or worse. I’m very grateful.

      Reply
  11. craig1001

    Before I go hunting for a new backup solution, is this the actual end of CPPro running on a headless NAS then or is it possible that our hero can still get us out of the mire? It seems like Code42 are trying very hard to get rid of our business.

    Reply
    1. LVP78

      For me it was 2.4 days ago according to CrashPlan. I just started an Amazon Drive subscription and upload rates max out my FiOS bandwidth. So far, so good.

      Reply
    2. patters Post author

      It's pretty bizarre behaviour. They could easily have accommodated running on a NAS, simply by writing some decent detection for it and offering a different pricing plan if it was causing them economic pain. Instead they've chosen to steadily break the functioning of their product for years, and are now actively removing features. Ah well, I guess we should stop rewarding bad behaviour with our wallets.

      The whole "let's poison the well of our customer base by killing off the Home product" move is as crazy a decision as Microsoft's recent efforts (or lack thereof) in the mobile/wearables/Skype/VR spaces.

      Reply
  12. jamestx10

    If your NAS supports Synology's Virtual Machine Manager, then use that to create an Ubuntu VM and run CrashPlan from there. That is what I am using now, running the latest version of CP.

    Reply
  13. Archibald

    Hi.
    I have a DS710+ on which I have run CrashPlan Home for the past 4 years. Today I ordered a new Synology NAS (DS918+) and I would like to merge the CrashPlan archive of the 710+ into the 918+. How do I do that? Do I first need to upgrade to the Pro version on the 710+ (to get my reduced-price offer)? And after that, how do I make sure the new NAS (918+) will work with the account of the old NAS (710+), including CrashPlan's 1-year reduced-price offer?

    Or do you advise moving away from CrashPlan anyway, given the continued upgrade issues?

    Thx for advice.

    Reply
  14. Hal

    Okay that seems to be working. Thanks!

    Interestingly, the log says it will retry in an hour, but it actually retries every half hour.

    hal

    quoting:
    georgekalogeris
    December 16, 2017 at 11:39
    sudo mv /var/packages/CrashPlanPRO/target/upgrade /var/packages/CrashPlanPRO/target/upgrade.tmp
    sudo touch /var/packages/CrashPlanPRO/target/upgrade
    sudo chmod 444 /var/packages/CrashPlanPRO/target/upgrade

    Reply
  15. DirkM

    I've been attempting to use the above solutions to get my NAS backup restarted. Renaming the "upgrade" directory works as advertised, but when I try to use the Windows CrashPlan client to configure it, the client has automatically upgraded to 6.6. I uninstalled it and reinstalled 4.9 (I thought), and 6.6 comes up again. Have y'all had this problem?

    I’ve looked at the website and my NAS seems to be connected, but if for some reason I need to restart it on the NAS, is the 6.6 client going to work? My guess would be no.

    I am ready to jump ship, but my subscription is paid up until Sept 2018. I guess it’s worth the $60 I’m losing to go somewhere else and quit wasting time with this.

    Reply
  16. ed

    Even though my backup seems to be running fine after preventing the upgrade, CrashPlan email alerts are being generated that say the backup hasn't run in N days. I know this isn't true, since it just backed up some files, but does anyone else see the same behaviour with this tactical solution?

    Reply
    1. Nick

      I've been using CrashPlan in a Docker container with a VNC client for months now. It's been working very well, but I haven't moved to Pro yet.

      Reply
    2. Mike

      Same here Ed. CPP says in email there is no backup in N days but locally it appears to be uploading to the cloud. Guess a restore attempt is in order to see what’s what.

      Reply
      1. ed

        My backup is working, Mike. I had a similar issue to what you are describing. I think that when upgrading to Pro it started to sync with the remote store, and because I hadn't uninstalled Home, the Home version eventually started up and screwed up the syncing process. At least that's what I think happened. While the CP backup report suggested I had 5 TB in their cloud, I wasn't able to see any files to restore. I uninstalled Home, forced a resync for SB Pro, and all has been well ever since.

  17. Troy

    Same here. CrashPlan PRO says no backup for 7.7 days, but everything is synced and the last backup completed an hour ago. I'm sure it's tied to the new version and the fact that we're blocking it. The only person who can save us is Patters at this point.

    Reply
  18. gaskit

    It looks like they have updated the release notes for v6.6.0 and deleted the bit that said "….. previous versions of the CrashPlan for Small Business app would still function in [headless] configurations." Now it just says headless is unsupported.
    As a stop-gap, I have a Linux box that runs MythTV, so I might mount the Syno on it and try the CP-SB app from there, but I'd much rather have something running directly on the DiskStation. These are not happy times :-(

    Reply
  19. georgekalogeris

    Please, Code42, come out and say it: "Synology users, you HAVE to leave"
    so we're not hoping anymore…
    It's very stressful.

    Reply
    1. Paul

      This seems to be the solution for me – I’ve successfully installed the docker package and am able to access the v6 software via the browser – no headless solution required. Phew!

      Reply
      1. Nick

        It's actually a lot easier: you just need to map the drives to the Docker container. It's how I back up now.

      2. Bagu

        Sorry, but I do not understand how it works, whereas the patters package came with very clear and precise instructions.

      3. Nick

        A Docker container is a self-contained wrapper for the app. It takes about 5 minutes to set up on the Synology. You only need to map the drives into the container. That's it: it runs itself, and you don't need to faff around with changing config files for the connection.

      1. Bagu

        Here is the command line which seems to work:
        docker run -d --name Crashplan -p 5800:5800 -p 5900:5900 -e USER_ID=0 -e GROUP_ID=0 -v /volume1/docker/appdata/crashplan-pro:/config:rw -v /volume1:/volume1:ro jlesage/crashplan-pro

        Please help us adapt it if it's wrong somewhere…

      2. Slidermike

        If someone who has successfully implemented the docker/crashplan would be kind enough to provide a step by step for the rest of us who have little to no docker experience that would be awesome.
        Thank you in advance.

      3. runt

        I tried it from the GUI. I'm too tired to dig into the CLI much today. I have been mounting the shares from the two Synology NASes we have in our office on an Ubuntu machine, so I think I will keep doing that for now.

      4. Bagu

        The command line I provided is to be run in an SSH shell. I really understand why you are tired; there is no step-by-step, no-brainer guide to make this work the way the patters package does.
        I think I've succeeded in making it work; if so, I'll write a guide in French and English.
        But as I said before, I hope there is no error in my command line.

      5. runt

        It's not just the CrashPlan stuff that makes me tired. Kids and other stuff in my non-work life add to it, as does stuff in my work life.

      6. Dezmond

        Does this mean you need to run it manually from the shell every time?
        docker run -d --name Crashplan -p 5800:5800 -p 5900:5900 -e USER_ID=0 -e GROUP_ID=0 -v /volume1/docker/appdata/crashplan-pro:/config:rw -v /volume1:/volume1:ro jlesage/crashplan-pro

        Or is there a way to modify the Docker registry, image or container to add the mapping for /volume1?

      7. Alexander Lew

        @Dezmond: Once you run the command line, you can start and stop the container through the Synology Docker interface. You won’t need to use the command line again unless you remove the container entirely from Docker.

  20. Jeroen Boonen

    Don't know if it's relevant, but after removing CrashPlan from Package Center, migrating to CrashPlan PRO and running it from Docker (rungeict/crashplanpro:latest), everything has been running fine for some weeks now.
    Will this method be affected too?

    Reply
  21. ilkevinli

    1. Download the Docker app in Package Center
    2. Load the Docker app
    3. Click on Registry and search for CrashPlan
    4. Install jlesage/crashplan-pro
    5. Wait until you receive a message that the download is complete (499MB)
    6. Log in to the Synology NAS (e.g. via PuTTY) as root or an admin user. If you log in as an admin user, run the command sudo su
    7. Paste the command below. This will create the container in the Docker app and set all of the settings. This command is the easiest way to make everything work, but it gives root access to the container. There are ways to use a user other than root, but this is meant to be the easiest setup.
    8. Load any browser and go to http://[NAS IP]:5800
    9. Follow the directions for "Taking Over Existing Backup" from this link:
    https://hub.docker.com/r/jlesage/crashplan-pro/

    **Please make a donation for the user that created the docker container.

    Docker command:

    docker run -d --name Crashplan -p 5800:5800 -p 5900:5900 -e USER_ID=0 -e GROUP_ID=0 -v /volume1/docker/appdata/crashplan-pro:/config:rw -v /volume1:/volume1:rw jlesage/crashplan-pro

    Reply
    1. ilkevinli

      Since I can't edit my posts: I forgot one thing. You need to create the folders appdata and crashplan-pro in the docker folder using File Station BEFORE running the command, or just use a folder you have already created that you don't mind the Docker container storing files in.

      Also, WordPress cut off the IP in the browser step: it is the IP of your NAS plus :5800.
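If you'd rather create those folders over SSH than in File Station, a single command does it (the path assumes your Docker share lives on volume1, as in the command above):

```shell
# Create the whole config directory tree in one hit; -p creates any
# missing parent directories (docker, appdata) along the way
mkdir -p /volume1/docker/appdata/crashplan-pro
```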

      Reply
    2. Jon Etkins

      Do NOT simply cut and paste that Docker command; it may not work as-is. The site that hosts this blog converts two dashes into a single "em-dash" character, which Docker will barf on. After pasting, make sure the "name" parameter starts with two normal dash characters, not one long dash.

      Reply
    3. Richard

      Did somebody find a way to monitor the running backups?

      The home version had a taskbar icon (on your local machine) that provided the status and current upload speed and action from the headless instance (NAS).

      Is there any way to check this status info also on the docker instance?

      Reply
      1. Thor Egil Leirtrø

        The local web UI displays the current activity – click the Home icon and then Details. You will see if any backup sets are currently running a backup. No speed info though – but you have the DSM Resource Monitor widget for that info.

    4. Rodrigo

      Does it work with CPH? I'm more than pissed with CrashPlan and will move to Glacier, which is already uploading my 4TB of data. But until then I need protection, so I'm considering Docker on my DS916+.
      And I will stuff CrashPlan's hard drives until the very end. Damn it.

      Reply
  22. Mark

    Hello,
    I just installed the CrashPlan PRO package yesterday.
    Now the error message appears: "Failed to run the package service".
    Do you have this problem as well? Is it the problem that has been discussed here,
    or is it something else?

    Reply
  23. Paul

    1. Install docker as a package on your DiskStation if you don’t have it already installed.
    2. In Diskstation, load docker, go to Registry and search for crashplan-pro and download it.
    3. Go to File Station and create a new folder: docker/crashplan.
    4. In Control Panel | Terminal & SNMP make sure ssh is enabled.
    5. On your Mac / PC, open a new ssh / putty session to your DiskStation
    6. Type ‘sudo su’ and enter password
    7. Type 'docker run -d --name=Crashplan-PRO -p 5800:5800 -v /volume1/docker/crashplan:/config:rw -v /volume1:/volume1:ro jlesage/crashplan-pro'
    8. In diskstation, open Docker, go to Containers and check that Crashplan appears and is running
    9. On your Mac / PC, navigate to http://[DiskStation IP]:5800; it should load and ask you to log in to CrashPlan.
    10. Follow the instructions in this link for taking over an existing backup: https://hub.docker.com/r/jlesage/crashplan-pro/

    The above assumes that your DiskStation volume is named volume1. In CrashPlan, the folder you need to select for upload is root/volume1/

    Reply
    1. thezfunk

      I got Docker installed and the jlesage image installed, but when it comes to getting the image up and running, I fail with an error…

      docker: Error response from daemon: Bind mount failed: '/volume1/docker/crashplan' does not exists.

      The folder does indeed exist. I am not sure what is wrong, since I have never used Docker before.

      Reply
  24. craig1001

    I've been giving Code42 and CrashPlan both barrels (in a considered way) on Twitter (@code42) and Facebook (@CrashPlanPro). So far they haven't answered the main question of WHY they're preventing headless operation. Feel free to join in and ramp up the pressure. In the meantime, while CPP isn't working, I've been giving ElephantDrive a go. They have a Synology app, and it seems to do most of what CPP did for a similar price. The only downside I can see is the maximum file size of 15GB, so I can't back up my dev VMs.

    Reply
      1. craig1001

        Been too busy to get anywhere with it just yet (run up to Xmas etc.). Might try the Docker approach before jumping ship.

    1. John

      I switched to ElephantDrive at the beginning of this year. I can tell you it is expensive, and restoring is not as easy as with CrashPlan. I just wish Code42 would support headless already.

      Reply
  25. Hal

    So, I made the changes to "upgrade" which prevent CrashPlan from upgrading to 6.6. If I log in to my CrashPlan web interface I can see that the files are being uploaded. Nonetheless, CrashPlan is indicating that I haven't done a backup in 2 days (about the time I started blocking the upgrade).

    Reply
  26. Slartybart

    I've tried to install the Docker system and it seemed to do something. For those who want to try: I installed Docker through Package Center, then SSH'd into the Synology, typed sudo su (which gives root access), then ran the instructions referenced above. It did install and run something like Crashplandocker:latest, but it also seemed to install other things which were trying to run too. After not being able to log into it at http://MYIPADDRESS:5800, and being suspicious of the other things it was trying to do, I abandoned and deleted the whole lot. The good news, though, in common with other reports here: the patters package, although in a constant loop of trying to upgrade and failing, does seem to be backing up. I am getting the email warnings as well, but the client-side app assures me the backups are taking place.

    Reply
    1. Monte

      I've had it with trying to fix this so many times. I'm buying a NEW Synology NAS and making my own satellite backup location, maybe stored at a friend's house, after I upload/transfer my current NAS data to it. Then I'll just use the Synology software to transfer and back up from there. Tired of worrying about it. Thanks Patters for providing a way to use this. It's obvious they don't want to support this and don't want us using their product. Cheers!!!

      Reply
      1. georgekalogeris

        Cheers.
        After many vendors abandoning their "unlimited" storage, like Google, Amazon and CrashPlan,
        I am also giving up.
        Off-site backup from office to home and vice versa: it's free.
        We were paying for nothing.

  27. Al L (@mghtymeatshield)

    As mentioned by others, if your Synology can run Docker, I would highly recommend using the Docker image for CrashPlan Pro by JLeSage:
    https://github.com/jlesage/docker-crashplan-pro

    The way I got it working on my Synology was as follows:
    1) Stop Patters’ CrashPlan app.
    2) Install Docker onto Synology.
    3) From the Docker Interface, download the Jlesage image for CrashPlan Pro.
    4) Enable SSH access on your Synology.
    5) Log in to your Synology using an SSH client like PuTTY.
    6) Enter the following command at a minimum to get it operational (there are other variables you can use to set the Java Heap Size, Secure the VNC access, and set your local timezone):
    sudo docker run -d --name=crashplan-pro -e USER_ID=0 -e GROUP_ID=0 -p 5800:5800 -p 5900:5900 -v /volume1/docker/appdata/crashplan-pro:/config:rw -v /volume1/:/volume1:ro jlesage/crashplan-pro
    7) The sudo command will prompt you to enter your password, so enter it and let the process run.
    8) Disconnect from your SSH session and disable SSH access on your Synology.
    9) You should now see the container running in Docker.
    10) Log into the web GUI by pointing a browser window to http://[local IP address of synology]:5800
    11) Sign into your CrashPlan account through the web GUI you just accessed.
    12) Click the “Replace Existing” button to start the wizard.
    13) Skip “Step 2 – File Transfer”.
    14) Once done with the wizard, go to your device’s details and click Manage Files. Since we mapped volume1 to volume1, the Docker CrashPlan should automatically see the files/folders you were backing up from Patters’ CrashPlan.
    15) Perform a backup and the system will verify it's the same data, but it shouldn't require re-uploading everything.
    16) You can close the browser window and you’re done! As long as the Docker Container is up and running, you’ll be backing up to CrashPlan. Any changes to your backup set like adding new folders to your backup will be done through the web GUI.

    A few notes:
    1) This works differently from Patters' application because, by using Docker, the program runs in a lightweight Linux environment that from CrashPlan's viewpoint is an actual Linux install.
    2) -e USER_ID=0 -e GROUP_ID=0 essentially runs it as root.
    3) -e CRASHPLAN_SRV_MAX_MEM=[enter value here] sets the Java Heap Size.
    4) -e SECURE_CONNECTION=1 makes the web page for the GUI use HTTPS.
    5) -e VNC_PASSWORD=[enter a password here] makes it so that you need to input that password into the webpage in order to access the GUI.
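Putting the optional variables from the notes together, a fuller invocation might look like the sketch below. The heap size, password, and paths are example values to adapt, not recommendations:

```shell
# Example values only: a 2048M heap (note the required unit suffix),
# HTTPS for the web GUI, and a password gating GUI access.
# Adapt the paths and values to your own setup before running.
sudo docker run -d --name=crashplan-pro \
  -e USER_ID=0 -e GROUP_ID=0 \
  -e CRASHPLAN_SRV_MAX_MEM=2048M \
  -e SECURE_CONNECTION=1 \
  -e VNC_PASSWORD=changeme \
  -p 5800:5800 -p 5900:5900 \
  -v /volume1/docker/appdata/crashplan-pro:/config:rw \
  -v /volume1:/volume1:ro \
  jlesage/crashplan-pro
```

Type the dashes by hand if pasting; this blog tends to turn double dashes into em-dashes, which Docker rejects.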

    Reply
    1. Alexander Lew

      I forgot to include that before step 6 you need to manually create the folders /docker/appdata/crashplan-pro
      on volume1 (or whichever volume you choose; just make sure you change the command line appropriately).

      Reply
    2. ccanzone

      Hi,
      After changing the heap size, it stopped working. I got this in the log file (/volume1/docker/appdata/crashplan-pro/log/engine_output.log):
      "Initial heap size set to a larger value than the maximum heap size"

      I read all over the web and tried tons of tweaks, and nothing helped. I had to delete everything and start from scratch, this time not changing the original 1024M (1GB) heap size.

      My backup is around 4TB and I had to increase the heap size in Synology's CrashPlan package, so I know that at some point in the near future I'll face the issue again. If someone figures out how to change the heap size in Docker, please comment.

      Thanks,
      Canzone
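Since a container's environment is fixed when it is created, the usual way to change CRASHPLAN_SRV_MAX_MEM is to remove the container and recreate it with the new value; CrashPlan's identity and settings survive in the host-side /config folder. A sketch, assuming the container name and paths used elsewhere in this thread (3072M is just an example value):

```shell
# Stop and remove the existing container; the CrashPlan identity and
# settings live in the host-side /config bind mount, so they survive
docker stop crashplan-pro
docker rm crashplan-pro

# Recreate it with a larger heap; keep the M (megabytes) unit suffix,
# since a bare number causes the engine to fail at startup
docker run -d --name=crashplan-pro \
  -e USER_ID=0 -e GROUP_ID=0 \
  -e CRASHPLAN_SRV_MAX_MEM=3072M \
  -p 5800:5800 -p 5900:5900 \
  -v /volume1/docker/appdata/crashplan-pro:/config:rw \
  -v /volume1:/volume1:ro \
  jlesage/crashplan-pro
```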

      Reply
      1. zerox20

        I fought some of the heap-size issues initially too. I decided to split all of my backups into smaller sets, which helped me in the early stages of patters' CrashPlan, so I just left it like that and I have yet to run into issues with the 1024MB limit. Granted, I have plenty of RAM in my Synology. It was initially a PITA, but overall it made CrashPlan work much more nicely.

    3. Jo1911

      Hi Al L (@mghtymeatshield)

      First, I wish a merry Christmas to all.
      Thanks for your help: with your post, my Docker installation of CPSB was successful.
      Now I can access volume1 and all of my NAS.
      CPSB is a good and cheap option for backing up my 4TB NAS (a DS713+).

      ++

      J.

      Reply
  28. John

    Hi, when I ran this: “docker run -d –name=crashplan-pro -e USER_ID=0 -e GROUP_ID=0 -p 5800:5800 -p 5900:5900 -v /volume1/docker/appdata/crashplan-pro:/config:rw -v /volume1/:/volume1:ro jlesage/crashplan-pro”

    I get “docker: invalid reference format.”

    Reply
    1. Jon Etkins

      I got the same thing. It turns out that someone's web browser "helpfully" converted the two dashes at the start of the name parameter into an em-dash character, which Docker barfs on. Retype it as "--name crashplan-pro" (two normal dashes, without the quotes) and it works.

      Reply
      1. Jon Etkins

        Huh, it must be WordPress converting those dashes, because I typed dash-dash and it converted them to an em-dash. The "name" parameter should be prefixed by two dashes, not a single dash or an em-dash. You'll need to fix the dashes manually if you cut and paste from here.

  29. John

    BTW, I got it started without running the command line. I still get the same error when I try it with the command line.

    Reply
    1. Jon Etkins

      If you start it through the GUI, you won’t be able to specify the /volume1 mapping, because the GUI won’t let you define volume mappings at that level.

      Actually, that’s not *completely* true – you have to start it from the CLI the first time in order to define that mapping, but having done that, you can subsequently control it from the GUI like any other container.

      And yes, I have it working, and I can connect to the CrashPlan GUI at http://[my NAS's LAN IP address]:5800, though I haven't migrated my CP subscription yet, so I have not actually logged in to CP. If you are unable to connect, take a look at the virtual console to see if there are any glaring error messages that might help: select the container, click Details, then click the Terminal tab. Also check the Port Settings on the Overview tab to make sure ports 5800 and 5900 are mapped correctly.

      Reply
      1. Thor Egil Leirtrø

        I'm unable to mount /volume1 even from the command line. I get no error message; it is just not listed as a mounted volume in the GUI afterwards. Any hints?

        My command line:
        sudo docker run -d --name crashplan-pro -e USER_ID=0 -e GROUP_ID=0 -e CRASHPLAN_SRV_MAX_MEM=6144M -p 5800:5800 -p 5900:5900 -v /volume1/docker/appdata/crashplan-pro:/config:rw -v /volume1/:/volume1:ro jlesage/crashplan-pro

      2. Jon Etkins

        “I’m unable to mount /volume1 even from the command line. I get no error message – it is just not listed as a mounted volume from the GUI afterwards.”

        Have you actually tried accessing /volume1 from the Docker guest OS? I suspect that the GUI is simply refusing to – or incapable of – accepting or displaying volume mappings at that level. Mine doesn’t show the /volume1 mapping in the GUI either, but if I export the settings and examine the exported .json file with a regular text editor, I see that it is included, and if I run an “ls /volume1” command I see the contents just fine, so I know it is being mapped. (For those that don’t know, you can run commands within the guest container by opening the Details window, selecting the Terminal tab, clicking the Create dropdown arrow and selecting “Launch with command”, and entering the desired command.)
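For a quick check from an SSH session on the NAS itself, docker exec runs a command inside the running container, so you can see exactly what the container sees (the container name here assumes the one used in the earlier commands):

```shell
# List what the container actually sees at /volume1; if the bind mount
# worked, this prints your shared folders even when the Synology Docker
# GUI refuses to display the mapping
docker exec crashplan-pro ls /volume1
```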

      3. Thor Egil Leirtrø

        @Jon Etkins: No, I wasn't aware of that; I'm pretty new to Docker. But I tried it now, and it seems you are right: I can list the contents in the command window.

        I'll try to set up everything properly tomorrow. My patters CrashPlan decided today was a fine day to synchronize block information, so I'll just have to wait it out.

      4. Thor Egil Leirtrø

        Now running "Synchronizing block information" in the Docker instance. So far so good.
        It took me a few seconds to figure out how to get my crypto key pasted into the config (hint: there is a Clipboard button in the upper right corner of the screen).

    1. Jon Etkins

      Did you type the command by hand as I suggested? Cut-and-paste is going to give you an error. And in case you missed it, there is NO = sign in "- -name crashplan-pro" (nor is there a space between the two dashes; I just added one here to prevent WordPress munging them into an em-dash).

      Reply
  30. DirkM

    Trying the Docker method, I get "Bind mount failed: '/volume1/docker/appdata/crashplan-pro' does not exist." There are no folders under /volume1/docker except for @eaDir. I downloaded the latest jlesage image, so I don't know where it's putting them.

    Reply
    1. SJL

      On 2017-12-19 ccanzone described the necessary step. You need to create the appdata directory and appdata/crashplan-pro directory. With root authority:
      mkdir -p /volume1/docker/appdata
      mkdir -p /volume1/docker/appdata/crashplan-pro

      Reply
      1. Jon Etkins

        Actually you only need that second command:
        mkdir -p /volume1/docker/appdata/crashplan-pro
        because the -p parameter tells it to create any intermediate directories necessary, so it will create both the appdata and crashplan-pro directories in one hit.

  31. zerox20

    Can we remove the old package from the Synology then, or do we have to leave it stopped? I assume Java has to stay installed too.

    Reply
  32. Kev

    I cannot for the life of me get this working; so frustrating. It just keeps returning "invalid reference format".
    Help please!

    Reply
  33. tomwhelan

    I've installed the LeSage Docker instance. The container is running, I can see /volume1 mounted, and the ports appear properly mapped, but when I access the CrashPlan page at http://mySyno:5800, it says "Code42 cannot connect to its background service".

    In Docker, the Terminal window for crashplan-pro says "[CrashPlanEngine] starting …" over and over.
    Thor mentioned entering a key somewhere; it's not clear to me where you do that.

    Reply
      1. Thor Egil Leirtrø

        OK, it seems like the problem is solved. I found a @docker directory inside my volume1, and after I removed that one from the backup set, everything went back to normal.
        Does anyone have any idea what this directory is doing?

    1. ccanzone

      Did you use the "-e CRASHPLAN_SRV_MAX_MEM=" parameter in the Docker command? If you did, check whether you included the memory unit (for example, M for megabytes). Mine, for example, is "3072M". I was having the exact same issue because I had specified only "3072".

      Reply
      1. tomwhelan

        Thanks, ccanzone! I didn't add a unit of measure to -e CRASHPLAN_SRV_MAX_MEM when I started it. I restarted the container with -e CRASHPLAN_SRV_MAX_MEM=<mem-value>M, and now I can log in to the CrashPlan UI. I'll check the logs to see if backup is working as expected.

        I'm happy about this, because CrashPlan still looks better than the competition as a backup service. The pricing plans for Glacier and S3 are way too complicated. Hyper Backup can create versioned backups to different cloud storage locations, but it has issues. IDrive is inexpensive, but users report problems. So I'm happy to have a working CrashPlan installation again.
        And thanks again, patters!

  34. B.Good

    Has anybody recently had their custom encryption key disappear for their Synology NAS? I’ve used the same custom key on my DS412+ and two Windows PCs for over 2 years now. Suddenly, connecting to CrashPlan on my DS412+ is asking me to enter my custom key. When I tried to import the key I use, I was warned it would delete my backup sets, so I aborted that attempt. I then went under “Restore” to see if I could get to my files, and I’m told my encryption key is incorrect! Restore works fine for my 2 Windows PCs and in the recent past it worked fine for my Synology. It looks like this started sometime in December. I really don’t want to have to start over with 300 GB of data. Why would it just suddenly lose track of the key?

    Reply
      1. B.Good

        Mine seems to think the key isn't correct, but it's the same one I used on all 3 of my computers, and it works fine on the other 2. All 3 worked fine until mid-December. So I'm stumped as to what could have happened, or what to do about it, short of starting over from scratch!

  35. X

    Wasted some good hours today trying the Docker solution.
    Maybe it's my DSM 5.2 (or some settings), but it simply didn't mount /volume1 (it appeared but was empty).
    What worked, after a lot of trial and error, was to mount volume1 as both storage and volume1 (in that order) when creating the Docker instance; storage is empty while volume1 is now visible.

    Hope this helps,

    P.S. The CrashPlan experience has been getting worse year by year; I plan to leave them once I find a better alternative. Arq looks promising, but it's not available for Linux.
    P.P.S. Patters thanks a LOT for your support over the years.

    Reply
    1. David

      Hi X.

      I encourage you to push through with the Docker solution. I had a few glitches getting it working myself, but everything I had trouble with was down to my misunderstanding of how Docker works. Overall, it seems more robust and supportable than the CP Home solution. I'm guessing that CP upgrades to it will be far easier than they were with the earlier setup.

      Of course, thanks to both patters, for making that version work, and to jlesage, for the docker setup and free support!

      (@jlesage, like others, I would welcome a donate link, which I looked for on github but didn’t find (I think they have changed their donation approach)).

      Reply
  36. Greg

    Patters' package has been back working for me for a few days now. For about a week, my DS716+ has been backing up without any problems or errors. Am I the only one?

    Reply
    1. Hal

      Mine has been backing up fine since I blocked the update directory. But who knows how long CrashPlan will support version 4.9.

      Reply
    2. cruisinforgold

      I'm also still using Patters' package at 4.9 with the block on the auto-upgrade process. I had to force the PC CrashPlan PRO client back to version 4.9 to get GUI access to the Synology CrashPlan PRO service, though.

      Reply
    3. Jon Etkins

      You're not alone; mine continues to soldier on with updates disabled. I've looked at the Docker solution, but I still have eight months until my CP Home sub expires, so I'll stick with it until I figure out a replacement solution for all our household PCs that currently use CP to back up to the Syno peer-to-peer.

      Reply
    4. craig1001

      I restarted mine out of curiosity a few days ago and it’s still running. Something’s happened somewhere. Can Code42 affect it remotely I wonder?

      Reply
      1. Nicholas Riley

        My CrashPlan *for home* (I’m still doing peer-to-peer backups until the bitter end) just broke a couple days ago on my Synology box. I didn’t read this entire thread (would be nice if the blog post were updated!) so I ended up uninstalling and reinstalling, but now I’m stuck in the same place:

        I 01/13/18 01:09PM CrashPlan started, version 4.8.3, GUID […]
        I 01/13/18 01:09PM Downloading a new version of CrashPlan.
        I 01/13/18 01:09PM Download of upgrade complete – version 1436674800484.
        I 01/13/18 01:09PM Installing upgrade – version 1436674800484
        I 01/13/18 01:10PM Upgrade installed – version 1436674800484
        I 01/13/18 01:11PM CrashPlan stopped, version 4.8.3, GUID […]
        I 01/13/18 01:11PM Synology extracting upgrade from /var/packages/CrashPlan/target/upgrade/1436674800484_15.jar

        Restarting Service
        Could not find JAR file /volume2/@appstore/CrashPlan/bin/../lib/com.backup42.desktop.jar
        Service Restart Complete

        Is there any way to get back a working CrashPlan installation now?

      2. Nicholas Riley

        Sorry to respond to my own question, but the key was to not start the package immediately after installation, and instead to replace the 'upgrade' directory with a file as described above. I now get "CrashPlan Upgrade Failed" in the console, but it seems to be resuming my backup. Hopefully this will last me until I can replace it with something else (why, after so many years, is there still nothing equivalent to CrashPlan, anyway?).

  37. Phil

    I just reinstalled the package with version 4.9 and then disabled the upgrades. Seems to be working now.

    Reply
  38. mmaandag

    Here as well – it was out for some days, but then started to work again. Seems that another update has fixed it?

    Reply
  39. Daniel

    I’ve tried Docker (jlesage) and it worked fine for a few days, but then I realised that none of my photos from 2017 had been backed up. Then it discovered 2000 new files and thrashed my Synology.

    Yesterday it made the whole system really slow, and in the end it came up with an error saying it cannot connect to the background service. I restarted the Docker container and went to bed.

    When I came back from work today I found all the hard drives thrashing in the Synology, lights going mental, and the system unresponsive. I mean I can’t even log in via SSH anymore! I’m trying to shut it down, but the blue light has been flashing for the past 15 minutes and it still hasn’t shut down. I unplugged the network cables; the only things left are the USB to the UPS device and the power cable.

    I’m afraid this thing will permanently damage my synology, and I don’t like that I can’t even access my system. The container was set with a RAM limit and a CPU limit of “low” so I don’t understand how it can cause so much damage, but it managed. I still don’t know if I can get my Synology back.

    Reply
    1. Daniel

      After about 30 minutes the Synology finally powered off. I got it back online, quickly killed all the Docker processes, stopped the CrashPlan PRO container, deleted all the images and uninstalled the Docker app.

      My Synology DiskStation is back at 1% CPU and 30% RAM usage. The hard drives are no longer thrashing.

      Maybe it’s ok for people with 8GB of RAM, but for my system with 2GB of RAM it’s scary. This never happened with patters’ package. I’ve cancelled my subscription (or at least tried to; it’s impossible to cancel on the CrashPlan site), but luckily the payment method was my PayPal account, which lets me cancel recurring payments.

      It was nice while it lasted, thanks Patters for all the good years, pity that Code42 made such a mess of their own product and worked so hard to get rid of their customers.

      Reply
  40. Mark

    Thanks to all for your advice WRT Docker. Had to add an additional volume (I back up locally to USB as well as to the cloud) but it works perfectly!

    For the record, my server stayed on 4.x and kept backing up to its destinations, but I couldn’t connect to it from the client anymore.

    Many thanks to patters for his excellent work and prompt reaction time to the many changes thrown at us by Code42 over the years.

    Reply
  41. Jack

    What should I do going forward?

    Hi

    I’m currently on patters CPH v4.8.0-0042 with updates blocked and my CPH expires in a few days (hey, I’ve been busy ;-) ) and I’m wondering what I should do now.

    Will I be able to convert patters to CPP (in other words, can I convert and install a CPP version that still allows headless operation) or is that option gone now?

    If I can install a version where headless runs, should I update to patters CPH v4.8.3-0047 and convert to CPP or just convert straight away?

    I will convert to the Docker solution (or maybe to a Virtual Machine Manager solution, but probably Docker). Assuming Docker, should I just convert to Docker for CPH and migrate that to CPP, or, if I can update patters CPH to patters CPP headless, should I do that first and then migrate straight to Docker CPP?

    Reply
  42. Wayne

    Got the Docker version of CrashPlan PRO running! It really wasn’t as bad as I thought it was going to be. If you are on CrashPlan Home, you must convert your account first, before you even touch your NAS. My pre-paid CP Home account is good through 01/04/2019, so I was hesitant to migrate, but went with it, and it continues through that date with a 75% discount for the following year into 2020. At that point in time, I’ll check to see what is new in cloud backup solutions for our NASes!

    Also, if you are getting the error “docker: invalid reference format.”, it means you probably copied and pasted the main docker run statement. DO NOT COPY AND PASTE! The blog formatting mangles it: type the entire statement in manually, with two ordinary dashes in front of name and a space in place of the equals sign.
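    For reference, this is the shape of the command Wayne is describing, typed out with real ASCII double hyphens. The jlesage/crashplan-pro image serves its UI over a browser-based VNC on port 5800 and expects `/config` and `/storage` mounts; the host paths here are assumptions to adjust to your own volumes.

    ```shell
    # Typed by hand, not pasted: two ASCII hyphens before "name",
    # and a space (not "=") before its value.
    # Host paths /volume1/... are assumptions; adjust to your setup.
    docker run -d \
      --name crashplan-pro \
      -p 5800:5800 \
      -v /volume1/docker/crashplan-pro:/config \
      -v /volume1:/storage:ro \
      jlesage/crashplan-pro
    ```

    Once the container is up, the CrashPlan UI is reachable in a browser at http://NAS-IP:5800.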

    Patters, thanks for your help over the years on getting CP to work on Synology to begin with!

    Reply
