CrashPlan packages for Synology NAS

UPDATE – CrashPlan For Home (green branding) was retired by Code 42 Software on 22/08/2017. See migration notes below to find out how to transfer to CrashPlan for Small Business on Synology at the special discounted rate.

CrashPlan is a popular online backup solution which supports continuous, real-time backup. With it your NAS can become even more resilient, particularly against the threat of ransomware.

There are now only two product versions:

  • Small Business: CrashPlan PRO (blue branding). Unlimited cloud backup subscription, $10 per device per month. Reporting via Admin Console. No peer-to-peer backups
  • Enterprise: CrashPlan PROe (black branding). Cloud backup subscription typically billed by storage usage, also available from third parties.

The instructions and notes on this page apply to both versions of the Synology package.

CrashPlanPRO-Windows

CrashPlan is a Java application which can be difficult to install on a NAS. Way back in January 2012 I decided to simplify it into a Synology package, since I had already created several others. It has been through many versions since that time, as the changelog below shows. Although it used to work on Synology products with ARM and PowerPC CPUs, it unfortunately became Intel-only in October 2016 due to Code 42 Software adding a reliance on some proprietary libraries.

Licence compliance is another challenge – Code 42’s EULA prohibits redistribution. I had to make the Synology package use the regular CrashPlan for Linux download (after the end user agrees to the Code 42 EULA). I then had to write my own script to extract this archive and mimic the Code 42 installer behaviour, but without the interactive prompts of the original.

 

Synology Package Installation

  • In Synology DSM’s Package Center, click Settings and add my package repository:
    Add Package Repository
  • The repository will automatically push its certificate to the NAS; this certificate is used to validate package integrity. Set the Trust Level to Synology Inc. and trusted publishers:
    Trust Level
  • Now browse the Community section in Package Center to install CrashPlan:
    Community-packages
    The repository only displays packages which are compatible with your specific model of NAS. If you don’t see CrashPlan in the list, then either your NAS model or your DSM version is not supported at this time. DSM 5.0 is the minimum supported version for this package, and an Intel CPU is required.
  • Since CrashPlan is a Java application, it needs a Java Runtime Environment (JRE) to function. It is recommended that you let the package install a dedicated Java 8 runtime. For licensing reasons I cannot include Java with this package, so you will need to agree to the licence terms and download it yourself from Oracle’s website. The package expects to find this .tar.gz file in a shared folder called ‘public’. If you try to install the package without it, the error message will indicate precisely which Java file you need for your system type, and it will provide a TinyURL link to the appropriate Oracle download page.
  • To install CrashPlan PRO you will first need to log into the Admin Console and download the Linux App from the App Download section and also place this in the ‘public’ shared folder on your NAS.
  • If you have a multi-bay NAS, use the Shared Folder control panel to create the shared folder called public (it must be all lower case). On single bay models this is created by default. Assign it with Read/Write privileges for everyone.
  • If you have trouble getting the Java or CrashPlan PRO app files recognised by this package, try downloading them with Firefox. It seems to be the only web browser that doesn’t try to uncompress the files, or rename them without warning. I also suggest that you leave the Java file and the public folder present once you have installed the package, so that you won’t need to fetch this again to install future updates to the CrashPlan package.
  • CrashPlan is installed in headless mode – backup engine only. It is configured by a desktop client, but operates independently of it.
  • The first time you start the CrashPlan package you will need to stop it and restart it before you can connect the client. This is because a config file that is only created on first run needs to be edited by one of my scripts. The engine is then configured to listen on all interfaces on the default port 4243 (a quick way to verify this is shown below).
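
To confirm that the engine is listening after that restart, one quick check from an SSH session on the NAS (assuming the default port has not been changed) is:

    netstat -an | grep 4243

You should see the port in a LISTEN state, bound to 0.0.0.0, before attempting a client connection.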
 

CrashPlan Client Installation

  • Once the CrashPlan engine is running on the NAS, you can manage it by installing CrashPlan on another computer and configuring it to connect to the NAS instance of the CrashPlan engine.
  • Make sure that you install the version of the CrashPlan client that matches the version running on the NAS. If the NAS version gets upgraded later, you will need to update your client computer too.
  • The Linux CrashPlan PRO client must be downloaded from the Admin Console and placed in the ‘public’ folder on your NAS in order to successfully install the Synology package.
  • By default the client is configured to connect to the CrashPlan engine running on the local computer. Run this command on your NAS from an SSH session:
    echo `cat /var/lib/crashplan/.ui_info`
    Note that those are backticks, not quotes. This will give you a port number (4243), followed by an authentication token, followed by the IP binding (0.0.0.0 means the server is listening for connections on all interfaces), e.g.:
    4243,9ac9b642-ba26-4578-b705-124c6efc920b,0.0.0.0
    port,--------------token-----------------,binding

    Copy this token value and use it to replace the token in the equivalent config file on the computer that will run the CrashPlan client – located here:
    C:\ProgramData\CrashPlan\.ui_info (Windows)
    “/Library/Application Support/CrashPlan/.ui_info” (Mac OS X installed for all users)
    “~/Library/Application Support/CrashPlan/.ui_info” (Mac OS X installed for single user)
    /var/lib/crashplan/.ui_info (Linux)
    You will not be able to connect the client unless its token matches the NAS token. On the client you also need to amend the IP address value after the token to match the Synology NAS IP address.
    So, using the example above and assuming that the Synology NAS has the IP 192.168.1.100, your computer’s CrashPlan client config file would be edited to:
    4243,9ac9b642-ba26-4578-b705-124c6efc920b,192.168.1.100
    (A scripted example of this edit for a Linux client is shown after this list.)
    If it still won’t connect, check that the ServicePort value is set to 4243 in the following files:
    C:\ProgramData\CrashPlan\conf\ui_(username).properties (Windows)
    “/Library/Application Support/CrashPlan/ui.properties” (Mac OS X installed for all users)
    “~/Library/Application Support/CrashPlan/ui.properties” (Mac OS X installed for single user)
    /usr/local/crashplan/conf (Linux)
    /var/lib/crashplan/.ui_info (Synology) – this value does change spontaneously if there’s a port conflict e.g. you started two versions of the package concurrently (CrashPlan and CrashPlan PRO)
  • As a result of the nightmarish complexity of recent product changes, Code42 has now published a support article with more detail on running headless systems, including config file locations on all supported operating systems and for ‘all users’ versus single-user installs.
  • You should disable the CrashPlan service on your computer if you intend only to use the client. In Windows, open the Services section in Computer Management and stop the CrashPlan Backup Service. In the service Properties set the Startup Type to Manual. You can also disable the CrashPlan System Tray notification application by removing it from Task Manager > More Details > Start-up Tab (Windows 8/Windows 10) or the All Users Startup Start Menu folder (Windows 7).
    To accomplish the same on Mac OS X, run the following commands one by one:

    sudo launchctl unload /Library/LaunchDaemons/com.crashplan.engine.plist
    sudo mv /Library/LaunchDaemons/com.crashplan.engine.plist /Library/LaunchDaemons/com.crashplan.engine.plist.bak

    The CrashPlan menu bar application can be disabled in System Preferences > Users & Groups > Current User > Login Items
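
To save editing files by hand on a Linux client, here is a minimal sketch of the token-copying step described above. It assumes a hypothetical NAS address of 192.168.1.100, that the SSH user can read /var/lib/crashplan/.ui_info on the NAS, and that the client uses the default Linux location for its own .ui_info file:

    # run on the Linux client computer
    NAS_IP=192.168.1.100
    # copy the authentication token from the NAS .ui_info file (field 2)
    TOKEN=$(ssh admin@${NAS_IP} cat /var/lib/crashplan/.ui_info | cut -f2 -d',')
    # point the local client at the NAS engine using that token
    echo "4243,${TOKEN},${NAS_IP}" | sudo tee /var/lib/crashplan/.ui_info

The client should then connect to the engine on the NAS rather than to a locally installed engine.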

 

Migration from CrashPlan For Home to CrashPlan For Small Business (CrashPlan PRO)

  • Leave the regular green branded CrashPlan 4.8.3 Synology package installed.
  • Go through the online migration using the link in the email notification you received from Code 42 on 22/08/2017. This seems to trigger the CrashPlan client to begin an update to 4.9, which will fail. It will also migrate your account onto a CrashPlan PRO server. The web page is likely to stall on the Migrating step, but no matter. The process is meant to take you to the store, but it seems to be quite flaky. If you see the store page with a $0.00 amount in the basket, it has correctly referred you for the introductory offer. Apparently the $9.99 price thereafter shown on that screen is a mistake; the correct price of $2.50 is, I think, shown on a later screen in the process. Enter your credit card details and check out if you can. If not, continue.
  • Log into the CrashPlan PRO Admin Console as per these instructions, and download the CrashPlan PRO 4.9 client for Linux, and the 4.9 client for your remote console computer. Ignore the red message in the bottom left of the Admin Console about registering, and do not sign up for the free trial. Preferably use Firefox for the Linux version download – most of the other web browsers will try to unpack the .tgz archive, which you do not want to happen.
  • Configure the CrashPlan PRO 4.9 client on your computer to connect to your Syno as per the usual instructions on this blog post.
  • Put the downloaded Linux CrashPlan PRO 4.9 client .tgz file in the ‘public’ shared folder on your NAS. The package will no longer download this automatically as it did in previous versions.
  • From the Community section of DSM Package Center, install the CrashPlan PRO 4.9 package concurrently with your existing CrashPlan 4.8.3 Syno package.
  • This will stop the CrashPlan package and automatically import its configuration. Notice that it will also back up your old CrashPlan .identity file and leave it in the ‘public’ shared folder, just in case something goes wrong.
  • Start the CrashPlan PRO Synology package, and connect your CrashPlan PRO console from your computer.
  • You should see your protected folders as usual. At first mine reported something like “insufficient device licences”, but the next time I started up it changed to “subscription expired”.
  • Uninstall the CrashPlan 4.8.3 Synology package; it is no longer required.
  • At this point, if the store referral didn’t work in the second step, you need to sign in to the Admin Console. While signed in, navigate to this link which I was given by Code 42 support. If it works, you should see a store page with some blue text and a $0.00 basket value. If it didn’t work you will get bounced to the Consumer Next Steps webpage: “Important Changes to CrashPlan for Home” – the one with the video of the CEO explaining the situation. I had to do this a few times before it worked. Once the store referral link worked and I had confirmed my payment details, my CrashPlan PRO client immediately started working. Enjoy!
 

Notes

  • The package uses the intact CrashPlan installer directly from Code 42 Software, following acceptance of its EULA, thereby complying with the requirement that it not be redistributed.
  • The engine daemon script checks the amount of system RAM and scales the Java heap size appropriately (up to a default maximum of 1GB on systems with more than 1GB of RAM). This can be overridden in a persistent way by editing /var/packages/CrashPlan/target/syno_package.vars, which is worth doing if you are backing up large backup sets (a worked example appears after this list). If you are considering buying a NAS purely to use CrashPlan and intend to back up more than a few hundred GB, then I strongly advise buying one of the models with upgradeable RAM, because memory is very limited on the cheaper models. Many years ago I found that a 512MB heap was insufficient to back up more than 2TB of files on a Windows server: it kept restarting the backup engine every few minutes until I increased the heap to 1024MB. Many users of the package have found that they have to increase the heap size or CrashPlan will halt its activity. This can be mitigated by dividing your backup into several smaller backup sets which are scheduled to be protected at different times. Note that from package version 0041, using the dedicated JRE on a 64bit Intel NAS will allow a heap size greater than 4GB since the JRE is 64bit (requires DSM 6.0 in most cases).
  • If you need to manage CrashPlan from a remote location, I suggest you do so using SSH tunnelling as per this support document.
  • The package supports upgrading to future versions while preserving the machine identity, logs, login details, and cache. Upgrades can now take place without requiring a login from the client afterwards.
  • If you remove the package completely and re-install it later, you can re-attach to previous backups. When you log in to the Desktop Client with your existing account after a re-install, you can select “adopt computer” to merge the records, and preserve your existing backups. I haven’t tested whether this also re-attaches links to friends’ CrashPlan computers and backup sets, though the latter does seem possible in the Friends section of the GUI. It’s probably a good idea to test that this survives a package reinstall before you start relying on it. Sometimes, particularly with CrashPlan PRO I think, the adopt option is not offered. In this case you can log into CrashPlan Central and retrieve your computer’s GUID. On the CrashPlan client, double-click on the logo in the top right and you’ll enter a command line mode. You can use the GUID command to change the system’s GUID to the one you just retrieved from your account.
  • The log which is displayed in the package’s Log tab is actually the activity history. If you are trying to troubleshoot an issue you will need to use an SSH session to inspect these log files:
    /var/packages/CrashPlan/target/log/engine_output.log
    /var/packages/CrashPlan/target/log/engine_error.log
    /var/packages/CrashPlan/target/log/app.log
  • When CrashPlan downloads and attempts to run an automatic update, the script will most likely fail and stop the package. This is typically caused by syntax differences with the Synology versions of certain Linux shell commands (like rm, mv, or ps). The startup script will attempt to apply the published upgrade the next time the package is started.
  • Although CrashPlan’s activity can be scheduled within the application, in order to save RAM some users may wish to restrict running the CrashPlan engine to specific times of day using the Task Scheduler in DSM Control Panel (example scheduled-task scripts follow this list):
    Schedule service start
    Note that regardless of real-time backup, by default CrashPlan will scan the whole backup selection for changes at 3:00am. Include this time within your Task Scheduler time window, or else CrashPlan will not capture file changes which occurred while it was inactive:
    Schedule Service Start

  • If you decide to sign up for one of CrashPlan’s paid backup services as a result of my work on this, please consider donating using the PayPal button on the right of this page.
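
As promised in the heap-size note above, here is a minimal example of the persistent override. It assumes you want a 2GB heap (only sensible with enough physical RAM, and above 4GB only with the 64bit dedicated JRE), that you are running the green CrashPlan package (substitute CrashPlanPRO or CrashPlanPROe in the paths otherwise), and that you are logged in via SSH as root:

    # append the persistent heap override read by start-stop-status.sh
    echo "USR_MAX_HEAP=2048M" >> /var/packages/CrashPlan/target/syno_package.vars
    # restart the engine so the new -Xmx value is written into run.conf
    /var/packages/CrashPlan/scripts/start-stop-status stop
    /var/packages/CrashPlan/scripts/start-stop-status start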
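
Similarly, for the Task Scheduler approach mentioned above, a pair of user-defined script tasks is all that is needed. They simply call the package’s own start-stop-status script (listed further down this page), so the same note about substituting the package name applies:

    # task scheduled before the 3:00am file scan, e.g. at 02:45
    /var/packages/CrashPlan/scripts/start-stop-status start

    # task scheduled at the end of your backup window, e.g. at 07:00
    /var/packages/CrashPlan/scripts/start-stop-status stop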
 

Package scripts

For reference, here are the package scripts so you can see what the package is going to do. You can get more information about how packages work by reading the Synology 3rd Party Developer Guide.

installer.sh

#!/bin/sh

#--------CRASHPLAN installer script
#--------package maintained at pcloadletter.co.uk


DOWNLOAD_PATH="http://download2.code42.com/installs/linux/install/${SYNOPKG_PKGNAME}"
CP_EXTRACTED_FOLDER="crashplan-install"
OLD_JNA_NEEDED="false"
[ "${SYNOPKG_PKGNAME}" == "CrashPlan" ] && DOWNLOAD_FILE="CrashPlan_4.8.3_Linux.tgz"
[ "${SYNOPKG_PKGNAME}" == "CrashPlanPRO" ] && DOWNLOAD_FILE="CrashPlanPRO_4.*_Linux.tgz"
if [ "${SYNOPKG_PKGNAME}" == "CrashPlanPROe" ]; then
  CP_EXTRACTED_FOLDER="${SYNOPKG_PKGNAME}-install"
  OLD_JNA_NEEDED="true"
  [ "${WIZARD_VER_483}" == "true" ] && { CPPROE_VER="4.8.3"; CP_EXTRACTED_FOLDER="crashplan-install"; OLD_JNA_NEEDED="false"; }
  [ "${WIZARD_VER_480}" == "true" ] && { CPPROE_VER="4.8.0"; CP_EXTRACTED_FOLDER="crashplan-install"; OLD_JNA_NEEDED="false"; }
  [ "${WIZARD_VER_470}" == "true" ] && { CPPROE_VER="4.7.0"; CP_EXTRACTED_FOLDER="crashplan-install"; OLD_JNA_NEEDED="false"; }
  [ "${WIZARD_VER_460}" == "true" ] && { CPPROE_VER="4.6.0"; CP_EXTRACTED_FOLDER="crashplan-install"; OLD_JNA_NEEDED="false"; }
  [ "${WIZARD_VER_452}" == "true" ] && { CPPROE_VER="4.5.2"; CP_EXTRACTED_FOLDER="crashplan-install"; OLD_JNA_NEEDED="false"; }
  [ "${WIZARD_VER_450}" == "true" ] && { CPPROE_VER="4.5.0"; CP_EXTRACTED_FOLDER="crashplan-install"; OLD_JNA_NEEDED="false"; }
  [ "${WIZARD_VER_441}" == "true" ] && { CPPROE_VER="4.4.1"; CP_EXTRACTED_FOLDER="crashplan-install"; OLD_JNA_NEEDED="false"; }
  [ "${WIZARD_VER_430}" == "true" ] && CPPROE_VER="4.3.0"
  [ "${WIZARD_VER_420}" == "true" ] && CPPROE_VER="4.2.0"
  [ "${WIZARD_VER_370}" == "true" ] && CPPROE_VER="3.7.0"
  [ "${WIZARD_VER_364}" == "true" ] && CPPROE_VER="3.6.4"
  [ "${WIZARD_VER_363}" == "true" ] && CPPROE_VER="3.6.3"
  [ "${WIZARD_VER_3614}" == "true" ] && CPPROE_VER="3.6.1.4"
  [ "${WIZARD_VER_353}" == "true" ] && CPPROE_VER="3.5.3"
  [ "${WIZARD_VER_341}" == "true" ] && CPPROE_VER="3.4.1"
  [ "${WIZARD_VER_33}" == "true" ] && CPPROE_VER="3.3"
  DOWNLOAD_FILE="CrashPlanPROe_${CPPROE_VER}_Linux.tgz"
fi
DOWNLOAD_URL="${DOWNLOAD_PATH}/${DOWNLOAD_FILE}"
CPI_FILE="${SYNOPKG_PKGNAME}_*.cpi"
OPTDIR="${SYNOPKG_PKGDEST}"
VARS_FILE="${OPTDIR}/install.vars"
SYNO_CPU_ARCH="`uname -m`"
[ "${SYNO_CPU_ARCH}" == "x86_64" ] && SYNO_CPU_ARCH="i686"
[ "${SYNO_CPU_ARCH}" == "armv5tel" ] && SYNO_CPU_ARCH="armel"
[ "${SYNOPKG_DSM_ARCH}" == "armada375" ] && SYNO_CPU_ARCH="armv7l"
[ "${SYNOPKG_DSM_ARCH}" == "armada38x" ] && SYNO_CPU_ARCH="armhf"
[ "${SYNOPKG_DSM_ARCH}" == "comcerto2k" ] && SYNO_CPU_ARCH="armhf"
[ "${SYNOPKG_DSM_ARCH}" == "alpine" ] && SYNO_CPU_ARCH="armhf"
[ "${SYNOPKG_DSM_ARCH}" == "alpine4k" ] && SYNO_CPU_ARCH="armhf"
[ "${SYNOPKG_DSM_ARCH}" == "monaco" ] && SYNO_CPU_ARCH="armhf"
[ "${SYNOPKG_DSM_ARCH}" == "rtd1296" ] && SYNO_CPU_ARCH="armhf"
NATIVE_BINS_URL="http://packages.pcloadletter.co.uk/downloads/crashplan-native-${SYNO_CPU_ARCH}.tar.xz"   
NATIVE_BINS_FILE="`echo ${NATIVE_BINS_URL} | sed -r "s%^.*/(.*)%\1%"`"
OLD_JNA_URL="http://packages.pcloadletter.co.uk/downloads/crashplan-native-old-${SYNO_CPU_ARCH}.tar.xz"   
OLD_JNA_FILE="`echo ${OLD_JNA_URL} | sed -r "s%^.*/(.*)%\1%"`"
INSTALL_FILES="${DOWNLOAD_URL} ${NATIVE_BINS_URL}"
[ "${OLD_JNA_NEEDED}" == "true" ] && INSTALL_FILES="${INSTALL_FILES} ${OLD_JNA_URL}"
TEMP_FOLDER="`find / -maxdepth 2 -path '/volume?/@tmp' | head -n 1`"
#the Manifest folder is where friends' backup data is stored
#we set it outside the app folder so it persists after a package uninstall
MANIFEST_FOLDER="/`echo $TEMP_FOLDER | cut -f2 -d'/'`/crashplan"
LOG_FILE="${SYNOPKG_PKGDEST}/log/history.log.0"
UPGRADE_FILES="syno_package.vars conf/my.service.xml conf/service.login conf/service.model"
UPGRADE_FOLDERS="log cache"
PUBLIC_FOLDER="`synoshare --get public | sed -r "/Path/!d;s/^.*\[(.*)\].*$/\1/"`"
#dedicated JRE section
if [ "${WIZARD_JRE_CP}" == "true" ]; then
  DOWNLOAD_URL="http://tinyurl.com/javaembed"
  EXTRACTED_FOLDER="ejdk1.8.0_151"
  #detect systems capable of running 64bit JRE which can address more than 4GB of RAM
  [ "${SYNOPKG_DSM_ARCH}" == "x64" ] && SYNO_CPU_ARCH="x64"
  [ "`uname -m`" == "x86_64" ] && [ ${SYNOPKG_DSM_VERSION_MAJOR} -ge 6 ] && SYNO_CPU_ARCH="x64"
  if [ "${SYNO_CPU_ARCH}" == "armel" ]; then
    JAVA_BINARY="ejdk-8u151-linux-arm-sflt.tar.gz"
    JAVA_BUILD="ARMv5/ARMv6/ARMv7 Linux - SoftFP ABI, Little Endian 2"
  elif [ "${SYNO_CPU_ARCH}" == "armv7l" ]; then
    JAVA_BINARY="ejdk-8u151-linux-arm-sflt.tar.gz"
    JAVA_BUILD="ARMv5/ARMv6/ARMv7 Linux - SoftFP ABI, Little Endian 2"
  elif [ "${SYNO_CPU_ARCH}" == "armhf" ]; then
    JAVA_BINARY="ejdk-8u151-linux-armv6-vfp-hflt.tar.gz"
    JAVA_BUILD="ARMv6/ARMv7 Linux - VFP, HardFP ABI, Little Endian 1"
  elif [ "${SYNO_CPU_ARCH}" == "ppc" ]; then
    #Oracle have discontinued Java 8 for PowerPC after update 6
    JAVA_BINARY="ejdk-8u6-fcs-b23-linux-ppc-e500v2-12_jun_2014.tar.gz"
    JAVA_BUILD="Power Architecture Linux - Headless - e500v2 with double-precision SPE Floating Point Unit"
    EXTRACTED_FOLDER="ejdk1.8.0_06"
    DOWNLOAD_URL="http://tinyurl.com/java8ppc"
  elif [ "${SYNO_CPU_ARCH}" == "i686" ]; then
    JAVA_BINARY="ejdk-8u151-linux-i586.tar.gz"
    JAVA_BUILD="x86 Linux Small Footprint - Headless"
  elif [ "${SYNO_CPU_ARCH}" == "x64" ]; then
    JAVA_BINARY="jre-8u151-linux-x64.tar.gz"
    JAVA_BUILD="Linux x64"
    EXTRACTED_FOLDER="jre1.8.0_151"
    DOWNLOAD_URL="http://tinyurl.com/java8x64"
  fi
fi
JAVA_BINARY=`echo ${JAVA_BINARY} | cut -f1 -d'.'`
source /etc/profile


pre_checks ()
{
  #These checks are called from preinst and from preupgrade functions to prevent failures resulting in a partially upgraded package
  if [ "${WIZARD_JRE_CP}" == "true" ]; then
    synoshare -get public > /dev/null || {
      echo "A shared folder called 'public' could not be found - note this name is case-sensitive. " >> $SYNOPKG_TEMP_LOGFILE
      echo "Please create this using the Shared Folder DSM Control Panel and try again." >> $SYNOPKG_TEMP_LOGFILE
      exit 1
    }

    JAVA_BINARY_FOUND=
    [ -f ${PUBLIC_FOLDER}/${JAVA_BINARY}.tar.gz ] && JAVA_BINARY_FOUND=true
    [ -f ${PUBLIC_FOLDER}/${JAVA_BINARY}.tar ] && JAVA_BINARY_FOUND=true
    [ -f ${PUBLIC_FOLDER}/${JAVA_BINARY}.tar.tar ] && JAVA_BINARY_FOUND=true
    [ -f ${PUBLIC_FOLDER}/${JAVA_BINARY}.gz ] && JAVA_BINARY_FOUND=true
     
    if [ -z ${JAVA_BINARY_FOUND} ]; then
      echo "Java binary bundle not found. " >> $SYNOPKG_TEMP_LOGFILE
      echo "I was expecting the file ${PUBLIC_FOLDER}/${JAVA_BINARY}.tar.gz. " >> $SYNOPKG_TEMP_LOGFILE
      echo "Please agree to the Oracle licence at ${DOWNLOAD_URL}, then download the '${JAVA_BUILD}' package" >> $SYNOPKG_TEMP_LOGFILE
      echo "and place it in the 'public' shared folder on your NAS. This download cannot be automated even if " >> $SYNOPKG_TEMP_LOGFILE
      echo "displaying a package EULA could potentially cover the legal aspect, because files hosted on Oracle's " >> $SYNOPKG_TEMP_LOGFILE
      echo "server are protected by a session cookie requiring a JavaScript enabled browser." >> $SYNOPKG_TEMP_LOGFILE
      exit 1
    fi
  else
    if [ -z ${JAVA_HOME} ]; then
      echo "Java is not installed or not properly configured. JAVA_HOME is not defined. " >> $SYNOPKG_TEMP_LOGFILE
      echo "Download and install the Java Synology package from http://wp.me/pVshC-z5" >> $SYNOPKG_TEMP_LOGFILE
      exit 1
    fi

    if [ ! -f ${JAVA_HOME}/bin/java ]; then
      echo "Java is not installed or not properly configured. The Java binary could not be located. " >> $SYNOPKG_TEMP_LOGFILE
      echo "Download and install the Java Synology package from http://wp.me/pVshC-z5" >> $SYNOPKG_TEMP_LOGFILE
      exit 1
    fi

    if [ "${WIZARD_JRE_SYS}" == "true" ]; then
      JAVA_VER=`java -version 2>&1 | sed -r "/^.* version/!d;s/^.* version \"[0-9]\.([0-9]).*$/\1/"`
      if [ ${JAVA_VER} -lt 8 ]; then
        echo "This version of CrashPlan requires Java 8 or newer. Please update your Java package. "
        exit 1
      fi
    fi
  fi
}


preinst ()
{
  pre_checks
  cd ${TEMP_FOLDER}
  for WGET_URL in ${INSTALL_FILES}
  do
    WGET_FILENAME="`echo ${WGET_URL} | sed -r "s%^.*/(.*)%\1%"`"
    [ -f ${TEMP_FOLDER}/${WGET_FILENAME} ] && rm ${TEMP_FOLDER}/${WGET_FILENAME}
    wget ${WGET_URL}
    if [[ $? != 0 ]]; then
      if [ -d ${PUBLIC_FOLDER} ] && [ -f ${PUBLIC_FOLDER}/${WGET_FILENAME} ]; then
        cp ${PUBLIC_FOLDER}/${WGET_FILENAME} ${TEMP_FOLDER}
      else     
        echo "There was a problem downloading ${WGET_FILENAME} from the official download link, " >> $SYNOPKG_TEMP_LOGFILE
        echo "which was \"${WGET_URL}\" " >> $SYNOPKG_TEMP_LOGFILE
        echo "Alternatively, you may download this file manually and place it in the 'public' shared folder. " >> $SYNOPKG_TEMP_LOGFILE
        exit 1
      fi
    fi
  done
 
  exit 0
}


postinst ()
{
  if [ "${WIZARD_JRE_CP}" == "true" ]; then
    #extract Java (Web browsers love to interfere with .tar.gz files)
    cd ${PUBLIC_FOLDER}
    if [ -f ${JAVA_BINARY}.tar.gz ]; then
      #Firefox seems to be the only browser that leaves it alone
      tar xzf ${JAVA_BINARY}.tar.gz
    elif [ -f ${JAVA_BINARY}.gz ]; then
      #Chrome
      tar xzf ${JAVA_BINARY}.gz
    elif [ -f ${JAVA_BINARY}.tar ]; then
      #Safari
      tar xf ${JAVA_BINARY}.tar
    elif [ -f ${JAVA_BINARY}.tar.tar ]; then
      #Internet Explorer
      tar xzf ${JAVA_BINARY}.tar.tar
    fi
    mv ${EXTRACTED_FOLDER} ${SYNOPKG_PKGDEST}/jre-syno
    JRE_PATH="`find ${OPTDIR}/jre-syno/ -name jre`"
    [ -z ${JRE_PATH} ] && JRE_PATH=${OPTDIR}/jre-syno
    #change owner of folder tree
    chown -R root:root ${SYNOPKG_PKGDEST}
  fi
   
  #extract CPU-specific additional binaries
  mkdir ${SYNOPKG_PKGDEST}/bin
  cd ${SYNOPKG_PKGDEST}/bin
  tar xJf ${TEMP_FOLDER}/${NATIVE_BINS_FILE} && rm ${TEMP_FOLDER}/${NATIVE_BINS_FILE}
  [ "${OLD_JNA_NEEDED}" == "true" ] && tar xJf ${TEMP_FOLDER}/${OLD_JNA_FILE} && rm ${TEMP_FOLDER}/${OLD_JNA_FILE}

  #extract main archive
  cd ${TEMP_FOLDER}
  tar xzf ${TEMP_FOLDER}/${DOWNLOAD_FILE} && rm ${TEMP_FOLDER}/${DOWNLOAD_FILE} 
  
  #extract cpio archive
  cd ${SYNOPKG_PKGDEST}
  cat "${TEMP_FOLDER}/${CP_EXTRACTED_FOLDER}"/${CPI_FILE} | gzip -d -c - | ${SYNOPKG_PKGDEST}/bin/cpio -i --no-preserve-owner
  
  echo "#uncomment to expand Java max heap size beyond prescribed value (will survive upgrades)" > ${SYNOPKG_PKGDEST}/syno_package.vars
  echo "#you probably only want more than the recommended 1024M if you're backing up extremely large volumes of files" >> ${SYNOPKG_PKGDEST}/syno_package.vars
  echo "#USR_MAX_HEAP=1024M" >> ${SYNOPKG_PKGDEST}/syno_package.vars
  echo >> ${SYNOPKG_PKGDEST}/syno_package.vars

  cp ${TEMP_FOLDER}/${CP_EXTRACTED_FOLDER}/scripts/CrashPlanEngine ${OPTDIR}/bin
  cp ${TEMP_FOLDER}/${CP_EXTRACTED_FOLDER}/scripts/run.conf ${OPTDIR}/bin
  mkdir -p ${MANIFEST_FOLDER}/backupArchives    
  
  #save install variables which Crashplan expects its own installer script to create
  echo TARGETDIR=${SYNOPKG_PKGDEST} > ${VARS_FILE}
  echo BINSDIR=/bin >> ${VARS_FILE}
  echo MANIFESTDIR=${MANIFEST_FOLDER}/backupArchives >> ${VARS_FILE}
  #leave these ones out which should help upgrades from Code42 to work (based on examining an upgrade script)
  #echo INITDIR=/etc/init.d >> ${VARS_FILE}
  #echo RUNLVLDIR=/usr/syno/etc/rc.d >> ${VARS_FILE}
  echo INSTALLDATE=`date +%Y%m%d` >> ${VARS_FILE}
  [ "${WIZARD_JRE_CP}" == "true" ] && echo JAVACOMMON=${JRE_PATH}/bin/java >> ${VARS_FILE}
  [ "${WIZARD_JRE_SYS}" == "true" ] && echo JAVACOMMON=\${JAVA_HOME}/bin/java >> ${VARS_FILE}
  cat ${TEMP_FOLDER}/${CP_EXTRACTED_FOLDER}/install.defaults >> ${VARS_FILE}
  
  #remove temp files
  rm -r ${TEMP_FOLDER}/${CP_EXTRACTED_FOLDER}
  
  #add firewall config
  /usr/syno/bin/servicetool --install-configure-file --package /var/packages/${SYNOPKG_PKGNAME}/scripts/${SYNOPKG_PKGNAME}.sc > /dev/null
  
  #amend CrashPlanPROe client version
  [ "${SYNOPKG_PKGNAME}" == "CrashPlanPROe" ] && sed -i -r "s/^version=\".*(-.*$)/version=\"${CPPROE_VER}\1/" /var/packages/${SYNOPKG_PKGNAME}/INFO

  #are we transitioning an existing CrashPlan account to CrashPlan For Small Business?
  if [ "${SYNOPKG_PKGNAME}" == "CrashPlanPRO" ]; then
    if [ -e /var/packages/CrashPlan/scripts/start-stop-status ]; then
      /var/packages/CrashPlan/scripts/start-stop-status stop
      cp /var/lib/crashplan/.identity ${PUBLIC_FOLDER}/crashplan-identity.bak
      cp -R /var/packages/CrashPlan/target/conf/ ${OPTDIR}/
    fi  
  fi

  exit 0
}


preuninst ()
{
  `dirname $0`/start-stop-status stop

  exit 0
}


postuninst ()
{
  if [ -f ${SYNOPKG_PKGDEST}/syno_package.vars ]; then
    source ${SYNOPKG_PKGDEST}/syno_package.vars
  fi
  [ -e ${OPTDIR}/lib/libffi.so.5 ] && rm ${OPTDIR}/lib/libffi.so.5

  #delete symlink if it no longer resolves - PowerPC only
  if [ ! -e /lib/libffi.so.5 ]; then
    [ -L /lib/libffi.so.5 ] && rm /lib/libffi.so.5
  fi

  #remove firewall config
  if [ "${SYNOPKG_PKG_STATUS}" == "UNINSTALL" ]; then
    /usr/syno/bin/servicetool --remove-configure-file --package ${SYNOPKG_PKGNAME}.sc > /dev/null
  fi

 exit 0
}


preupgrade ()
{
  `dirname $0`/start-stop-status stop
  pre_checks
  #if identity exists back up config
  if [ -f /var/lib/crashplan/.identity ]; then
    mkdir -p ${SYNOPKG_PKGDEST}/../${SYNOPKG_PKGNAME}_data_mig/conf
    for FILE_TO_MIGRATE in ${UPGRADE_FILES}; do
      if [ -f ${OPTDIR}/${FILE_TO_MIGRATE} ]; then
        cp ${OPTDIR}/${FILE_TO_MIGRATE} ${SYNOPKG_PKGDEST}/../${SYNOPKG_PKGNAME}_data_mig/${FILE_TO_MIGRATE}
      fi
    done
    for FOLDER_TO_MIGRATE in ${UPGRADE_FOLDERS}; do
      if [ -d ${OPTDIR}/${FOLDER_TO_MIGRATE} ]; then
        mv ${OPTDIR}/${FOLDER_TO_MIGRATE} ${SYNOPKG_PKGDEST}/../${SYNOPKG_PKGNAME}_data_mig
      fi
    done
  fi

  exit 0
}


postupgrade ()
{
  #use the migrated identity and config data from the previous version
  if [ -f ${SYNOPKG_PKGDEST}/../${SYNOPKG_PKGNAME}_data_mig/conf/my.service.xml ]; then
    for FILE_TO_MIGRATE in ${UPGRADE_FILES}; do
      if [ -f ${SYNOPKG_PKGDEST}/../${SYNOPKG_PKGNAME}_data_mig/${FILE_TO_MIGRATE} ]; then
        mv ${SYNOPKG_PKGDEST}/../${SYNOPKG_PKGNAME}_data_mig/${FILE_TO_MIGRATE} ${OPTDIR}/${FILE_TO_MIGRATE}
      fi
    done
    for FOLDER_TO_MIGRATE in ${UPGRADE_FOLDERS}; do
    if [ -d ${SYNOPKG_PKGDEST}/../${SYNOPKG_PKGNAME}_data_mig/${FOLDER_TO_MIGRATE} ]; then
      mv ${SYNOPKG_PKGDEST}/../${SYNOPKG_PKGNAME}_data_mig/${FOLDER_TO_MIGRATE} ${OPTDIR}
    fi
    done
    rmdir ${SYNOPKG_PKGDEST}/../${SYNOPKG_PKGNAME}_data_mig/conf
    rmdir ${SYNOPKG_PKGDEST}/../${SYNOPKG_PKGNAME}_data_mig
    
    #make CrashPlan log entry
    TIMESTAMP="`date "+%D %I:%M%p"`"
    echo "I ${TIMESTAMP} Synology Package Center updated ${SYNOPKG_PKGNAME} to version ${SYNOPKG_PKGVER}" >> ${LOG_FILE}
  fi
  
  exit 0
}
 

start-stop-status.sh

#!/bin/sh

#--------CRASHPLAN start-stop-status script
#--------package maintained at pcloadletter.co.uk


TEMP_FOLDER="`find / -maxdepth 2 -path '/volume?/@tmp' | head -n 1`"
MANIFEST_FOLDER="/`echo $TEMP_FOLDER | cut -f2 -d'/'`/crashplan" 
ENGINE_CFG="run.conf"
PKG_FOLDER="`dirname $0 | cut -f1-4 -d'/'`"
DNAME="`dirname $0 | cut -f4 -d'/'`"
OPTDIR="${PKG_FOLDER}/target"
PID_FILE="${OPTDIR}/${DNAME}.pid"
DLOG="${OPTDIR}/log/history.log.0"
CFG_PARAM="SRV_JAVA_OPTS"
JAVA_MIN_HEAP=`grep "^${CFG_PARAM}=" "${OPTDIR}/bin/${ENGINE_CFG}" | sed -r "s/^.*-Xms([0-9]+)[Mm] .*$/\1/"` 
SYNO_CPU_ARCH="`uname -m`"
TIMESTAMP="`date "+%D %I:%M%p"`"
FULL_CP="${OPTDIR}/lib/com.backup42.desktop.jar:${OPTDIR}/lang"
source ${OPTDIR}/install.vars
source /etc/profile
source /root/.profile


start_daemon ()
{
  #check persistent variables from syno_package.vars
  USR_MAX_HEAP=0
  if [ -f ${OPTDIR}/syno_package.vars ]; then
    source ${OPTDIR}/syno_package.vars
  fi
  USR_MAX_HEAP=`echo $USR_MAX_HEAP | sed -e "s/[mM]//"`

  #do we need to restore the identity file - has a DSM upgrade scrubbed /var/lib/crashplan?
  if [ ! -e /var/lib/crashplan ]; then
    mkdir /var/lib/crashplan
    [ -e ${OPTDIR}/conf/var-backup/.identity ] && cp ${OPTDIR}/conf/var-backup/.identity /var/lib/crashplan/
  fi

  #fix up some of the binary paths and fix some command syntax for busybox 
  #moved this to start-stop-status.sh from installer.sh because Code42 push updates and these
  #new scripts will need this treatment too
  find ${OPTDIR}/ -name "*.sh" | while IFS="" read -r FILE_TO_EDIT; do
    if [ -e ${FILE_TO_EDIT} ]; then
      #this list of substitutions will probably need expanding as new CrashPlan updates are released
      sed -i "s%^#!/bin/bash%#!$/bin/sh%" "${FILE_TO_EDIT}"
      sed -i -r "s%(^\s*)(/bin/cpio |cpio ) %\1/${OPTDIR}/bin/cpio %" "${FILE_TO_EDIT}"
      sed -i -r "s%(^\s*)(/bin/ps|ps) [^w][^\|]*\|%\1/bin/ps w \|%" "${FILE_TO_EDIT}"
      sed -i -r "s%\`ps [^w][^\|]*\|%\`ps w \|%" "${FILE_TO_EDIT}"
      sed -i -r "s%^ps [^w][^\|]*\|%ps w \|%" "${FILE_TO_EDIT}"
      sed -i "s/rm -fv/rm -f/" "${FILE_TO_EDIT}"
      sed -i "s/mv -fv/mv -f/" "${FILE_TO_EDIT}"
    fi
  done

  #use this daemon init script rather than the unreliable Code42 stock one which greps the ps output
  sed -i "s%^ENGINE_SCRIPT=.*$%ENGINE_SCRIPT=$0%" ${OPTDIR}/bin/restartLinux.sh

  #any downloaded upgrade script will usually have failed despite the above changes
  #so ignore the script and explicitly extract the new java code using the chrisnelson.ca method 
  #thanks to Jeff Bingham for tweaks 
  UPGRADE_JAR=`find ${OPTDIR}/upgrade -maxdepth 1 -name "*.jar" | tail -1`
  if [ -n "${UPGRADE_JAR}" ]; then
    rm ${OPTDIR}/*.pid > /dev/null
 
    #make CrashPlan log entry
    echo "I ${TIMESTAMP} Synology extracting upgrade from ${UPGRADE_JAR}" >> ${DLOG}

    UPGRADE_VER=`echo ${SCRIPT_HOME} | sed -r "s/^.*\/([0-9_]+)\.[0-9]+/\1/"`
    #DSM 6.0 no longer includes unzip, use 7z instead
    unzip -o ${OPTDIR}/upgrade/${UPGRADE_VER}.jar "*.jar" -d ${OPTDIR}/lib/ || 7z e -y ${OPTDIR}/upgrade/${UPGRADE_VER}.jar "*.jar" -o${OPTDIR}/lib/ > /dev/null
    unzip -o ${OPTDIR}/upgrade/${UPGRADE_VER}.jar "lang/*" -d ${OPTDIR} || 7z e -y ${OPTDIR}/upgrade/${UPGRADE_VER}.jar "lang/*" -o${OPTDIR} > /dev/null
    mv ${UPGRADE_JAR} ${TEMP_FOLDER}/ > /dev/null
    exec $0
  fi

  #updates may also overwrite our native binaries
  [ -e ${OPTDIR}/bin/libffi.so.5 ] && cp -f ${OPTDIR}/bin/libffi.so.5 ${OPTDIR}/lib/
  [ -e ${OPTDIR}/bin/libjtux.so ] && cp -f ${OPTDIR}/bin/libjtux.so ${OPTDIR}/
  [ -e ${OPTDIR}/bin/jna-3.2.5.jar ] && cp -f ${OPTDIR}/bin/jna-3.2.5.jar ${OPTDIR}/lib/
  if [ -e ${OPTDIR}/bin/jna.jar ] && [ -e ${OPTDIR}/lib/jna.jar ]; then
    cp -f ${OPTDIR}/bin/jna.jar ${OPTDIR}/lib/
  fi

  #create or repair libffi.so.5 symlink if a DSM upgrade has removed it - PowerPC only
  if [ -e ${OPTDIR}/lib/libffi.so.5 ]; then
    if [ ! -e /lib/libffi.so.5 ]; then
      #if it doesn't exist, but is still a link then it's a broken link and should be deleted first
      [ -L /lib/libffi.so.5 ] && rm /lib/libffi.so.5
      ln -s ${OPTDIR}/lib/libffi.so.5 /lib/libffi.so.5
    fi
  fi

  #set appropriate Java max heap size
  RAM=$((`free | grep Mem: | sed -e "s/^ *Mem: *\([0-9]*\).*$/\1/"`/1024))
  if [ $RAM -le 128 ]; then
    JAVA_MAX_HEAP=80
  elif [ $RAM -le 256 ]; then
    JAVA_MAX_HEAP=192
  elif [ $RAM -le 512 ]; then
    JAVA_MAX_HEAP=384
  elif [ $RAM -le 1024 ]; then
    JAVA_MAX_HEAP=512
  elif [ $RAM -gt 1024 ]; then
    JAVA_MAX_HEAP=1024
  fi
  if [ $USR_MAX_HEAP -gt $JAVA_MAX_HEAP ]; then
    JAVA_MAX_HEAP=${USR_MAX_HEAP}
  fi   
  if [ $JAVA_MAX_HEAP -lt $JAVA_MIN_HEAP ]; then
    #can't have a max heap lower than min heap (ARM low RAM systems)
    JAVA_MAX_HEAP=$JAVA_MIN_HEAP
  fi
  sed -i -r "s/(^${CFG_PARAM}=.*) -Xmx[0-9]+[mM] (.*$)/\1 -Xmx${JAVA_MAX_HEAP}m \2/" "${OPTDIR}/bin/${ENGINE_CFG}"
  
  #disable the use of the x86-optimized external Fast MD5 library if running on ARM and PPC CPUs
  #seems to be the default behaviour now but that may change again
  [ "${SYNO_CPU_ARCH}" == "x86_64" ] && SYNO_CPU_ARCH="i686"
  if [ "${SYNO_CPU_ARCH}" != "i686" ]; then
    grep "^${CFG_PARAM}=.*c42\.native\.md5\.enabled" "${OPTDIR}/bin/${ENGINE_CFG}" > /dev/null \
     || sed -i -r "s/(^${CFG_PARAM}=\".*)\"$/\1 -Dc42.native.md5.enabled=false\"/" "${OPTDIR}/bin/${ENGINE_CFG}"
  fi

  #move the Java temp directory from the default of /tmp
  grep "^${CFG_PARAM}=.*Djava\.io\.tmpdir" "${OPTDIR}/bin/${ENGINE_CFG}" > /dev/null \
   || sed -i -r "s%(^${CFG_PARAM}=\".*)\"$%\1 -Djava.io.tmpdir=${TEMP_FOLDER}\"%" "${OPTDIR}/bin/${ENGINE_CFG}"

  #now edit the XML config file, which only exists after first run
  if [ -f ${OPTDIR}/conf/my.service.xml ]; then

    #allow direct connections from CrashPlan Desktop client on remote systems
    #you must edit the value of serviceHost in conf/ui.properties on the client you connect with
    #users report that this value is sometimes reset so now it's set every service startup 
    sed -i "s/<serviceHost>127\.0\.0\.1<\/serviceHost>/<serviceHost>0\.0\.0\.0<\/serviceHost>/" "${OPTDIR}/conf/my.service.xml"
    #default changed in CrashPlan 4.3
    sed -i "s/<serviceHost>localhost<\/serviceHost>/<serviceHost>0\.0\.0\.0<\/serviceHost>/" "${OPTDIR}/conf/my.service.xml"
    #since CrashPlan 4.4 another config file to allow remote console connections
    sed -i "s/127\.0\.0\.1/0\.0\.0\.0/" /var/lib/crashplan/.ui_info
     
    #this change is made only once in case you want to customize the friends' backup location
    if [ "${MANIFEST_PATH_SET}" != "True" ]; then

      #keep friends' backup data outside the application folder to make accidental deletion less likely 
      sed -i "s%<manifestPath>.*</manifestPath>%<manifestPath>${MANIFEST_FOLDER}/backupArchives/</manifestPath>%" "${OPTDIR}/conf/my.service.xml"
      echo "MANIFEST_PATH_SET=True" >> ${OPTDIR}/syno_package.vars
    fi

    #since CrashPlan version 3.5.3 the value javaMemoryHeapMax also needs setting to match that used in bin/run.conf
    sed -i -r "s%(<javaMemoryHeapMax>)[0-9]+[mM](</javaMemoryHeapMax>)%\1${JAVA_MAX_HEAP}m\2%" "${OPTDIR}/conf/my.service.xml"

    #make sure CrashPlan is not binding to the IPv6 stack
    grep "\-Djava\.net\.preferIPv4Stack=true" "${OPTDIR}/bin/${ENGINE_CFG}" > /dev/null \
     || sed -i -r "s/(^${CFG_PARAM}=\".*)\"$/\1 -Djava.net.preferIPv4Stack=true\"/" "${OPTDIR}/bin/${ENGINE_CFG}"
   else
    echo "Check the package log to ensure the package has started successfully, then stop and restart the package to allow desktop client connections." > "${SYNOPKG_TEMP_LOGFILE}"
  fi

  #increase the system-wide maximum number of open files from Synology default of 24466
  [ `cat /proc/sys/fs/file-max` -lt 65536 ] && echo "65536" > /proc/sys/fs/file-max

  #raise the maximum open file count from the Synology default of 1024 - thanks Casper K. for figuring this out
  #http://support.code42.com/Administrator/3.6_And_4.0/Troubleshooting/Too_Many_Open_Files
  ulimit -n 65536

  #ensure that Code 42 have not amended install.vars to force the use of their own (Intel) JRE
  if [ -e ${OPTDIR}/jre-syno ]; then
    JRE_PATH="`find ${OPTDIR}/jre-syno/ -name jre`"
    [ -z ${JRE_PATH} ] && JRE_PATH=${OPTDIR}/jre-syno
    sed -i -r "s|^(JAVACOMMON=).*$|\1\${JRE_PATH}/bin/java|" ${OPTDIR}/install.vars
    
    #if missing, set timezone and locale for dedicated JRE   
    if [ -z ${TZ} ]; then
      SYNO_TZ=`cat /etc/synoinfo.conf | grep timezone | cut -f2 -d'"'`
      #fix for DST time in DSM 5.2 thanks to MinimServer Syno package author
      [ -e /usr/share/zoneinfo/Timezone/synotztable.json ] \
       && SYNO_TZ=`jq ".${SYNO_TZ} | .nameInTZDB" /usr/share/zoneinfo/Timezone/synotztable.json | sed -e "s/\"//g"` \
       || SYNO_TZ=`grep "^${SYNO_TZ}" /usr/share/zoneinfo/Timezone/tzname | sed -e "s/^.*= //"`
      export TZ=${SYNO_TZ}
    fi
    [ -z ${LANG} ] && export LANG=en_US.utf8
    export CLASSPATH=.:${OPTDIR}/jre-syno/lib

  else
    sed -i -r "s|^(JAVACOMMON=).*$|\1\${JAVA_HOME}/bin/java|" ${OPTDIR}/install.vars
  fi

  source ${OPTDIR}/bin/run.conf
  source ${OPTDIR}/install.vars
  cd ${OPTDIR}
  $JAVACOMMON $SRV_JAVA_OPTS -classpath $FULL_CP com.backup42.service.CPService > ${OPTDIR}/log/engine_output.log 2> ${OPTDIR}/log/engine_error.log &
  if [ $! -gt 0 ]; then
    echo $! > $PID_FILE
    renice 19 $! > /dev/null
    if [ -z "${SYNOPKG_PKGDEST}" ]; then
      #script was manually invoked, need this to show status change in Package Center      
      [ -e ${PKG_FOLDER}/enabled ] || touch ${PKG_FOLDER}/enabled
    fi
  else
    echo "${DNAME} failed to start, check ${OPTDIR}/log/engine_error.log" > "${SYNOPKG_TEMP_LOGFILE}"
    echo "${DNAME} failed to start, check ${OPTDIR}/log/engine_error.log" >&2
    exit 1
  fi
}

stop_daemon ()
{
  echo "I ${TIMESTAMP} Stopping ${DNAME}" >> ${DLOG}
  kill `cat ${PID_FILE}`
  wait_for_status 1 20 || kill -9 `cat ${PID_FILE}`
  rm -f ${PID_FILE}
  if [ -z ${SYNOPKG_PKGDEST} ]; then
    #script was manually invoked, need this to show status change in Package Center
    [ -e ${PKG_FOLDER}/enabled ] && rm ${PKG_FOLDER}/enabled
  fi
  #backup identity file in case DSM upgrade removes it
  [ -e ${OPTDIR}/conf/var-backup ] || mkdir ${OPTDIR}/conf/var-backup 
  cp /var/lib/crashplan/.identity ${OPTDIR}/conf/var-backup/
}

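#returns 0 if the PID recorded in the PID file is still running; otherwise removes the stale PID file and returns 1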
daemon_status ()
{
  if [ -f ${PID_FILE} ] && kill -0 `cat ${PID_FILE}` > /dev/null 2>&1; then
    return
  fi
  rm -f ${PID_FILE}
  return 1
}

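#poll daemon_status once per second until it returns the status requested in $1 (0 = running, 1 = stopped), or give up after $2 seconds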
wait_for_status ()
{
  counter=$2
  while [ ${counter} -gt 0 ]; do
    daemon_status
    [ $? -eq $1 ] && return
    let counter=counter-1
    sleep 1
  done
  return 1
}


case $1 in
  start)
    if daemon_status; then
      echo ${DNAME} is already running with PID `cat ${PID_FILE}`
      exit 0
    else
      echo Starting ${DNAME} ...
      start_daemon
      exit $?
    fi
  ;;

  stop)
    if daemon_status; then
      echo Stopping ${DNAME} ...
      stop_daemon
      exit $?
    else
      echo ${DNAME} is not running
      exit 0
    fi
  ;;

  restart)
    stop_daemon
    start_daemon
    exit $?
  ;;

  status)
    if daemon_status; then
      echo ${DNAME} is running with PID `cat ${PID_FILE}`
      exit 0
    else
      echo ${DNAME} is not running
      exit 1
    fi
  ;;

  log)
    echo "${DLOG}"
    exit 0
  ;;

  *)
    echo "Usage: $0 {start|stop|status|restart}" >&2
    exit 1
  ;;

esac
 

install_uifile & upgrade_uifile

[
  {
    "step_title": "Client Version Selection",
    "items": [
      {
        "type": "singleselect",
        "desc": "Please select the CrashPlanPROe client version that is appropriate for your backup destination server:",
        "subitems": [
          {
            "key": "WIZARD_VER_483",
            "desc": "4.8.3",
            "defaultValue": true
          },
          {
            "key": "WIZARD_VER_480",
            "desc": "4.8.0",
            "defaultValue": false
          },
          {
            "key": "WIZARD_VER_470",
            "desc": "4.7.0",
            "defaultValue": false
          },
          {
            "key": "WIZARD_VER_460",
            "desc": "4.6.0",
            "defaultValue": false
          },
          {
            "key": "WIZARD_VER_452",
            "desc": "4.5.2",
            "defaultValue": false
          },
          {
            "key": "WIZARD_VER_450",
            "desc": "4.5.0",
            "defaultValue": false
          },
          {
            "key": "WIZARD_VER_441",
            "desc": "4.4.1",
            "defaultValue": false
          },
          {
            "key": "WIZARD_VER_430",
            "desc": "4.3.0",
            "defaultValue": false
          },
          {
            "key": "WIZARD_VER_420",
            "desc": "4.2.0",
            "defaultValue": false
          },
          {
            "key": "WIZARD_VER_370",
            "desc": "3.7.0",
            "defaultValue": false
          },
          {
            "key": "WIZARD_VER_364",
            "desc": "3.6.4",
            "defaultValue": false
          },
          {
            "key": "WIZARD_VER_363",
            "desc": "3.6.3",
            "defaultValue": false
          },
          {
            "key": "WIZARD_VER_3614",
            "desc": "3.6.1.4",
            "defaultValue": false
          },
          {
            "key": "WIZARD_VER_353",
            "desc": "3.5.3",
            "defaultValue": false
          },
          {
            "key": "WIZARD_VER_341",
            "desc": "3.4.1",
            "defaultValue": false
          },
          {
            "key": "WIZARD_VER_33",
            "desc": "3.3",
            "defaultValue": false
          }
        ]
      }
    ]
  },
  {
    "step_title": "Java Runtime Environment Selection",
    "items": [
      {
        "type": "singleselect",
        "desc": "Please select the Java version which you would like CrashPlan to use:",
        "subitems": [
          {
            "key": "WIZARD_JRE_SYS",
            "desc": "Default system Java version",
            "defaultValue": false
          },
          {
            "key": "WIZARD_JRE_CP",
            "desc": "Dedicated installation of Java 8",
            "defaultValue": true
          }
        ]
      }
    ]
  }
]
 

Changelog:

  • 0047 30/Oct/17 – Updated dedicated Java version to 8 update 151, added support for additional Intel CPUs in x18 Synology products.
  • 0046 26/Aug/17 – Updated to CrashPlan PRO 4.9, added support for migration from CrashPlan For Home to CrashPlan For Small Business (CrashPlan PRO). Please read the Migration section on this page for instructions.
  • 0045 02/Aug/17 – Updated to CrashPlan 4.8.3, updated dedicated Java version to 8 update 144
  • 0044 21/Jan/17 – Updated dedicated Java version to 8 update 121
  • 0043 07/Jan/17 – Updated dedicated Java version to 8 update 111, added support for Intel Broadwell and Grantley CPUs
  • 0042 03/Oct/16 – Updated to CrashPlan 4.8.0, Java 8 is now required, added optional dedicated Java 8 Runtime instead of the default system one including 64bit Java support on 64 bit Intel CPUs to permit memory allocation larger than 4GB. Support for non-Intel platforms withdrawn owing to Code42’s reliance on proprietary native code library libc42archive.so
  • 0041 20/Jul/16 – Improved auto-upgrade compatibility (hopefully), added option to have CrashPlan use a dedicated Java 7 Runtime instead of the default system one, including 64bit Java support on 64 bit Intel CPUs to permit memory allocation larger than 4GB
  • 0040 25/May/16 – Added cpio to the path in the running context of start-stop-status.sh
  • 0039 25/May/16 – Updated to CrashPlan 4.7.0, at each launch forced the use of the system JRE over the CrashPlan bundled Intel one, added Maven build of JNA 4.1.0 for ARMv7 systems consistent with the version bundled with CrashPlan
  • 0038 27/Apr/16 – Updated to CrashPlan 4.6.0, and improved support for Code 42 pushed updates
  • 0037 21/Jan/16 – Updated to CrashPlan 4.5.2
  • 0036 14/Dec/15 – Updated to CrashPlan 4.5.0, separate firewall definitions for management client and for friends backup, added support for DS716+ and DS216play
  • 0035 06/Nov/15 – Fixed the update to 4.4.1_59, new installs now listen for remote connections after second startup (was broken from 4.4), updated client install documentation with more file locations and added a link to a new Code42 support doc
    EITHER completely remove and reinstall the package (which will require a rescan of the entire backup set) OR alternatively please delete all except for one of the failed upgrade numbered subfolders in /var/packages/CrashPlan/target/upgrade before upgrading. There will be one folder for each time CrashPlan tried and failed to start since Code42 pushed the update
  • 0034 04/Oct/15 – Updated to CrashPlan 4.4.1, bundled newer JNA native libraries to match those from Code42, PLEASE READ UPDATED BLOG POST INSTRUCTIONS FOR CLIENT INSTALL this version introduced yet another requirement for the client
  • 0033 12/Aug/15 – Fixed version 0032 client connection issue for fresh installs
  • 0032 12/Jul/15 – Updated to CrashPlan 4.3, PLEASE READ UPDATED BLOG POST INSTRUCTIONS FOR CLIENT INSTALL this version introduced an extra requirement, changed update repair to use the chrisnelson.ca method, forced CrashPlan to prefer IPv4 over IPv6 bindings, removed some legacy version migration scripting, updated main blog post documentation
  • 0031 20/May/15 – Updated to CrashPlan 4.2, cross compiled a newer cpio binary for some architectures which were segfaulting while unpacking main CrashPlan archive, added port 4242 to the firewall definition (friend backups), package is now signed with repository private key
  • 0030 16/Feb/15 – Fixed show-stopping issue with version 0029 for systems with more than one volume
  • 0029 21/Jan/15 – Updated to CrashPlan version 3.7.0, improved detection of temp folder (prevent use of /var/@tmp), added support for Annapurna Alpine AL514 CPU (armhf) in DS2015xs, added support for Marvell Armada 375 CPU (armhf) in DS215j, abandoned practical efforts to try to support Code42’s upgrade scripts, abandoned inotify support (realtime backup) on PowerPC after many failed attempts with self-built and pre-built jtux and jna libraries, back-merged older libffi support for old PowerPC binaries after it was removed in 0028 re-write
  • 0028 22/Oct/14 – Substantial re-write:
    Updated to CrashPlan version 3.6.4
    DSM 5.0 or newer is now required
    libjnidispatch.so taken from Debian JNA 3.2.7 package with dependency on newer libffi.so.6 (included in DSM 5.0)
    jna-3.2.5.jar emptied of irrelevant CPU architecture libs to reduce size
    Increased default max heap size from 512MB to 1GB on systems with more than 1GB RAM
    Intel CPUs no longer need the awkward glibc version-faking shim to enable inotify support (for real-time backup)
    Switched to using root account – no more adding account permissions for backup, package upgrades will no longer break this
    DSM Firewall application definition added
    Tested with DSM Task Scheduler to allow backups between certain times of day only, saving RAM when not in use
    Daemon init script now uses a proper PID file instead of Code42’s unreliable method of using grep on the output of ps
    Daemon init script can be run from the command line
    Removal of bash binary dependency now Code42’s CrashPlanEngine script is no longer used
    Removal of nice binary dependency, using BusyBox equivalent renice
    Unified ARMv5 and ARMv7 external binary package (armle)
    Added support for Mindspeed Comcerto 2000 CPU (comcerto2k – armhf) in DS414j
    Added support for Intel Atom C2538 (avoton) CPU in DS415+
    Added support to choose which version of CrashPlan PROe client to download, since some servers may still require legacy versions
    Switched to .tar.xz compression for native binaries to reduce web hosting footprint
  • 0027 20/Mar/14 – Fixed open file handle limit for very large backup sets (ulimit fix)
  • 0026 16/Feb/14 – Updated all CrashPlan clients to version 3.6.3, improved handling of Java temp files
  • 0025 30/Jan/14 – glibc version shim no longer used on Intel Synology models running DSM 5.0
  • 0024 30/Jan/14 – Updated to CrashPlan PROe 3.6.1.4 and added support for PowerPC 2010 Synology models running DSM 5.0
  • 0023 30/Jan/14 – Added support for Intel Atom Evansport and Armada XP CPUs in new DSx14 products
  • 0022 10/Jun/13 – Updated all CrashPlan client versions to 3.5.3, compiled native binary dependencies to add support for Armada 370 CPU (DS213j), start-stop-status.sh now updates the new javaMemoryHeapMax value in my.service.xml to the value defined in syno_package.vars
  • 0021 01/Mar/13 – Updated CrashPlan to version 3.5.2
  • 0020 21/Jan/13 – Fixes for DSM 4.2
  • 018 Updated CrashPlan PRO to version 3.4.1
  • 017 Updated CrashPlan and CrashPlan PROe to version 3.4.1, and improved in-app update handling
  • 016 Added support for Freescale QorIQ CPUs in some x13 series Synology models, and installer script now downloads native binaries separately to reduce repo hosting bandwidth, PowerQUICC PowerPC processors in previous Synology generations with older glibc versions are not supported
  • 015 Added support for easy scheduling via cron – see updated Notes section
  • 014 DSM 4.1 user profile permissions fix
  • 013 implemented update handling for future automatic updates from Code 42, and incremented CrashPlanPRO client to release version 3.2.1
  • 012 incremented CrashPlanPROe client to release version 3.3
  • 011 minor fix to allow a wildcard on the cpio archive name inside the main installer package (to fix CP PROe client since Code 42 Software had amended the cpio file version to 3.2.1.2)
  • 010 minor bug fix relating to daemon home directory path
  • 009 rewrote the scripts to be even easier to maintain and unified as much as possible with my imminent CrashPlan PROe server package, fixed a timezone bug (tightened regex matching), moved the script-amending logic from installer.sh to start-stop-status.sh with it now applying to all .sh scripts each startup so perhaps updates from Code42 might work in future, if wget fails to fetch the installer from Code42 the installer will look for the file in the public shared folder
  • 008 merged the 14 package scripts each (7 for ARM, 7 for Intel) for CP, CP PRO, & CP PROe – 42 scripts in total – down to just two! ARM & Intel are now supported by the same package, Intel synos now have working inotify support (Real-Time Backup) thanks to rwojo’s shim to pass the glibc version check, upgrade process now retains login, cache and log data (no more re-scanning), users can specify a persistent larger max heap size for very large backup sets
  • 007 fixed a bug that broke CrashPlan if the Java folder moved (if you changed version)
  • 006 installation now fails without User Home service enabled, fixed Daylight Saving Time support, automated replacing the ARM libffi.so symlink which is destroyed by DSM upgrades, stopped assuming the primary storage volume is /volume1, reset ownership on /var/lib/crashplan and the Friends backup location after installs and upgrades
  • 005 added warning to restart daemon after 1st run, and improved upgrade process again
  • 004 updated to CrashPlan 3.2.1 and improved package upgrade process, forced binding to 0.0.0.0 each startup
  • 003 fixed ownership of /volume1/crashplan folder
  • 002 updated to CrashPlan 3.2
  • 001 30/Jan/12 – initial public release
 
 

6,692 thoughts on “CrashPlan packages for Synology NAS”

  1. catmambo

    Can’t for the life of me get the Java installation to install correctly. I have the exact same file that the popup tells me, in the shared ‘public’ folder, yet it keeps saying it can’t see it… what am I doing wrong, or can someone share a link to the correct version which has worked on theirs? I have a DS411

    Thanks

    1. catmambo

      Doh – Yup, I was using Chrome, once I used Firefox it worked a charm. Hope this helps someone else

  2. bundyo

    Hmm, seems libffi.so.5 should be linked/copied to /usr/lib after Synology system update or CrashPlan won’t sync (stops watching the selected folders).

  3. James Berry

    I have been holding off upgrading to the new DSM (I am on DSM4, but not the new patch which has just come out).

    Does everything just upgrade seamlessly or do I need to re-install crashplan and/or java afterwards?

    1. Chris

      I had to uninstall CrashPlan and reinstall it. It’s synchronising now. Don’t know why, but it appeared to run for a couple of days and never backed anything up. Just said “Waiting for backup” or something when it was set to always back up. Seems to be working now, will update when it’s done

      1. bundyo

        The issue is that libffi.so.5 (or a link, didn’t check) is removed from /usr/lib after upgrade and causes the “waiting for backup” message. Symlinking/copying it back should resolve it.

      2. patters Post author

        It’s a symlink – I fixed that in the latest update. The start-stop-status script checks for it at each startup on ARM systems.

  4. Bjarne Rasmussen

    @patters.
    I think this solution is GREAT. If only I could get it working :-)
    I know you have put in a lot of work in this – but you asked me about some details regarding my problem -> https://pcloadletter.co.uk/2012/01/30/crashplan-syno-package/comment-page-3/#comment-3632 … and I hope my answer leads to a new version/script as you mentioned.

    But is there any status on the project – or can I do anything to help / give you more information?

    Kind Regards

    1. patters Post author

      Have a try with the new version – it doesn’t assume the existence of /volume1 so hopefully it should work for you now.

  5. patters Post author

    I have incremented all the builds with quite a few enhancements – see the changelog at the bottom of the blog post.

    The CPPROe client package is updated to version 3.2.1. Don’t upgrade unless your storage provider is running that version on their backend. If you need the older CPPROe packages, they can be downloaded and installed manually from:
    http://dl.dropbox.com/u/1188556/blog/old/crashplanproe20100308-88f8281-003.spk
    http://dl.dropbox.com/u/1188556/blog/old/crashplanproe20100308-x86-003.spk

    Reply
    1. Bjarne Rasmussen

      GREAT WORK !!
      I had never hoped that a fully functional solution would “pop up” just like that :-)
      This was a really nice surprise – Thank you very much!

      My dysfunctional solution – that gave me the message “Wait a few seconds, then stop and restart the package to allow desktop client connections.” – simply needed an update, and now it works.

      Hands down – patters you are the man!

      I owe you a cold one – for sure.

      I hope that everyone else enjoys this as much as me ….. THUMBS UP !!!!

      Reply
      1. patters Post author

        No probs. Cold ones gladly received using the PayPal donate button on the right hand side of this page :)

    1. patters Post author

      I don’t think there’s much that can be done. I’ve just been researching it and I see that inotify support was integrated from glibc 2.4 onwards, but the Intel synos use 2.3.6. The ARM synos have glibc 2.5 which is why they’re fine.

      There is a patch to add it to 2.3.6, but you’d have to recompile glibc which is out of my depth:
      http://www.linuxfromscratch.org/patches/lfs/6.2/glibc-2.3.6-inotify-1.patch

      I tried to compile glibc once and got tied in knots. If someone has the knowledge it could be interesting to try, though I guess it is possible that Synology already built it with this patch included. Perhaps it’s just CrashPlan’s version-matching that is denying us the functionality. It’s odd that the /proc/sys/fs/inotify folder exists on Intel synos if inotify support wasn’t included. Perhaps someone who owns an Intel syno can compile inotify-tools and see if they work.

      Reply
      1. rwojo

        Before I posted this I did indeed get inotify-tools to compile and run successfully on my Intel DS411+. It works great on glibc =2.4. The options I see are:

        1) Update glibc: I’m not touching it :) I opened a ticket with Synology and I’ll wait. It’s too dangerous to upgrade the entire OS glibc, and it’s just too much work to get it to compile and try with LD_LIBRARY_PATH just for CP.

        2) Spoof the version of glibc from gnu_get_libc_version, add the APIs as necessary using LD_PRELOAD. This could work, maybe I’ll try it.

        3) Hack up CP to remove the glibc checks, but who knows what it would rely on that is missing from glibc 2.3.6 and only exists in glibc 2.4.

        4) Wait for Synology to upgrade DSM 4.x to glibc 2.4+
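        For reference, option 2 amounts to launching the engine with a small preloaded library that overrides gnu_get_libc_version(). In shell terms the launch would look roughly like this (the shim’s filename and location are illustrative; the Java variables are the ones the engine launcher already uses):

        # preload a shim that reports glibc >= 2.4 so CrashPlan's version check passes;
        # every other call still resolves against the system's real glibc 2.3.6
        LD_PRELOAD=/volume1/@appstore/CrashPlan/lib/glibc-version-shim.so \
            $JAVACOMMON $SRV_JAVA_OPTS -classpath $FULL_CP com.backup42.service.CPService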

      2. patters Post author

        I like the look of option 2 also. Keep us posted!
        I also raised a support request, so we may find out the official reason.

        EDIT – just noticed, do some Intel synos have glibc 2.3.6 and others have 2.4 then?

      3. patters Post author

        I got a response from CrashPlan support yesterday, but they didn’t answer my question (which glibc is the minimum requirement?). It was just a “headless systems are not supported” :(

      4. Bjarne Rasmussen

        @patters
        I’ve got a DS411+II intel syno. How can I check version of glibc so I can add info to your question?
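        One quick way to check is to run the C library itself – on most Linux systems (DSM included) it prints its version banner when executed directly:

        # prints the glibc version this DiskStation was built with (e.g. 2.3.6 or 2.5)
        /lib/libc.so.6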

      5. rwojo

        Yay, method #2 works. It will notify me of file changes instantly in the GUI when I modify the filesystem now.

        However, the counts are weird! For example if I just copy and paste the file, the count of Todo files is 1. If I paste again, it goes up to 4? Something must be wrong with the counts in the CP engine/GUI. Can someone verify that it happens on the ARM version on Synology NASes, too?

        I’ve posted the code and binary at https://github.com/wojo/synology-x86-glibc-2.4-shim

        I have to run out for a while, but everything looks great from initial testing. Perhaps you can create a beta package and early adopters can bang on it?

      6. rwojo

        Interesting, the file count is based on inotify messages for nearly anything it seems, even accessing the directory. Hah, weird – probably should notify CrashPlan if it is indeed a bug, because it artificially inflates the count.

        Of course I’m curious if this happens on the ARM version, too. When I have time I’ll verify on my Macs, too.

      7. patters Post author

        I have just found time to check this on mine. I get true counts on ARM. Replacing a file, or accessing it does not increase the count. I thought for a minute that the count did go up to 2 unexpectedly for a single jpg I pasted from a Windows machine, but that was because Windows created a thumbs.db file when I viewed it. Are you sure your testing didn’t have something similar clouding the result (a Mac creating those hidden files in each folder you accessed for instance)?

      8. rwojo

        Oh, and to your question, no, the comment I posted originally was supposed to say glibc less than 2.4, specifically 2.3.6 works with inotify. I must have just typo’d that because less than or equal doesn’t make sense either :) Does this accept HTML? Tests: < < test

      9. rwojo

        Counts are fine when I do things with ‘touch’ for example. It must be weird things that OS X does (like the .DS_Store files, etc). I did see a few rapid new files not get counted until the next update, but the count was always eventually spot on.

      10. patters Post author

        Great, that’s good news. On account of how hard all this was getting to maintain, I spent a ridiculous amount of time yesterday rewriting all the scripts and merging them down to just two – using superzebulon’s method, where each script is just a stub that invokes the main one with:
        #!/bin/sh
        . `dirname $0`/installer.sh
        `basename $0`

        At one point a stray backtick in the main script cost me several hours! The flip side of unification is that a small error in an unrelated area of the script can cause problems in another, and you get no help debugging.

        Now all product versions (CP, CP PRO, CP PROe – on ARM and Intel) use the same unified scripts, just with different vars defined at the top. I implemented your suggestions and vastly improved the upgrade process. Logs are migrated, so is the cache (no more rescanning), and you no longer have to log back in with the client afterwards.

        So I just need to incorporate your glibc version shim. I’ll put something up for testing soon.

  6. rwojo

    For NAS units with more than 1GB (I have my DS411+ upgraded to 2GB) it’d be nice to bump the Java heap size to 1GB, something like this in the start-stop-status:

    if [ $RAM -le 128 ]; then
        JAVA_MAX_HEAP=80M
    elif [ $RAM -le 256 ]; then
        JAVA_MAX_HEAP=192M
    elif [ $RAM -le 512 ]; then
        JAVA_MAX_HEAP=384M
    elif [ $RAM -le 1024 ]; then
        JAVA_MAX_HEAP=512M
    elif [ $RAM -gt 1024 ]; then
        JAVA_MAX_HEAP=1024M
    fi

    NAS units usually have a lot of data on them, and this helps CP maintain data structures for such a large number of files and folders. Perhaps this could be configured somehow in the GUI someday, or be overridden from a file that sticks around between upgrades?

    Reply
    1. rwojo

      Hmm, I think I found a bug in the start-stop-status script. My default CP run.conf contains 512m, not 512M, so the sed line isn’t replacing that value as it should. This should either be made case insensitive or just match the CP default of 512m and replace with 80m, 192m, etc.

      Reply
      1. patters Post author

        I’ll add that 1024 heap to the next version. Are you sure about that 512m value though? The ARM and Intel packages pull the same .tgz installer file from crashplan.com and mine substituted ok to 192M. I can make it case insensitive in future though, just in case.

      2. rwojo

        Hah, while doing testing I saw it change from ‘m’ to ‘M’. I have no idea why. Probably safe to just match [mM] :)

      3. rwojo

        After testing, it turns out 1GB adds quite a bit of memory pressure on my box because of what I’m running.

        Do you have a suggestion on being able to configure the max heap amount that would be user configurable and sticky between upgrades? Perhaps a config file that can be placed somewhere?

        For now, I’m moving it back to 768mb or so.

      4. rwojo

        Also, besides being able to source JAVA_MAX_HEAP from somewhere else, it looks like you can’t just change JAVA_MAX_HEAP=xxxM and rerun the script, because sed is expecting to replace 512m with something else. I changed that to the following:

        sed -i "s/-Xmx[0-9]\+[mM]/-Xmx${JAVA_MAX_HEAP}/g" ${SYNOPKG_PKGDEST}/bin/run.conf
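        And for the sticky, user-configurable override, one simple shape it could take is sketched below (this uses the syno_package.vars / USR_MAX_HEAP convention that appears later in this thread; treat the exact names as illustrative):

        # in start-stop-status: let a user-editable file override the computed heap size
        [ -f ${SYNOPKG_PKGDEST}/syno_package.vars ] && . ${SYNOPKG_PKGDEST}/syno_package.vars
        [ -n "${USR_MAX_HEAP}" ] && JAVA_MAX_HEAP=${USR_MAX_HEAP}
        sed -i "s/-Xmx[0-9]\+[mM]/-Xmx${JAVA_MAX_HEAP}/g" ${SYNOPKG_PKGDEST}/bin/run.conf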

  7. Einstijn

    Amazing job and good instructions.

    Instructions work for CrashPlan client 3.2.1 (Java 7) on a DS212+, DSM 4.0-2219. Uploading at max 3.0Mbps takes CPU to 90% (setting).

    Is it possible to do any optimizing settings on the DS212+ as well?

    Reply
    1. patters Post author

      The CPU use is caused by the hashing, compressing and de-dupe during backup. Once the seeding is done that will tail off. There’s not much to optimize really.

      Reply
      1. Einstijn

        I was hoping for better upload speeds than 2-3Mbps with my 50/50 connection, that’s all……

  8. patters Post author

    I believe I have finally solved a longstanding issue for some of you who were consistently reporting that you had to adopt your system every time you restarted the package. Well I got this problem myself after updating to version 007 and it took me a while to figure out.

    CrashPlan will save its .identity file in one of two locations. Firstly it will try to create /var/lib/crashplan/.identity. If that write operation fails, it will fall back to ~/.crashplan/.identity, which is the crashplan user’s home directory (usually /volume1/homes/crashplan).

    It seems that if both of these files exist with different contents, you get this warning every time the console starts:
    Logged out by authority. Logged in on another computer.

    …and you have to log in and adopt your existing backup records again. My guess is that those of you with /var/lib/crashplan/.identity present are people who had been running a manual install of CrashPlan, since my package never had write access to this directory until version 007. So the fix for this issue will be to delete this folder if it’s present.
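    For those affected, the clean-up amounts to something like the sketch below (this is only an outline of the fix described above; the next package version will handle it automatically):

    # if an identity left over from an old manual install is present, delete it so the
    # copy in the crashplan user's home directory remains the only one
    if [ -d /var/lib/crashplan ]; then
        rm -rf /var/lib/crashplan
    fi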

    Reply
    1. rwojo

      I ran into the same issue from manually running the CP engine from the command line (which I don’t do anymore!) while doing testing for the x86 inotify issue.

      Solving it was done by removing one of the .identity files as well, however the two that seem to be tested are /var/services/homes/crashplan/.crashplan/.identity and /var/lib/crashplan/.identity. I don’t see the usage of ~crashplan/.identity in the pecking order in service.log.0.

      Reply
      1. patters Post author

        By ‘~’ I was meaning the daemon user’s home directory (which is /var/services/homes/crashplan).
        Good point about manual startups – I’ll move that logic to the start-stop-status script instead of postinst then, so it deals with that case every startup. Did you see my update about the inotify counts?

  9. Graham Wheeler

    Since the latest update, I find that my backups are running really slowly. I have the network bandwidth set to 1Mbps when “user present” and 2Mbps when “user away”, with 50% CPU allowed. I have 25Mbps symmetrical internet connectivity. But I am only seeing about 180kbps transfer rates now, with my backup completion estimated in about 2 years from now. Is anyone else seeing this?

    Reply
  10. patters Post author

    New version out now. This one’s got Real-Time Backup support for Intel at last, thanks to rwojo. I also unified the scripts – it was getting too tricky to maintain 6 different versions. The upgrade process is way better too. No more entering your password, or re-scanning. See changelog for more details.

    Reply
    1. Michael Maillot (@Mmaillot)

      Hi,

      Nice work. Just a quick remark / advice to other users: I used to back up my whole NAS thanks to this CrashPlan package. The backup was almost complete, except for 2 files. Because of those 2 files, the backup would fail every 15′ and try to back up again in 15′. For a reason I do not know – maybe those files are locked in a certain manner – CrashPlan cannot back up aquota.user and aquota.group.

      I simply unchecked those 2 files and now everything is fine.

      Michaël.

      Reply
      1. Richard

        Those 2 files are from the Syno itself. I added them to my exceptions. Otherwise it will try over and over and over and over and over and………

  11. Ingmar Verheij

    Hi,

    Great job! This is way easier than the installation guides found on the internet!

    I’ve installed CrashPlan Pro (3.2-008, today) and tried connecting from Windows (through an SSL tunnel) but the client (3.8.2010) keeps saying “CrashPlan has been disconnected from the backup engine.”

    The following message is logged in service.log.0

    [05.02.12 21:26:07.576 WARN Sel-UI-R com.code42.nio.net.Connection ] Error building message: Unable to deserialize CommandMessage, IOException in uncompressObject
    [05.02.12 21:26:07.580 WARN Sel-UI-R com.code42.nio.net.Factory ] read() Exception com.code42.exception.DebugRuntimeException: buildMessage(): Disconnect! Exception! messageId=31689, session=Session[id=528883553596342675, closed=false, isLocal=true, lat=2012-05-02T21:26:07:574, lrt=2012-05-02T21:26:07:574, lwt=2012-05-02T21:26:03:764, #pending=1, enqueued=false, local=127.0.0.1:4243, remote=127.0.0.1:51315, usingProtoHeaders=false, usingEncryptedHeaders=false], dataBuffer=java.nio.HeapByteBuffer[pos=0 lim=10 cap=10], CommandMessage[null] , Context@10901009[/127.0.0.1:4243->/127.0.0.1:51315], com.code42.nio.net.Factory$ReadListener@1cc7c50, com.code42.exception.DebugRuntimeException: buildMessage(): Disconnect! Exception! messageId=31689, session=Session[id=528883553596342675, closed=false, isLocal=true, lat=2012-05-02T21:26:07:574, lrt=2012-05-02T21:26:07:574, lwt=2012-05-02T21:26:03:764, #pending=1, enqueued=false, local=127.0.0.1:4243, remote=127.0.0.1:51315, usingProtoHeaders=false, usingEncryptedHeaders=false], dataBuffer=java.nio.HeapByteBuffer[pos=0 lim=10 cap=10], CommandMessage[null]
    com.code42.exception.DebugRuntimeException: buildMessage(): Disconnect! Exception! messageId=31689, session=Session[id=528883553596342675, closed=false, isLocal=true, lat=2012-05-02T21:26:07:574, lrt=2012-05-02T21:26:07:574, lwt=2012-05-02T21:26:03:764, #pending=1, enqueued=false, local=127.0.0.1:4243, remote=127.0.0.1:51315, usingProtoHeaders=false, usingEncryptedHeaders=false], dataBuffer=java.nio.HeapByteBuffer[pos=0 lim=10 cap=10], CommandMessage[null]
    at com.code42.messaging.nio.MessageConnection.buildMessage(MessageConnection.java:283)
    at com.code42.messaging.nio.MessageConnection.enqueueMessages(MessageConnection.java:171)
    at com.code42.messaging.nio.MessageConnection.addMessage(MessageConnection.java:152)
    at com.code42.messaging.nio.MessageConnection.access$000(MessageConnection.java:51)
    at com.code42.messaging.nio.MessageConnection$MessageBuffer.read(MessageConnection.java:754)
    at com.code42.messaging.nio.MessageConnection.read(MessageConnection.java:680)
    at com.code42.nio.net.Factory$ReadListener.processKeys(Factory.java:769)
    at com.code42.nio.SelectorEngine.run(SelectorEngine.java:142)
    at java.lang.Thread.run(Thread.java:722)
    Caused by: com.code42.exception.DebugRuntimeException: Unable to deserialize CommandMessage, IOException in uncompressObject, CommandMessage[null]
    at com.code42.messaging.message.RequestMessage.fromBytes(RequestMessage.java:71)
    at com.code42.messaging.nio.MessageConnection.buildMessage(MessageConnection.java:274)
    … 8 more
    Caused by: com.code42.io.CompressionIOException: IOException in uncompressObject
    at com.code42.io.CompressUtility.uncompressObject(CompressUtility.java:237)
    at com.code42.messaging.message.RequestMessage.fromBytes(RequestMessage.java:66)
    … 9 more
    Caused by: java.util.zip.ZipException: Not in GZIP format
    at java.util.zip.GZIPInputStream.readHeader(GZIPInputStream.java:164)
    at java.util.zip.GZIPInputStream.(GZIPInputStream.java:78)
    at java.util.zip.GZIPInputStream.(GZIPInputStream.java:90)
    at com.code42.io.CompressUtility.uncompressObject(CompressUtility.java:231)
    … 10 more

    [05.02.12 21:26:07.582 INFO Factory$Notifier-UI0 com.backup42.service.ui.UIController ] UISession Ended after less than a minute – 528883553596342675

    Have you got any clues?
    PS: It’s on a DS212+

    Reply
    1. Ingmar Verheij

      I’ve found the reason why I got the error above. The CrashPlan client installed via your package is 3.2-008 while I’ve got 2010.03.08 installed on my desktop. After upgrading my desktop client to the same version this problem was solved.

      The reason I installed the older version was because this is mandatory for the Dutch storage provider ProBackup (http://crashplan.probackup.nl/). I wrote a blog – http://t.co/2dQLpUg4 – about how to downgrade your package to the version (and configuration) used by them.

      Reply
      1. patters Post author

        A bit further back in the comments I posted the download URLs for the older 2010 version of the PROe package I had made.

  12. Robin

    I cannot see the CrashPlan package in my list. I can see the Minecraft, Craftbucket, openremote, and the subsonic-backup packages, but no other ones.
    I am using a Synology DS209+II with DSM 4

    Reply
      1. Sven

        Hi patters,
        I am receiving the same error (CrashPlan has been disconnected from the backup engine) also on a DS412+ with DSM 4.0 2228.
        Clean install, netstat shows:
        DiskStation> netstat -an | grep ':424.'
        tcp 0 0 0.0.0.0:4243 0.0.0.0:* LISTEN
        tcp 0 0 127.0.0.1:4243 127.0.0.1:36612 TIME_WAIT

        I am still using the “old” CrashPlanPROe version 3.8.2010, because the CrashPlanPro server isn’t updated to the new 3.2 version.

        I opened a tunnel (the same way I do with every DS I used before).

        The only difference in my installation is the use of the new Java-Package from Your site and the new version from Oracle:
        ejre-1_6_0_32-fcs-b05-linux-i586-headless-05_apr_2012.tar.gz

        service.log.0 shows the following after starting the CrashPlanClient (GUI) on my Mac:

        [05.17.12 02:07:22.171 WARN Sel-UI-R com.code42.nio.net.Connection ] Error building message: Unable to deserialize CommandMessage, IOException in uncompressObject
        [05.17.12 02:07:22.173 WARN Sel-UI-R com.code42.nio.net.Factory ] read() Exception com.code42.exception.DebugRuntimeException: buildMessage(): Disconnect! Exception! messageId=31689, session=Session[id=530941240467324793, closed=false, isLocal=true, lat=2012-05-17T02:07:22:170, lrt=2012-05-17T02:07:22:170, lwt=2012-05-17T02:07:20:588, #pending=1, enqueued=false, local=127.0.0.1:4243, remote=127.0.0.1:48494, usingProtoHeaders=false, usingEncryptedHeaders=false], dataBuffer=java.nio.HeapByteBuffer[pos=0 lim=10 cap=10], CommandMessage[null] , Context@22823147[/127.0.0.1:4243->/127.0.0.1:48494], com.code42.nio.net.Factory$ReadListener@1fa5e5e, com.code42.exception.DebugRuntimeException: buildMessage(): Disconnect! Exception! messageId=31689, session=Session[id=530941240467324793, closed=false, isLocal=true, lat=2012-05-17T02:07:22:170, lrt=2012-05-17T02:07:22:170, lwt=2012-05-17T02:07:20:588, #pending=1, enqueued=false, local=127.0.0.1:4243, remote=127.0.0.1:48494, usingProtoHeaders=false, usingEncryptedHeaders=false], dataBuffer=java.nio.HeapByteBuffer[pos=0 lim=10 cap=10], CommandMessage[null]
        com.code42.exception.DebugRuntimeException: buildMessage(): Disconnect! Exception! messageId=31689, session=Session[id=530941240467324793, closed=false, isLocal=true, lat=2012-05-17T02:07:22:170, lrt=2012-05-17T02:07:22:170, lwt=2012-05-17T02:07:20:588, #pending=1, enqueued=false, local=127.0.0.1:4243, remote=127.0.0.1:48494, usingProtoHeaders=false, usingEncryptedHeaders=false], dataBuffer=java.nio.HeapByteBuffer[pos=0 lim=10 cap=10], CommandMessage[null]
        at com.code42.messaging.nio.MessageConnection.buildMessage(MessageConnection.java:283)
        at com.code42.messaging.nio.MessageConnection.enqueueMessages(MessageConnection.java:171)
        at com.code42.messaging.nio.MessageConnection.addMessage(MessageConnection.java:152)
        at com.code42.messaging.nio.MessageConnection.access$000(MessageConnection.java:51)
        at com.code42.messaging.nio.MessageConnection$MessageBuffer.read(MessageConnection.java:754)
        at com.code42.messaging.nio.MessageConnection.read(MessageConnection.java:680)
        at com.code42.nio.net.Factory$ReadListener.processKeys(Factory.java:769)
        at com.code42.nio.SelectorEngine.run(SelectorEngine.java:142)
        at java.lang.Thread.run(Thread.java:662)
        Caused by: com.code42.exception.DebugRuntimeException: Unable to deserialize CommandMessage, IOException in uncompressObject, CommandMessage[null]
        at com.code42.messaging.message.RequestMessage.fromBytes(RequestMessage.java:71)
        at com.code42.messaging.nio.MessageConnection.buildMessage(MessageConnection.java:274)
        … 8 more
        Caused by: com.code42.io.CompressionIOException: IOException in uncompressObject
        at com.code42.io.CompressUtility.uncompressObject(CompressUtility.java:237)
        at com.code42.messaging.message.RequestMessage.fromBytes(RequestMessage.java:66)
        … 9 more
        Caused by: java.io.IOException: Not in GZIP format
        at java.util.zip.GZIPInputStream.readHeader(GZIPInputStream.java:141)
        at java.util.zip.GZIPInputStream.(GZIPInputStream.java:56)
        at java.util.zip.GZIPInputStream.(GZIPInputStream.java:65)
        at com.code42.io.CompressUtility.uncompressObject(CompressUtility.java:231)
        … 10 more

        [05.17.12 02:07:22.174 INFO Factory$Notifier-UI0 com.backup42.service.ui.UIController ] UISession Ended after less than a minute – 530941240467324793
        [05.17.12 02:07:22.175 INFO Factory$Notifier-UI0 com.backup42.common.command.CliExecutor ] RUN COMMAND: auto.idle
        [05.17.12 02:07:22.175 INFO Factory$Notifier-UI0 backup42.service.backup.BackupController] UI:: AUTO IDLE… lowBandwidth=0 B/s, activeThrottleRate=20

        Thanks in advance

      2. Sven

        patters, you mentioned you don’t have access to an Intel-based Synology DS.
        If you would like, I can give you access to a brand new DS412+. It is currently not in use. Please contact me via my mail address (merckenssc@me.com) if this would help you.

        I can’t solve the error (losing connection to the backup engine); it doesn’t matter if I use a tunnel or a modified client (redirected to the IP of the DiskStation).

        Thanks in advance

      3. Sven

        patters:
        Could it be a problem of the “shown” version?
        the CP-app.log shows:
        CPVERSION = 3.2.1 – 1332824401321
        which is wrong
        I installed 3.8.2010 and this version shows on other installations (older java-package):

        CPVERSION = 1268066820719 (2010-03-08T16:47:00:719+0000)

        Thanks in advance

      4. Sven

        I think reading helps… especially me ;)
        The new CrashPlanPROe package installer downloads the CP client from the Code42 webpage. OK… that isn’t a “wanted” feature – or rather, there should be an option to install the old version (because the 3.2 (new) and 3.8.2010 (old) versions aren’t compatible).
        patters: how can I install the old version with your package?
        Thanks in advance

  13. msilano

    Hi Patters. Excellent work on this – this will inspire me to a) sign up for Crashplan and b) use your affiliate link. Just one small bit of strangeness:

    I installed on a new 412+. Everything is working, but looking at the process listing once Crashplan is up and running, there are…..count em….68 individual processes running, all with the java launch for Crashplan.

    I like multi-threaded programs, just didn’t expect this to be one of them. Each instance is taking 652m of virtual memory according to top.

    I’ve checked all of the launch scripts; they are working properly. Whatever is happening is within the Crashplan engine.

    Is this normal? If not, any suggestions as to how to proceed?

    Thanks,
    Mike

    Reply
    1. patters Post author

      That doesn’t sound right. There should only be one – it’s not multithreaded. Does this carry on when you reboot?

      Reply
      1. msilano

        Indeed it does.

        I would post the output of the PS command, but it is rather large. There are exactly 64 instances – strange coincidence. Each process looks like this:

        crashpla 649m S N /volume1/@appstore/java6/jre/bin/java -Dfile.encoding=UTF-8 -Dapp=CrashPlanService -DappBaseName=CrashPla

        That appears to match the CrashPlanEngine launch for the server. However, there aren’t multiple startups referenced in the engine_output log and the other “instances” aren’t taking CPU time, so I’m not clear as to what is happening.

        Let me know what I can do to help troubleshoot this further.

        -mike

      2. patters Post author

        Hmm strange. Can you try editing /var/packages/CrashPlan/scripts/start-stop-status and remove the LD_PRELOAD bit (the glibc shim) to see if that makes any difference? Had you ever had a manual install of CrashPlan on there?

      3. msilano

        Can’t seem to reply to the latest comment. Weird.

        In any case, the start/stop script appears to be launching once using the shim. Launching without the shim didn’t seem to make any difference.

        There was an incomplete manual install of crashplan from the synology wiki before I found your site. All remnants of that install were removed. Searches of the file system for any launch scripts or old versions show nothing.

        Thanks again for the reply and the assistance.

        -m

      4. patters Post author

        So if you kill them all then start CrashPlan in Package Center, does it immediately spawn 64 processes or do they take a while to turn up?

      5. msilano

        Very weird indeed. I can stop and start the processes using CrashPlanEngine directly; in any case, once restarted, they quickly spawn. Not all at once, but after about 1 minute we have a full allotment of processes. Trying now to run by just calling the engine once using the same script from CrashPlanEngine; same results.

        -m

      6. brunchto

        Same for me.
        The processes disappear as soon as I stop CrashPlan from Package Center; they reappear when I restart it. There’s a root process, and the others seem to be child processes

        27847 1 crashpla S N 645m 21.4 0.0 /volume1/@appstore/java6/jre/bin/java -Dfile.encoding=UTF-8 -Dapp=CrashPlanService -DappBaseName=CrashPlan
        27850 27847 crashpla S N 645m 21.4 0.0 /volume1/@appstore/java6/jre/bin/java -Dfile.encoding=UTF-8 -Dapp=CrashPlanService -DappBaseName=CrashPlan
        27853 27850 crashpla S N 645m 21.4 0.0 /volume1/@appstore/java6/jre/bin/java -Dfile.encoding=UTF-8 -Dapp=CrashPlanService -DappBaseName=CrashPlan
        27854 27850 crashpla S N 645m 21.4 0.0 /volume1/@appstore/java6/jre/bin/java -Dfile.encoding=UTF-8 -Dapp=CrashPlanService -DappBaseName=CrashPlan
        27856 27850 crashpla S N 645m 21.4 0.0 /volume1/@appstore/java6/jre/bin/java -Dfile.encoding=UTF-8 -Dapp=CrashPlanService -DappBaseName=CrashPlan

  14. patters Post author

    Anyone want to help to try and get drive spin down/sleep mode working with this? I’ve had a look at the QNAP forum (which you need to be a member of just to read) and over there someone thought it had something to do with the constant logging. I moved my log folder to /tmp and symlinked it but the drive didn’t spin down as far as I know, though my testing was pretty cursory.
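    For anyone who wants to try the same experiment, it was roughly the following (the log folder location is an assumption about the package layout – adjust to wherever your logs actually live):

    # relocate CrashPlan's log folder into /tmp and leave a symlink behind, so log
    # writes no longer hit the data volume
    mv /volume1/@appstore/CrashPlan/log /tmp/crashplan-log
    ln -s /tmp/crashplan-log /volume1/@appstore/CrashPlan/log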

    Reply
    1. eff_cee

      I thought the lack of sleep was because CrashPlan continues to open /volume1/@appstore/CrashPlan/conf/default.service.xml even outside the given ‘run between’ times?

      Is this the config file where the ‘run between’ times are saved?

      Maybe CP has to check this file to see if it has been changed?

      Regardless, it is a pain to have to cron CrashPlan to get round the issue. Did you get any further on this?

      Reply
      1. patters Post author

        No, for me it’s kind of a lost cause because Serviio also does this, even with library updates set to manual. Could be a more generic Java problem.
        I guess we could nag CrashPlan support about it, since someone reported that it used to sleep ok. However, they’re pretty consistent in stating that headless operation is not supported.

  15. msilano

    …and the plot thickens….

    The CrashPlanEngine script is indeed launching a single copy of CrashPlan; something within the jar file is triggering the additional launches.

    While this instance was behaving normally, there were another 30 processes spawned. Are we just seeing child processes spawned?

    This is an intel-based ds 412+.

    Thanks again for your help.

    -m

    bash-3.2# ./MSCrashPlanEngine start
    Starting CrashPlan Engine … Using standard startup
    JAVACOMMON /volume1/@appstore/java6/jre/bin/java
    SRV_JAVA_OPTS -Dfile.encoding=UTF-8 -Dapp=CrashPlanService -DappBaseName=CrashPlan -Xms20m -Xmx512M -Djava.net.preferIPv4Stack=true -Dsun.net.inetaddr.ttl=300 -Dnetworkaddress.cache.ttl=300 -Dsun.net.inetaddr.negative.ttl=0 -Dnetworkaddress.cache.negative.ttl=0
    FULL_CP /volume1/@appstore/CrashPlan/lib/com.backup42.desktop.jar:/volume1/@appstore/CrashPlan/lang
    TARGETDIR /volume1/@appstore/CrashPlan
    [05.05.12 14:00:05.974 INFO main root ] Locale changed to English
    [05.05.12 14:00:05.976 INFO main root ] *************************************************************
    [05.05.12 14:00:05.977 INFO main root ] *************************************************************
    [05.05.12 14:00:05.977 INFO main root ] STARTED CrashPlanService
    [05.05.12 14:00:05.980 INFO main root ] CPVERSION = 3.2.1 – 1332824401321 (2012-03-27T05:00:01:321+0000)
    [05.05.12 14:00:05.981 INFO main root ] LOCALE = English
    [05.05.12 14:00:05.983 INFO main root ] ARGS = [ ]
    [05.05.12 14:00:05.983 INFO main root ] *************************************************************
    [05.05.12 14:00:06.222 INFO main root ] Adding shutdown hook.
    [05.05.12 14:00:06.226 INFO main root ] BEGIN Copy Custom, waitForCustom=false
    [05.05.12 14:00:06.227 INFO main root ] NOT waiting for custom skin to appear in custom or .Custom
    [05.05.12 14:00:06.227 INFO main root ] No custom skin to copy from null
    [05.05.12 14:00:06.227 INFO main root ] END Copy Custom
    [05.05.12 14:00:06.239 INFO main root ] BEGIN Loading Configuration
    [05.05.12 14:00:06.365 INFO main root ] Loading from default: /volume1/@appstore/CrashPlan/conf/default.service.xml
    md5 Loaded.
    [05.05.12 14:00:06.601 INFO main root ] Loading from my xml file=conf/my.service.xml
    [05.05.12 14:00:06.713 INFO main root ] Loading ServiceConfig, newInstall=false, version=3, configDateMs=null, installVersion=1332824401321
    [05.05.12 14:00:06.714 INFO main root ] OS = Linux
    [05.05.12 14:00:06.926 INFO main root ] AuthorityLocation@29775659[ location=central.crashplan.com:443, hideAddress=false ]
    [05.05.12 14:00:06.929 INFO main root ] END Loading Configuration
    jtux Loaded.
    ./MSCrashPlanEngine: line 8: 25100 Killed /volume1/@appstore/CrashPlan/bin/nice -n 19 $JAVACOMMON $SRV_JAVA_OPTS -classpath $FULL_CP com.backup42.service.CPService

    Reply
    1. patters Post author

      I have just checked against a syno at my work that’s busy doing its first seed backup (ARM, not Intel), and I can confirm that it has only launched a single process. Since I don’t have an Intel machine I don’t think I’m going to be much use in figuring this out I’m afraid.

      Reply
      1. Richard

        What about the multiple instances on the Intel synos?
        (I’m waiting with the update because of these messages)
        What is the impact on the performance or use of the Syno or CrashPlan??

        Hope to hear soon!!

      2. msilano

        I’m leaning towards these being subprocesses as the total memory and CPU usage appears correct for one instance. The file CrashPlanEngine.PID (as per the launch script) contains 1 entry. And Crashplan listens as expected.

        Just my $.02.

      3. patters Post author

        I don’t have an Intel syno so I can’t really help. However, there is no fundamental change to how CrashPlan is started in my newer package version (apart from the Intel glibc shim – which msilano has confirmed doesn’t cause this behaviour). So, is there a chance that this has been happening all along?

      4. Richard

        I did the backup 15 mins ago.
        No problem whatsoever ;-)

        No error messages etc…

        But I don’t have the feeling the realtime scan is working… although I don’t get an error message…..

      5. patters Post author

        You can check whether real-time backup is working by opening the console and then changing or adding a file. You should see the console report it, and schedule it for the next backup in 15mins.
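        For example, from an SSH session something as simple as this should be reported by the console shortly afterwards if real-time backup is working (the path is illustrative – use any folder in your backup selection):

        # create a new file inside a backed-up share and watch the console register it
        touch /volume1/Documents/realtime-test.txt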

      6. Richard

        I tried that indeed, but it does not report it…
        Will try it again when getting back home.

      7. Richard

        Tried again..

        -Waited till the backup run finished
        -Added 10 documents
        -waited 45 minutes
        -no automatic selection scan :(

        I’ll stick to my hourly schedule… it works

        backup freq: 15min
        verify selection every: 1 hours

      8. Richard

        5 similar errors in thread 21, 25, 26, 32 and 33

        Exception in thread “Thread-21” java.lang.NullPointerException
        at com.code42.jna.LinuxPlatform.isSymlink(LinuxPlatform.java:271)
        at com.code42.jna.inotify.JNAInotifyWorker.depthFirstTraversal(JNAInotifyWorker.java:89)
        at com.code42.jna.inotify.JNAInotifyWorker.depthFirstTraversal(JNAInotifyWorker.java:116)
        at com.code42.jna.inotify.JNAInotifyWorker.depthFirstTraversal(JNAInotifyWorker.java:116)
        at com.code42.jna.inotify.JNAInotifyWorker.run(JNAInotifyWorker.java:52)
        at java.lang.Thread.run(Thread.java:662)

  16. Jim

    I have a DS212+ that I installed this on, and I did the recent update to it the other day. It does not seem to hold the network settings: if I change it to use my max upload speed it does not take effect – it seems to hang at the 300 setting.

    Reply
    1. Jim

      I discovered that having it generate thumbnails and do a backup at the same time does not work so well. Once I stopped doing the thumbnails, the speeds for CrashPlan levelled out to what I expect them to be.

      Reply
  17. david

    Is there any way to provide better instructions? I just have a new DS212j that I bought only for CrashPlan and I am stuck on the Java part. The package installer says that I need to manually download it and place it in the “public” folder, but I issue these commands:

    cd /
    find * | grep public

    Only to discover there is no public folder. I enabled SMB and figured out how to log in, again only to discover there is no public folder. I would love to create the public directory… but where? /public? /volume1/@tmp/public? It is very hard to follow without detailed instructions and I hope you understand.

    So if you could please provide a more detailed instruction like “place it in /var/tmp/public”, that would be great!

    Reply
    1. patters Post author

      Make a new top level folder in DSM’s File Browser and call it ‘public’. That will be shared automatically, so put the downloaded file in there.

      Reply
    2. Jim

      you need to make sure that the Webdav service user under system internal users has access permissions to the public folder as well

      Reply
      1. patters Post author

        Really? Wasn’t a problem for me on a four bay syno at work, though I did add full access permissions for everyone – which is how this folder is defaulted on single bay NAS systems.

  18. David

    Does anyone know if this works with the DS212+ (512MB DDR3), or is 1GB of NAS RAM a firm requirement?

    Reply
    1. Jim

      I am using on a DS212+ and it works great! however make sure that when you are doing your initial backup or large backups to crash plan you are not taxing the CPU with making thumbnails for photo station

      Reply
      1. patters Post author

        Code42’s official requirement for CrashPlan is 512MB, but even that allows for very large backup sets. My package sizes the Java heap appropriately for the available RAM, in an effort to make sure it doesn’t attempt any kind of suicidal paging to disk while backing up. Given that most home users are constrained by bandwidth I’m guessing it’s not practical for most people to backup the whole NAS. I’m only protecting around 60GB of mine, for which a Java heap of 192MB is perfectly adequate. However, at work I’m uploading 3TB of data using CrashPlan PROe from a RS411 with the same heap size. In fairness it is made up of mostly very big files though which probably reduces the overhead somewhat (master edits of videos).

      2. David

        Thanks so much — seems like a great home NAS/backup solution; the best combo I’ve seen. I’ll probably be back on this thread soon while setting mine up!

      3. Chris

        I don’t want to backup things off my DS212+ but I do want to backup to it. I would want to have several computers backup to it. Maybe 1-1.5 tb total. Anyone have experience with this? Any performance feedback would be great.

  19. Ulrik

    Just upgraded from a DS508 to a DS1812 last night, and now want to add CrashPlan from your repo. But it fails during installation with “There was a problem downloading CrashPlan_3.2.1_Linux.tgz from the official download link, which was….” (the link is correct). What could be wrong?

    Reply
    1. patters Post author

      It just uses wget so maybe there was a temporary glitch with the CrashPlan.com website. I just checked and it works for me now. Is it still failing?

      Reply
      1. Ulrik

        Yes, it is still failing. Is there a log somewhere to give me an idea of what goes wrong? I can download the file manually – also with wget from a SSH connection to the DS1812.

      2. Ulrik

        OK, I got it installed now. The install failed earlier because of a disk expansion. But now I have another problem. I have stopped and restarted the CrashPlan addon on my Syno, and edited the ServiceHost line in ui.properties, but I cannot connect with my new CrashPlan account.

  20. Rogier

    I’ve installed the Java 7 package on my DS411j in order to be able to use CrashPlan. Everything is working fine, however, after some hours Java is consuming all CPU power. The result is that I can’t connect anymore to the CrashPlan installation and the NAS becomes very slow. Currently, the only remedy is to restart the NAS once a day. Does anyone else seem to have the same issues? Does anyone know how to solve this? Any help is appreciated!

    Reply
    1. MJ

      I had the same problem on mine. I’m thinking it just doesn’t have enough RAM to handle the client. I plan to upgrade to a DS411 or 411+

      Reply
  21. Matthew

    Hi, we are currently trying to set up CrashPlan PROe on a DS1512+ and we can’t seem to get the client installer to find our server on the Synology NAS box.

    If anyone has gotten this working and could lend some assistance we would gladly pay a consulting fee.

    Mathew@northbaytek.com

    Reply
    1. patters Post author

      The naming is a little confusing. My CrashPlan PROe Synology package is only a PROe *client* (as titled in Package Center) – i.e. you’d still need to pay a provider for storage hosting, then use a client on a PC to get the syno to connect to that provider. That’s how I’m currently using it, connecting to http://crashplanuk.com. You don’t need to hack any files, you simply enter the connection URL into the GUI and log on when the client first connects to your syno.

      PROe server is what the storage provider would run on their hardware.

      Reply
      1. DJ Forman

        I thought you could have the ProE server running on the Synology box and use that as the storage provider?

      2. patters Post author

        You might be able to, but you would have to buy master keys from CrashPlan which I would expect to be quite expensive given that it’s basically aimed at the datacenter market.

      3. DJ Forman

        Thanks patters, we bought 40 perpetual licenses with 1 year of support. But of course they won’t support this implementation, nor will Synology. Not directly anyway.

        A friend of mine has this same setup deployed on his Synology but he had to get help from an outside source to get it completed. I’m hoping to get him to send me the instructions he follows.

        In fact his implementation is more complicated than mine because he uses 2 Synology boxes, one on each seaboard, that replicate to each other.

        Fingers crossed. If you are up for a quick consulting gig to help us get this set up, that would be great. Otherwise we’ll stumble along trying to make it work.

      4. patters Post author

        I see. I could be interested, however it’s already 22:38 here in London and I’m only just leaving work! Backup Exec 2012 migration and iSCSI issues…
        If by some miracle I’m feeling up to it in the next few hours I’ll let you know.

        The only problem about outside assistance for CrashPlan implementation is that it will almost certainly fail and need additional work once a new version is released – which is why I made the packages. I guess if it’s not complicated I could end up making a PROe server package. Thing is, I can’t experiment without a master key.

      5. DJ Forman

        Backup Exec issues, bleh. Sorry for your pain. I moved to Acronis and/or StorageCraft and never looked back. I’m open to tomorrow morning your time also. I’ll stay up late on my side. I’ve got several clients in the UK and Germany that I need to work with tonight so I’ll already be up.

        I’m going to try to install the ProE server manually as if the Synology were a “real” Linux server. Supposedly this should work. By the way, I’m not cheap, I’m sure I can make some consulting worth your while.

  22. DJ Forman

    We got the Crashplan ProE headless client installed on a Synology NAS. It looks like it’s listening on 4243. But no matter what I try I can’t get the Crashplan Windows client to connect. It just spits out an error about not being able to connect to port 443 on the IP of the NAS.

    I changed ui.properties, but I’m not sure if I edited it properly. It’s just a single line with fields separated by the # sign. I entered two additional lines at the bottom to point to the server and port. No luck.

    Could use some help. Happy to pay for some consulting. I need this up and running tonight so I can deploy 40 users. I’m pretty sure it’s just a minor mis-config.

    djforman1(at)yahoo.com

    Reply
    1. patters Post author

      Hmm this comment above isn’t threaded properly so it’s going to be confusing to read…

      As per the posts above, PROe client can’t really be used like this. I would guess that to achieve what you’re aiming to do, you might be able to use the normal CrashPlan package on the Syno then run the normal CrashPlan clients on your 40 computers. Then look at the ‘backup to a friend’ option, taking the friend code from the syno’s CrashPlan instance.

      I would guess that there is an upper limit on the number of friend connections though, and 40 is likely to be on the high side.

      Reply
  23. OjaSapNL

    Hello,

    For everybody that hates that your Synology hard drives don’t hibernate anymore, I have created a solution. First I was using the crontab solution, but that didn’t fit: if I have a large backup and it doesn’t fit the time window the backup will not be completed, and if you have a large one your hard drives stay online too long.

    I created a Python script that reads the CrashPlan logfile every 5 minutes. In the script you fill in the time that the CrashPlan service needs to be started. Then it starts it, and checks every 5 minutes whether it has completed. Then it sends an e-mail with the lines of the logfile, and stops the CrashPlan service. I’m currently testing it. If anybody is interested, tell me and I will give you the script.

    OjaSapNL
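    In shell terms the approach is roughly the outline below (the history log filename and the completion string are assumptions – the actual script is in Python and also e-mails the log excerpt):

    # start the package, poll the history log every 5 minutes, then stop the package
    # once a completed-backup line appears
    /var/packages/CrashPlan/scripts/start-stop-status start
    LOG=/volume1/@appstore/CrashPlan/log/history.log.0
    until grep -q "Completed backup" "$LOG" 2>/dev/null; do
        sleep 300
    done
    /var/packages/CrashPlan/scripts/start-stop-status stop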

    Reply
    1. TopL

      Hey OjaSapNL,

      I’m interested. I’m whittling down why my NAS isn’t hibernating and isolated it down to CrashPlan. If you can provide details on how your script works, that’ll be great!

      You can ping me at (please replace the splats accordingly): tl.2012 * xemaps * com

      Thanks!
      TopL

      Reply
  24. Ronald Rademaker

    Hi,

    How can I check the status of my CrashPlan in a command shell? Which location/command needs to be used?

    Confused, as I also have my old install which gives me the error:

    DiskStation> ./crashplan status
    Could not find JAR file /opt/crashplan/bin/../lib/com.backup42.desktop.jar

    The new one using this package can be stopped and started in the Package Center without issue and backup is working properly

    Thx,
    Ronald

    Reply
    1. patters Post author

      From memory, I think the CrashPlan launcher script (which my package’s start-stop-status script invokes) expects you to be in the program folder since it contains relative paths. I don’t think their launcher has a status function so I would suggest you try this:
      cd /volume1/@appstore/CrashPlan
      /var/packages/CrashPlan/scripts/start-stop-status status && echo running || echo stopped

      As you can see, packages themselves are in the @appstore folder, but the metadata from Package Center and the package scripts end up in /var/packages.

      Reply
  25. Darcy

    Fantastic work creating this package! I’ve been running some test backups on a 30-day trial, and it’s been great. Will sign up shortly with your affiliate link :)

    Reply
  26. Andrew Stuckey

    Hey Patters. Thanks so much for your work on this. Hope you can help us out with our setup.
    Will definitely give a donation if you can help us get our backups working ;). There you have that in writing!

    We bought a DS411j as our main data storage and file server. We also bought a Drobo (DAS version) to use as remote backup of the NAS using Crashplan. The plan is to seed the initial backup with the Drobo connected directly to the NAS, then move the Drobo to a friend’s house and continue with incremental backups over the internet.

    So far we’ve managed to install your CP package on the DS411j and have successfully run some test backups to the Drobo while connected directly to the NAS via USB. To do this we’re running the CrashPlan client on an iMac on the same LAN as the NAS by changing the host IP.

    Then in order to test the seeded backup over the network we unplug the Drobo from the NAS and connect it to the iMac. However CrashPlan won’t allow us to select a network drive as the backup destination. In fact, no networked machines or drives even appear in the list. We’re only able to select a local folder on the NAS or the Drobo when connected directly to the NAS.

    Are we going about this the wrong way? Have we missed a step?
    Very disheartening to have come this far only to be stumbling at the last hurdle.

    Any help would be very appreciated.
    cheers

    Reply
    1. patters Post author

      You would need to get the iMac CrashPlan GUI client connecting to the Backup Engine instance running on the iMac (so undo your conf/ui.properties modification). This iMac would then be acting as a separate CrashPlan setup (like the one you want to run at your friend’s house). I haven’t tried this, but I would imagine that you would get the friend code from that setup. Then I think you would plug in (on the iMac) the Drobo you already seeded from the NAS, and in the advanced backup destinations options (I’m not near a CrashPlan GUI to check) you can set the actual folder on disk where the friends backups are saved. I think you just attach the seeded folder that’s on the Drobo (it should be a local drive since it’s DAS and it’s plugged into the iMac). Then I’m guessing that you would once again edit ui.properties to switch the GUI client back to connect to the Synology and add a backup destination using the friend code. Hopefully it should just be aware of the existing seed backup, since the GUID will be recognised. This is all untested though. I personally just pay for CrashPlan+
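      For reference, the ui.properties change being discussed is normally a one-line edit in the desktop client’s conf/ui.properties, pointing the GUI at whichever machine is running the backup engine (the IP below is illustrative):

      # default is the local engine:
      #serviceHost=127.0.0.1
      serviceHost=192.168.1.10
      #servicePort=4243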

      Reply
      1. Andrew Stuckey

        Thanks. We considered CrashPlan+ but with > 4TB of data to backup and being in Australia we unfortunately can’t use CrashPlan’s seeding service, which makes it very impractical. To do the initial backup over the net would probably take several months!

        I’ll play around with the CP client on the iMac and see if we can get it to work… stay tuned.

      2. Andrew Stuckey

        Thanks Patters. We seem to have got it working!
        I seeded a small backup, then moved the external drive to a second Mac (on our CP account) and added the backup archive to the Mac through the CP client. It instantly recognised that the backup had come from the NAS and prompted to start synchronising. Great. Let’s hope it works over the internet to a friend’s computer once we’ve finished the initial seed.

        Resuming the seed backup…

        New issue…

        It’s backing up at ridiculously SLOW speeds and I’m having trouble isolating the bottleneck.

        Our setup is the DS411j running 4 x new 3TB Seagate Barracuda 7200rpm drives connected to a Gigabit Apple Airport Extreme. I’ve stopped all packages except CrashPlan and your Java plugin. There are also no Time Machine backups or any other processes running (as far as I’m aware).

        With the backup drive directly connected to the NAS via usb2 it’s transferring data at an agonising slow 6-9 Mbps (a far cry from the theoretical 480 Mbps of usb).

        With the backup drive connected to the Mac (which is then connected to our Gigabit Airport Express by Cat6 cable) we get even slower speeds of around 3-5 Mbps or an average of roughly 600 KB/s looking at the DSM resource monitor.

        So not even 1MB/s which is completely useless and a very far cry from the 29-85 MB/s benchmarks posted on the Synology site http://www.synology.com/products/performance.php?lang=enu.

        I haven’t done any other performance tests at this stage other than via CrashPlan, so I don’t have anything to benchmark it by. But do you have any immediate suggestions? With no other processes running could it be an issue with CrashPlan?

        Could it possibly be an AFP or SMB issue? Although this would not explain the terrible speed when connected directly to the NAS usb port.

        There also appears to be a common speed issue between Synology and Macs as documented here (http://forum.synology.com/enu/viewtopic.php?f=14&t=34172&hilit=SLOW). Maybe this might shed some light on the issue.

        Any help specifically with identifying potential problems running CP on the NAS would be much appreciated.

        cheers
        Andrew

      3. patters Post author

        Try disabling de-dupe and compression in the advanced backup options. For the ARM CPUs I think it’s a big performance hit. I hate to say it, but I think your DS411J won’t have enough RAM for such a large backup set. The default CrashPlan heap is 512MB and for backing up several terabytes, Code42 support have advised several people to use 1024MB. Only the Intel Synology models have this much RAM (Expandable to 3GB I think).

      4. Andrew Stuckey

        Have now tested the external drive using FW800 and unfortunately speeds are still the same dreary 3-5 Mbps, so we can probably rule out the usb as the bottleneck.

        For all test so far NAS CPU has only averaged 40% and RAM 60%.

      5. Andrew Stuckey

        Where are the advanced backup options? In the DSM or CrashPlan? Can’t find them.

        The RAM issue you’ve mentioned with the ARM machines seems stupid. Does this only apply to CrashPlan? And if so why does CP suck so much RAM? or are most other backup applications the same?

        There must be a viable remote backup solution for the Non-Intel models. If not Crashplan then what else?

      6. patters Post author

        CrashPlan GUI -> Settings tab -> Backup -> Advanced settings.

        CrashPlan uses RAM for keeping track of files’ checksums, block hashes, and other metadata I would guess. With several terabytes in your backup set, it’s not difficult to see how even a pretty efficient engine would need a fair amount of RAM. There is a cloud option among the official Synology apps (HiDrive), though I have seen people commenting on its lack of reliability. I have amended the Notes section to draw attention to the RAM issue in case people are shopping for Synos with the express intention of using CrashPlan for large backup sets.

        I agree it is a shame that the ARM models don’t have a DIMM slot, but then they are very cheap embedded systems that are primarily designed to do one thing, serve files – not run applications. That’s a bonus.

      7. Andrew Stuckey

        Meh… the Advanced settings are disabled on the Free plans. Seems like a CP+ feature. oh well. But you’re right about the RAM issue, as soon as I launch CP my whole system chokes. “they are primarily designed to do one thing, serve files… not run applications.” Wish I knew this before I ordered!

        So what I’m thinking now is to buy an Syno Intel NAS with more RAM, sell the brand new Drobo and keep the DS411j as the offsite backup machine. Which model Syno would you recommend buying? It’s not clear from the specs on the Synology site which models are Intel vs ARM. We’ll need something with enough grunt to run 3-4 applications simultaneously including TimeMachine and offsite backups without compromising the NAS performance. Clearly the DS411j is not up to the job which is disappointing. Will it be fine as a backup unit?

      8. Andrew Stuckey

        Another thought… when did Synology start using Intel processors? If we don’t want to spend a heap, would it be cost effective to buy a 2010-11 model with the power we need? Happy to consider older models.

  27. Samuel Manso (@samukas_m)

    Hello patters! Recently I had a problem with my crashplan installation not backing up and I was told by crashplan to:
    “2. Edit the below line in /usr/local/crashplan/bin/run.conf
    3. Find this line (near SRV_JAVA_OPTS): -Xmx512m
    4. Edit to something larger such as 640, 768, 896, or 1024. E.g.: -Xmx1024m”

    I did that, and I also changed the value set in “/volume1/@appstore/CrashPlan/syno_package.vars” as specified in your post.

    It seems to have worked, but I’m wondering if it was really needed to change the value in both, or if the “vars” file would be enough.

    Anyway… a little donation coming your way (wish I could give more)… but it’s well deserved! Thanks to your package I’m backing up 3TB online to crashplan.

    Reply
    1. patters Post author

      I’m assuming you do actually have more than 1024MB of RAM in your syno? If not then the performance may get pretty bad as it does loads of paging to disk. Are you running on Intel?
      Oh, and you only need to change the value in syno_package.vars – that value will override the one in run.conf. This setting will survive a package version upgrade.

      Reply
      1. Samuel Manso (@samukas_m)

        I switched the value to 768 at the moment just to try it out (also turned off all my other packages besides crashplan).
        I’m running an Intel (DS1511+) with only 1GB but I already bought an extra 2GB, so I’m thinking I’ll give either 1 or 1.5GB to CrashPlan and the rest to the system. What do you think? Also, for example, to give 1.5GB, should I write 1536 in the file? I’m thinking that’s the right number.

        Glad to know that I only need to change the value in one file, and that it will survive upgrades :)

      2. Samuel Manso (@samukas_m)

        Definitely worth the upgrade! :D
        I’m now running with 3GB of RAM, changed the value to 1536 on syno_package.vars and at this exact time, “java” is using 900MB of resources. Of course 1GB was not enough :)

      3. Charlie

        Can someone please advise on how to change the values in the .vars file? When I open it in vi, it appears to be a blank file. I am no command-line whiz, but can "eventually" get around. I just don't know how to add/edit this line, because I am restarting the service about 3x per day (initial 2.5TB backup on a DS1010+ with 3GB RAM).
        Thanks,
        Charlie-

      4. patters Post author

        It shouldn’t be empty. Are you logged in as ‘root’? It has the same password as your admin account. That file should look something like this:
        ~$ cat /volume1/@appstore/CrashPlan/syno_package.vars
        #uncomment to expand Java max heap size beyond prescribed value (will survive upgrades)
        #you probably only want more than the recommended 512M if you're backing up extremely large volumes of files
        #USR_MAX_HEAP=512M

        LIBFFI_SYMLINK=YES
        MANIFEST_PATH_SET=True
        ~$

      5. Charlie

        Hi Patters,
        I tried SSH and telnet as root, and when I open that file in vi it is empty. I quit without saving, tried a cat as you show in your reply, and I get a "no file exists…" error.
        I know it is running and JRE6 is installed (although it says stopped with no way to start it). Is something else improperly installed, perhaps, or could the file be in another path?
        If this helps: I was running CP+, then I uninstalled that service and installed the CP PRO version. I upgraded my CP account to try to get more throughput. It seems to be better (when it runs).
        Thanks,
        Charlie-

      6. patters Post author

        Have you tried creating that file by running:
        echo USR_MAX_HEAP=768M > /volume1/@appstore/CrashPlan/syno_package.vars

        If that doesn’t work, perhaps your NAS isn’t using /volume1 for its appstore. Try:
        ls /

      7. Charlie

        Patters,
        Well, that yields a "directory doesn't exist" error:

        DiskStation> echo USR_MAX_HEAP=768M > /volume1/@appstore/CrashPlan/syno_package.vars
        -ash: can't create /volume1/@appstore/CrashPlan/syno_package.vars: nonexistent directory
        DiskStation> ls /
        bin initrd lost+found sbin var
        dev lib mnt sys var.defaults
        etc lib64 proc tmp volume1
        etc.defaults linuxrc root usr
        DiskStation> cd ..
        DiskStation> ls
        bin initrd lost+found sbin var
        dev lib mnt sys var.defaults
        etc lib64 proc tmp volume1
        etc.defaults linuxrc root usr
        DiskStation> cd /volume1
        DiskStation> ls
        @afpd.core @postfix DataFiles aquota.user music
        @appstore @spool Documents crashplan photo
        @autoupdate @tmp Photos downloads public
        @database ATV1 Time Machine homes video
        @eaDir ATV2 aquota.group iTunes
        DiskStation>

        But, I cannot cd to @appstore – it says not found, so I don’t know if the CrashPlan directory is beneath it.

        Thanks,
        Charlie-

      8. cfpsystems

        FIGURED IT OUT!!
        Had a buddy look at it and (disclaimer: I am not a command-line whiz) found that I was trying to change directory with a "/" in front of @appstore.
        Once in, I found that the file I needed to update was under the directory CrashplanPro. Heap size adjusted and enabled – will report back after it's run a while.
        Thanks,
        Charlie-

      9. patters Post author

        Glad you got it figured out. I forgot that you had mentioned it was CP Pro, and I have been quite slow to get back to you – been very busy in real life.
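        For anyone else who ends up here, the sequence looks roughly like this (a sketch only; 768M is just an example value, and the directory name depends on which edition of the package you installed):
        cd /volume1/@appstore
        ls -d CrashPlan*                    # shows which CrashPlan directory you have
        vi CrashPlanPRO/syno_package.vars   # uncomment and edit the USR_MAX_HEAP line
        Then stop and start the package from Package Center.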

  28. Adam

    Great package, thanks for putting it together! I’ve run across an issue where after uninstalling the package from my DS1511 using package center, the package is no longer listed in the repository for installation. Any ideas how to make it reappear?! (This is not the window size bug in DSM 3.2, I only see three packages to install)

    Reply
    1. patters Post author

      Sorry, I had just updated the package and my NAS went a bit weird earlier on (no Java apps would start properly). I took the new packages down just in case. However, removing Java, rebooting and re-installing Java fixed it, so they're back on the repo now.

      Reply
  29. Scott

    Thanks for the package. I was looking at going through your link to try out CrashPlan PRO with my 1812+, but when I followed your instructions to add your package to Package Center, the CP server is showing version 3.2? Is this upgradeable to the new v4.0?

    Reply
      1. Scott

        Sorry, maybe I was mixing it up with a PROe e-mail I got at work; 3.2 looks like the current server version for the 'business' hosted version I was looking to install on my Synology at home.

  30. Pingback: Ingmar Verheij – The dutch IT guy » Backup Synology to CrashPlan Pro (on Dutch server at Pro Backup) » Ingmar Verheij - The dutch IT guy

  31. Jemima

    Hi Patters – This is great, and I hope I can get it to work. I've installed Java 7 per your instructions, but the package still gives me a 'Java is not installed or correctly configured' error. I've tried rebooting the NAS, but still no joy. I'm running a DS209. Can you offer any pointers?

    Reply
    1. Jemima

      Some more info… I am not convinced Java is running correctly – I followed the instructions and the package install completed, but I have the following in the log window of the package:

      /var/packages/java7/scripts/postinst: line 28: java: not found

      Systems installed locales:
      C
      en_US.utf8
      POSIX

      JAVA_HOME=/volume1/@appstore/java7/jre
      TZ=Europe/Brussels

      Reply
    2. Jemima

      Yup, forgive me. Java 7 isn't working for some reason, but I've now installed Java 6 without issue and the package now installs :-)

      Reply
  32. John K

    So my DS411J clearly isn't up to the task of running CrashPlan. I'm curious which models other people have had success with (I will probably need to handle at least 2TB+ of data eventually). I am looking at the DS411+II, which comes with 1GB of RAM; it's $200 more to step up to a model that has an extra RAM slot, which is more than I really want to put into this just for my personal needs.

    Reply
    1. Joe

      Hi John,

      I'm happily running CrashPlan on a 6TB DS411J. I've had to tweak the memory allocation (run.conf) to 1024M. As you know, the DS411J has only 128MB of physical memory, so there's a performance hit from paging.

      Reply
  33. Pingback: Anonymous

  34. Pingback: Synology DiskStation und CrashPlan | Alexander Benker

  35. Pingback: Confluence: Knowledge Base

  36. Ulrik

    Problem: the CrashPlan package indicated an update was available, and I let it upgrade. It failed when downloading the installer from the Code 42 website, and the result was that the package was removed! So I downloaded the installer manually, placed it in the \public share and installed the package again. This time it installed successfully, BUT when I then connected the CrashPlan client, the Synology was listed as a new CrashPlan engine (with a green indicator) and the old engine (with the same name) had a grey indicator button. Obviously my CrashPlan license was also gone. It now says I will lose all of my backups if I transfer the license to the new CrashPlan engine… what to do???

    Reply
    1. patters Post author

      You will be able to ‘adopt’ the old computer record which will recover the licence and the existing synced backup data. Look at the link about adoption in the notes section of my blog post.

      Reply
  37. Chris

    Can people please post what Syno they have, how much data they are backing up and if they would recommend that setup?

    I am currently trying to decide between the ds212 and the ds212+ for my house. I have the ds1511+ running proe at work and love it. Thanks patters!

    Reply
    1. DS411j

      I have a DS411j and I would not recommend it for large backup sets. I have 1.5TB in about 200K files. It is working thanks to the amazing work by patters, but I have to acknowledge that it is a bit slow. First, it takes about 2 days to scan the files (and a rescan happens on a regular basis). Second, the upload speed is about 5 to 6GB/day, whereas it was about 10 to 12GB/day on my computer.

      I guess it would work very well for small backup sets (<100GB), but for large sets I would definitely recommend a more powerful model (the xxx+ series).

      Reply
    2. Dave

      I have a DS110J and am backing up about 30GB without any problems. It is quite slow, but I'm guessing that is CrashPlan related? It uploads at about 300kb/s – 1000kb/s on a 10Mb/s upload connection. This script is awesome: I bought my NAS for £80, a 2TB HDD for £85 and a 4-year CrashPlan subscription for £90; all in all this is just what I need to back up my critical data. Not sure it would be so great for backing up TBs of data, but for the majority it's perfect.

      Thanks Patters.

      Reply
    3. Ulrik

      I have a DS1812, and am trying to back up almost 8TB of data. As the transfer rate on average is about 0.5Mbit/sec, it will take 5-7 years before the backup is finished. Maybe it is time for me to find another backup solution… ;-)

      Reply
  38. kloveland

    I just purchased a 1511+ and attempted to run the package. Everything looks like it installed, but I cannot connect to it with the desktop client.

    The only two clues I have are:

    1) When I run netstat I receive the following results.

    DiskStation> netstat -an | grep ':424.'
    tcp 0 0 127.0.0.1:4243 0.0.0.0:* LISTEN

    2) When I look at the engine_error.log in /opt/crashplan/log, I see the following:

    java.lang.UnsatisfiedLinkError: /opt/crashplan/libmd564.so: /opt/crashplan/libmd564.so: ELF file OS ABI invalid
    at java.lang.ClassLoader$NativeLibrary.load(Native Method)
    at java.lang.ClassLoader.loadLibrary0(Unknown Source)

    Neither of these seems right, but I don't know what to investigate from here. Any hints?

    Reply
    1. patters Post author

      No one has reported that error yet, but that doesn't necessarily mean they're not getting it. I don't have an Intel Synology so I can't troubleshoot that myself, I'm afraid. From what I remember when CrashPlan 3.2 first introduced libmd5, it ought to fall back to using the Java MD5 function if the library can't be loaded, so CrashPlan should still work for you.

      Reply
      1. kloveland

        Your package says I need to stop and restart the service, but I needed to reboot the box. Initially I had tried to install your package and, when it did not work, I tried the manual steps. This fouled things up further. I tried uninstalling everything and re-installing, but to no avail. Since this was a new DiskStation, I reinstalled the OS and ONLY installed your package. Restarting the service still did not solve it, but I restarted the box and it ran great.

        For whatever reason, it did not seem to run the post-install script when I simply stopped and started the service.

  39. pitcher

    I am using this great package on a DS209. Nice work!
    I have one question.
    My DiskStation is segmented with security groups.
    When I want to select the folders to back up, some folders are not visible due to the security settings.
    When I add the group "users" or give "others" read/write permissions, the folder becomes visible for backup.
    Which specific user needs to have permissions on this folder?

    How do I do this?

    Thx

    Reply
    1. patters Post author

      I’m not sure – on mine the crashplan user seems to have read access to everything, because it’s browsing stuff on the local filesystem, not via network shares. It seems to be the way the security is set up on Synology.

      Reply
  40. pitcher

    Hello,
    When running CrashPlan on a computer there is a log file of changed files at the following location: c:\ProgramData\CrashPlan\log\backup_files.log.0

    Is there a similar log file available on the Synology, and where?

    Thx

    Reply
    1. patters Post author

      The log file I present in the Package Center UI is the history log. You can find all the other logs in /volume1/@appstore/CrashPlan/log.
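      If you want to watch file activity as it happens, something like this should work over SSH (assuming the default /volume1 appstore path; the exact log file name may differ):
      tail -f /volume1/@appstore/CrashPlan/log/backup_files.log.0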

      Reply
  41. h-c

    Hello!

    I love your great package on my DS212+ !!! :)
    But for the past few days it has been using 100% of the CPU (although I configured 30%) and restarting very often (every 5-30 minutes)…
    I didn't change anything… so I can't explain why this happens…
    Any ideas?

    Thank you very much!

    Reply
      1. h-c

        My selection is about 850GB, and I had already uploaded 836.9GB without problems.
        (The description above says that 500MB of RAM is OK for 2TB backups.)
        I reinstalled CrashPlan on my DiskStation yesterday and connected to my backup… but that did not work…

      2. h-c

        And this reinstall seems to have deleted my 836.9GB… :(((
        And now it says "initial backup not complete" / "Space used: 0MB"

      3. patters Post author

        You upgraded from what? A previous version of my package? Or a manual install? You should have been able to adopt the old backup records regardless. If it wiped your data, you must have answered the first question in the CrashPlan GUI wrong (the “Is this a new computer” one).

  42. Ryan

    I just installed embedded Java onto my own personal 1511+, after installing it on a client's NAS and setting up their CrashPlan PROe client successfully; however, I cannot get Java to even start on my 1511+. Java is version 1.6.0_32-009.

    Reply
      1. Ryan

        Okay, fair enough. I missed that part on the 1512+ I set up for the client. However, I cannot get CrashPlan PROe to start on my 1511+, even though it is running on my client's 1512+. It just says "Stopped", and when I click start nothing seems to happen. I cannot connect to it remotely either.

      2. Ryan

        More info here: it appears the CP PROe installer is having an issue with the start-stop-status script at the end of the install. I don't get a log folder in the CrashPlan PROe app when I click on More Info and then Log. BTW, this is running v010.

      3. Ryan

        Even more info: I completely reformatted the 1511+ back to factory defaults and still get the same issue with only Java and the CrashPlan PROe client installed.

      4. MD Sharma (@drmdsharma)

        Patters, kudos for this work mate. On a DS1512+, Java SE for Embedded v6 installs fine as per your instructions and the package install for 3.2.1-010 executes without errors; however, the CrashPlanEngine won't start at all.

        Here is an ls of /volume1/@appstore/CrashPlanPROe/

        drwxr-xr-x 3 crashpla root 4096 Jul 7 17:14 .
        drwxr-xr-x 6 root root 4096 Jul 7 17:14 ..
        drwxr-xr-x 2 crashpla root 4096 Jul 7 17:14 bin
        -rw-r--r-- 1 crashpla root 239 Jul 7 17:14 install.vars
        -rw-rw-rw- 1 crashpla root 5702 Apr 29 12:30 lib
        -rw-r--r-- 1 crashpla root 217 Jul 7 17:14 syno_package.vars

        going into bin and executing the engine gives this error message:
        ./CrashPlanEngine
        Could not find JAR file ./../lib/com.backup42.desktop.jar

        and clearly the JAR file does not really exist, as the lib entry in the listing above is a file, not a folder..

        any tips?

  43. MD Sharma (@drmdsharma)

    Yo Patters buddy..

    RE my earlier message.. did a bit more testing.. turns out that the cpi file not unzipping is a big part of this puzzle.. if I unzip it manually then the CrashPlanEngine can be started manually.. and it even connects to the backup service etc.. but the joy does not last for long.. the “package” cannot be started or stopped from the synology GUI controls.. something to do with the start-stop script perhaps..

    anyway.. can you help fiddle with this buggy install for the PROe client install? the regular and PRO installs work fine..

    Reply
    1. patters Post author

      Code 42 had incremented the version of the CPI archive inside (from 3.2.1 to 3.2.1.2) since I last released a syno version. I have amended the package script to use a wildcard on the CPI archive name, so this sort of thing won't cause a disruption in future.
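      In other words, the script now locates the archive with a glob rather than a hard-coded name, something along these lines (a simplified sketch, not the literal package code; ${TMP_DIR} is just a placeholder for wherever the installer was extracted):
      CPI_FILE=`ls ${TMP_DIR}/*.cpi 2>/dev/null | head -n 1`
      [ -n "${CPI_FILE}" ] || { echo "No .cpi archive found" >&2; exit 1; }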

      Reply
      1. Ryan

        Just wanted to let you know that the 011 version seems to have fixed my issue above as well. Will donate in a few. Thanks for your time! Your support is much better than Code 42’s own support.

  44. Joe

    Great work. Quick question. After you install the package, can you delete the installer file from public? Or do you have to leave it there?

    Thanks

    Reply
