CrashPlan packages for Synology NAS

UPDATE – The instructions and notes on this page apply to all three versions of the package hosted on my repo: CrashPlan, CrashPlan PRO, and CrashPlan PROe.

CrashPlan is a popular online backup solution which supports continuous syncing. With this your NAS can become even more resilient – it could even get stolen or destroyed and you would still have your data. Whilst you can pay a small monthly charge for a storage allocation in the Cloud, one neat feature CrashPlan offers is for individuals to collaboratively back up their important data to each other – for free! You could install CrashPlan on your laptop and have it continuously protecting your documents to your NAS, even whilst away from home.


CrashPlan is a Java application, and one that’s typically difficult to install on a NAS – therefore an obvious candidate for me to simplify into a package, given that I’ve made a few others. I tried and failed a few months ago, getting stuck at compiling the Jtux library for ARM CPUs (the Oracle Java for Embedded doesn’t come with any headers).

I noticed a few CrashPlan setup guides linking to my Java package, and decided to try again based on these: Kenneth Larsen’s blog post, the Vincesoft blog article for installing on ARM processor Iomega NAS units, and this handy PDF document which is a digest of all of them, complete with download links for the additional compiled ARM libraries. I used the PowerPC binaries Christophe had compiled on his chreggy.fr blog, so thanks go to him. I wanted to make sure the package didn’t require the NAS to be bootstrapped, so I picked out the few generic binaries that were needed (bash, nice and cpio) directly from the Optware repo.

UPDATE – For version 3.2 I also had to identify and then figure out how to compile Tim Macinta’s fast MD5 library, to fix the supplied libmd5.so on ARM systems (CrashPlan only distributes libraries for x86). I’m documenting that process here in case more libs are required in future versions. I identified it from the error message in log/engine_error.log and by running objdump -x libmd5.so. I could see that the same Java_com_twmacinta_util_MD5_Transform_1native function mentioned in the error was present in the x86 lib but not in my compiled libmd5.so from W3C Libwww. I took the headers from an install of OpenJDK on a regular Ubuntu desktop. I then used the Linux x86 source from the download bundle on Tim’s website – the closest match – and compiled it directly on the syno using the command line from a comment in another version of that source:
gcc -O3 -shared -I/tmp/jdk_headers/include /tmp/fast-md5/src/lib/arch/linux_x86/MD5.c -o libmd5.so
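For reference, this is roughly the kind of symbol check described above – a sketch only, with hypothetical file names, grepping for the function named in the log/engine_error.log message:

objdump -x libmd5-x86.so | grep MD5_Transform
#the working x86 lib exports Java_com_twmacinta_util_MD5_Transform_1native
objdump -x libmd5-libwww.so | grep MD5_Transform
#no output from the W3C Libwww build – the wrong library, hence the engine error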

Aside from the challenges of getting the library dependencies fixed for ARM and QorIQ PowerPC systems, there was also the matter of compliance – Code 42 Software’s EULA prohibits redistribution of their work. I had to make the syno package download CrashPlan for Linux (after the end user agrees their EULA), then I had to write my own script to extract this archive and mimic their installer, since their installer is interactive. It took a lot of slow testing, but I managed it!


My most recent package version introduces handling of the automatic updates which Code 42 sometimes publish to the clients. This proved to be quite a challenge to get working, as testing was very laborious. I can confirm that it worked with the update from CrashPlan PRO 3.2 to 3.2.1, and from CrashPlan 3.2.1 to 3.4.1.


Installation

  • This package is for Marvell Kirkwood, Marvell Armada 370/XP, Intel and Freescale QorIQ/PowerQUICC PowerPC CPUs only, so please check which CPU your NAS has (see the snippet after this list). It will work on an unmodified NAS – no hacking or bootstrapping required. Note that it will only work on the older PowerQUICC PowerPC models if they are running DSM 5.0. It is technically possible to run CrashPlan on older DSM versions, but it requires chroot-ing to a Debian install; Christophe from chreggy.fr has recently released packages to automate this.
  • In the User Control Panel in DSM, enable the User Homes service.
  • Install the package directly from Package Center in DSM: in Settings -> Package Sources, add my package repository URL, which is http://packages.pcloadletter.co.uk.
  • You will need to install one of my Java SE Embedded packages first (Java 6 or 7). Read the instructions on that page carefully too.
  • If you previously installed CrashPlan manually using the Synology Wiki, you can find uninstall instructions here.
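If you’re not sure which CPU your NAS has, you can check from an SSH session before adding the repository – the example outputs here are purely illustrative:

    uname -m            #kernel architecture, e.g. armv5tel, ppc or x86_64
    cat /proc/cpuinfo   #processor details, e.g. model name : Intel(R) Atom(TM) CPU C2538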
 

Notes

  • The package downloads the CrashPlan installer directly from Code 42 Software, following acceptance of their EULA. I am complying with their wish that no one redistributes it.
  • CrashPlan is installed in headless mode – backup engine only. This is configured by a desktop client, but operates independently of it.
  • The engine daemon script checks the amount of system RAM and scales the Java heap size appropriately (up to the default maximum of 512MB). If you are backing up very large backup sets you can override this persistently by editing /volume1/@appstore/CrashPlan/syno_package.vars (see the example after this list). If you’re considering buying a NAS purely to use CrashPlan and intend to back up more than a few hundred GB, then I strongly advise buying one of the Intel models, which come with 1GB RAM and can be upgraded to 3GB very cheaply; RAM is very limited on the ARM models. The 128MB of RAM on the J series means CrashPlan is running with only one fifth of the recommended heap size, so I doubt it’s viable for backing up very much at all. My DS111 has 256MB of RAM and currently backs up around 60GB with no issues. I have found that a 512MB heap was insufficient to back up more than 2TB of files on a Windows server – it kept restarting the backup engine every few minutes until I increased the heap to 1024MB.
  • As with my other syno packages, the daemon user account password is randomized using the openssl binary when the account is created. DSM Package Center runs as the root user, so my script starts the package using an su command, which does not require the password. This means that you can change the password yourself and CrashPlan will still work.
  • The default location for saving friends’ backups is set to /volume1/crashplan/backupArchives (where /volume1 is your primary storage volume) to eliminate the chance of them being destroyed accidentally by uninstalling the package.
  • The first time you run the server you will need to stop it and restart it before you can connect the client. This is because a config file that’s only created on first run needs to be edited by one of my scripts. The engine is then configured to listen on all interfaces on the default port 4243.
  • Once the engine is running, you can manage it by installing CrashPlan on another computer, and editing the file conf/ui.properties on that computer so that this line:
    #serviceHost=127.0.0.1
    is uncommented (by removing the hash symbol) and set to the IP address of your NAS, e.g.:
    serviceHost=192.168.1.210
    On Windows you can also disable the CrashPlan service if you will only use the client.
  • If you need to manage CrashPlan from a remote location, I suggest you do so using SSH tunnelling as per this support document (see the sketch after this list).
  • The package supports upgrading to future versions while preserving the machine identity, logs, login details, and cache. Upgrades can now take place without requiring a login from the client afterwards.
  • If you remove the package completely and re-install it later, you can re-attach to previous backups. When you log in to the Desktop Client with your existing account after a re-install, you can select “adopt computer” to merge the records, and preserve your existing backups. I haven’t tested whether this also re-attaches links to friends’ CrashPlan computers and backup sets, though the latter does seem possible in the Friends section of the GUI. It’s probably a good idea to test that this survives a package reinstall before you start relying on it. Sometimes, particularly with CrashPlan PRO I think, the adopt option is not offered. In this case you can log into CrashPlan Central and retrieve your computer’s GUID. On the CrashPlan client, double-click on the logo in the top right and you’ll enter a command line mode. You can use the GUID command to change the system’s GUID to the one you just retrieved from your account.
  • The log which is displayed in the package’s Log tab is actually the activity history. If you’re trying to troubleshoot an issue you will need to use an SSH session to inspect the two engine log files (see the commands after this list), which are:
    /volume1/@appstore/CrashPlan/log/engine_output.log
    /volume1/@appstore/CrashPlan/log/engine_error.log
  • When CrashPlan downloads and attempts to run an automatic update, the update script will most likely fail and stop the package. This is typically caused by syntax differences in the Synology versions of certain Linux shell commands (like rm, mv, or ps). If this happens, wait several minutes before taking action, because the update script tries to restart CrashPlan 10 times at 10 second intervals. After that, simply start the package again in Package Center and my scripts will fix the update, then run it. One final package restart is required before you can connect with the CrashPlan Desktop client (remember to update that too).
  • After their backup is seeded some users may wish to schedule the CrashPlan engine using cron so that it only runs at certain times. This is particularly useful on ARM systems because CrashPlan currently prevents hibernation while it is running (unresolved issue, reported to Code 42). To schedule, edit /etc/crontab and add the following entries for starting and stopping CrashPlan:
    55 2 * * * root /var/packages/CrashPlan/scripts/start-stop-status start
    0  4 * * * root /var/packages/CrashPlan/scripts/start-stop-status stop

    This example would configure CrashPlan to run daily between 02:55 and 04:00. By default CrashPlan will scan the whole backup selection for changes at 3:00am, so this window is ideal. The simplest way to edit crontab if you’re not really confident with Linux is to install Merty’s Config File Editor package, which (since DSM 4.2) also requires the official Synology Perl package to be installed. After editing crontab you will need to restart the cron daemon for the changes to take effect:
    /usr/syno/etc.defaults/rc.d/S04crond.sh stop
    /usr/syno/etc.defaults/rc.d/S04crond.sh start

    It is vitally important that you do not improvise your own startup commands or use a different account because this will most likely break the permissions on the config files, causing additional problems. The package scripts are designed to be run as root, and they will in turn invoke the CrashPlan engine using its own dedicated user account.
  • If you update DSM later, you will need to re-install the Java package or else UTF-8 and locale support will be broken by the update.
  • If you decide to sign up for one of CrashPlan’s paid backup services as a result of my work on this, I would really appreciate it if you could use this affiliate link, or consider donating using the PayPal button on the right.
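As promised in the note about Java heap size above, here is a minimal example of a persistent override – the 1024M figure is purely illustrative, not a recommendation:

    #in /volume1/@appstore/CrashPlan/syno_package.vars
    USR_MAX_HEAP=1024M

Stop and start the package afterwards so that the startup script re-writes the -Xmx value in bin/run.conf.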
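For the remote management note above, the SSH tunnel looks something like this – a sketch only: the local port 4200 is arbitrary, and the servicePort setting in the client’s conf/ui.properties is my assumption based on Code 42’s support document rather than something shown elsewhere on this page:

    #run on the computer with the CrashPlan Desktop client installed
    ssh -L 4200:localhost:4243 root@your.nas.address
    #then in that client's conf/ui.properties (assumed settings):
    #serviceHost=127.0.0.1
    #servicePort=4200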
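And for the troubleshooting note, the two engine log files can be followed live in an SSH session:

    tail -f /volume1/@appstore/CrashPlan/log/engine_output.log
    tail -f /volume1/@appstore/CrashPlan/log/engine_error.log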
 

Package scripts

For information, here are the package scripts so you can see what the package is going to do. You can get more information about how packages work by reading the Synology Package wiki.
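Roughly speaking, Package Center drives these scripts through the package lifecycle as follows (my own summary, so treat it as a sketch):

install      preinst -> postinst
uninstall    preuninst -> postuninst
upgrade      preupgrade -> uninstall of the old version -> install of the new version -> postupgrade
runtime      start-stop-status.sh start|stop|status|log

This is why postupgrade has to reset file ownership: the daemon user is deleted and recreated with a new UID during the uninstall/reinstall step in the middle of an upgrade.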

installer.sh

#!/bin/sh

#--------CRASHPLAN installer script
#--------package maintained at pcloadletter.co.uk

DOWNLOAD_PATH="http://download.crashplan.com/installs/linux/install/${SYNOPKG_PKGNAME}"
[ "${SYNOPKG_PKGNAME}" == "CrashPlan" ] && DOWNLOAD_FILE="CrashPlan_3.6.3_Linux.tgz"
[ "${SYNOPKG_PKGNAME}" == "CrashPlanPRO" ] && DOWNLOAD_FILE="CrashPlanPRO_3.6.3_Linux.tgz"
[ "${SYNOPKG_PKGNAME}" == "CrashPlanPROe" ] && DOWNLOAD_FILE="CrashPlanPROe_3.6.3_Linux.tgz"
DOWNLOAD_URL="${DOWNLOAD_PATH}/${DOWNLOAD_FILE}"
CPI_FILE="${SYNOPKG_PKGNAME}_*.cpi"
EXTRACTED_FOLDER="${SYNOPKG_PKGNAME}-install"
DAEMON_USER="`echo ${SYNOPKG_PKGNAME} | awk '{print tolower($0)}'`"
DAEMON_PASS="`openssl rand -base64 12 2>/dev/null`"
DAEMON_ID="${SYNOPKG_PKGNAME} daemon user"
DAEMON_HOME="/var/services/homes/${DAEMON_USER}"
OPTDIR="${SYNOPKG_PKGDEST}"
VARS_FILE="${OPTDIR}/install.vars"
ENGINE_SCRIPT="CrashPlanEngine"
SYNO_CPU_ARCH="`uname -m`"
[ "${SYNO_CPU_ARCH}" == "x86_64" ] && SYNO_CPU_ARCH="i686"
NATIVE_BINS_URL="http://packages.pcloadletter.co.uk/downloads/crashplan-native-${SYNO_CPU_ARCH}.tgz"   
NATIVE_BINS_FILE="`echo ${NATIVE_BINS_URL} | sed -r "s%^.*/(.*)%\1%"`"
INSTALL_FILES="${DOWNLOAD_URL} ${NATIVE_BINS_URL}"
TEMP_FOLDER="`find / -maxdepth 2 -name '@tmp' | head -n 1`"
#the Manifest folder is where friends' backup data is stored
#we set it outside the app folder so it persists after a package uninstall
MANIFEST_FOLDER="/`echo $TEMP_FOLDER | cut -f2 -d'/'`/crashplan"
LOG_FILE="${SYNOPKG_PKGDEST}/log/history.log.0"
UPGRADE_FILES="syno_package.vars conf/my.service.xml conf/service.login conf/service.model"
UPGRADE_FOLDERS="log cache"

source /etc/profile
PUBLIC_FOLDER="`cat /usr/syno/etc/smb.conf | sed -r '/\/public$/!d;s/^.*path=(\/volume[0-9]{1,4}\/public).*$/\1/'`"


preinst ()
{
  if [ -z ${PUBLIC_FOLDER} ]; then
    echo "A shared folder called 'public' could not be found - note this name is case-sensitive. "
    echo "Please create this using the Shared Folder DSM Control Panel and try again."
    exit 1
  fi

  if [ -z ${JAVA_HOME} ]; then
    echo "Java is not installed or not properly configured. JAVA_HOME is not defined. "
    echo "Download and install the Java Synology package from http://wp.me/pVshC-z5"
    exit 1
  fi
  
  if [ ! -f ${JAVA_HOME}/bin/java ]; then
    echo "Java is not installed or not properly configured. The Java binary could not be located. "
    echo "Download and install the Java Synology package from http://wp.me/pVshC-z5"
    exit 1
  fi
  
  #is the User Home service enabled?
  UH_SERVICE=maybe
  synouser --add userhometest Testing123 "User Home test user" 0 "" ""
  UHT_HOMEDIR=`cat /etc/passwd | sed -r '/User Home test user/!d;s/^.*:User Home test user:(.*):.*$/\1/'`
  if echo $UHT_HOMEDIR | grep '/var/services/homes/' > /dev/null; then
    if [ ! -d $UHT_HOMEDIR ]; then
      UH_SERVICE=false
    fi
  fi
  synouser --del userhometest
  #remove home directory (needed since DSM 4.1)
  [ -e /var/services/homes/userhometest ] && rm -r /var/services/homes/userhometest
  if [ "${UH_SERVICE}" == "false" ]; then
    echo "The User Home service is not enabled. Please enable this feature in the User control panel in DSM."
    exit 1
  fi
  
  cd ${TEMP_FOLDER}
  for WGET_URL in ${INSTALL_FILES}
  do
    WGET_FILENAME="`echo ${WGET_URL} | sed -r "s%^.*/(.*)%\1%"`"
    [ -f ${TEMP_FOLDER}/${WGET_FILENAME} ] && rm ${TEMP_FOLDER}/${WGET_FILENAME}
    wget ${WGET_URL}
    if [[ $? != 0 ]]; then
      if [ -d ${PUBLIC_FOLDER} ] && [ -f ${PUBLIC_FOLDER}/${WGET_FILENAME} ]; then
        cp ${PUBLIC_FOLDER}/${WGET_FILENAME} ${TEMP_FOLDER}
      else     
        echo "There was a problem downloading ${WGET_FILENAME} from the official download link, "
        echo "which was \"${WGET_URL}\" "
        echo "Alternatively, you may download this file manually and place it in the 'public' shared folder. "
        exit 1
      fi
    fi
  done
 
  exit 0
}


postinst ()
{
  #create daemon user
  synouser --add ${DAEMON_USER} ${DAEMON_PASS} "${DAEMON_ID}" 0 "" ""
  
  #save the daemon user's homedir as variable in that user's profile
  #this is needed because new users seem to inherit a HOME value of /root which they have no permissions for.
  su - ${DAEMON_USER} -s /bin/sh -c "echo export HOME=\'${DAEMON_HOME}\' >> .profile"

  #extract CPU-specific additional binaries
  mkdir ${SYNOPKG_PKGDEST}/bin
  cd ${SYNOPKG_PKGDEST}/bin
  tar xzf ${TEMP_FOLDER}/${NATIVE_BINS_FILE} && rm ${TEMP_FOLDER}/${NATIVE_BINS_FILE}

  #extract main archive
  cd ${TEMP_FOLDER}
  tar xzf ${TEMP_FOLDER}/${DOWNLOAD_FILE} && rm ${TEMP_FOLDER}/${DOWNLOAD_FILE} 
  
  #extract cpio archive
  cd ${SYNOPKG_PKGDEST}
  cat "${TEMP_FOLDER}/${EXTRACTED_FOLDER}"/${CPI_FILE} | gzip -d -c | ${SYNOPKG_PKGDEST}/bin/cpio -i --no-preserve-owner
  
  echo "#uncomment to expand Java max heap size beyond prescribed value (will survive upgrades)" > ${SYNOPKG_PKGDEST}/syno_package.vars
  echo "#you probably only want more than the recommended 512M if you're backing up extremely large volumes of files" >> ${SYNOPKG_PKGDEST}/syno_package.vars
  echo "#USR_MAX_HEAP=512M" >> ${SYNOPKG_PKGDEST}/syno_package.vars
  echo >> ${SYNOPKG_PKGDEST}/syno_package.vars

  #the following Package Center variables will need retrieving if launching CrashPlan via cron
  echo "CRON_SYNOPKG_PKGNAME='${SYNOPKG_PKGNAME}'" >> ${SYNOPKG_PKGDEST}/syno_package.vars
  echo "CRON_SYNOPKG_PKGDEST='${SYNOPKG_PKGDEST}'" >> ${SYNOPKG_PKGDEST}/syno_package.vars

  cp ${TEMP_FOLDER}/${EXTRACTED_FOLDER}/scripts/${ENGINE_SCRIPT} ${OPTDIR}/bin
  cp ${TEMP_FOLDER}/${EXTRACTED_FOLDER}/scripts/run.conf ${OPTDIR}/bin
  mkdir -p ${MANIFEST_FOLDER}/backupArchives    
  chown -R ${DAEMON_USER} ${MANIFEST_FOLDER}
  
  #save install variables which Crashplan expects its own installer script to create
  echo TARGETDIR=${SYNOPKG_PKGDEST} > ${VARS_FILE}
  echo BINSDIR=/bin >> ${VARS_FILE}
  echo MANIFESTDIR=${MANIFEST_FOLDER}/backupArchives >> ${VARS_FILE}
  #leave these ones out which should help upgrades from Code42 to work (based on examining an upgrade script)
  #echo INITDIR=/etc/init.d >> ${VARS_FILE}
  #echo RUNLVLDIR=/usr/syno/etc/rc.d >> ${VARS_FILE}
  echo INSTALLDATE=`date +%Y%m%d` >> ${VARS_FILE}
  #write JAVA_HOME unexpanded so the Java path is resolved at runtime (survives Java package upgrades)
  echo JAVACOMMON=\${JAVA_HOME}/bin/java >> ${VARS_FILE}
  cat ${TEMP_FOLDER}/${EXTRACTED_FOLDER}/install.defaults >> ${VARS_FILE}
  
  #remove temp files
  rm -r ${TEMP_FOLDER}/${EXTRACTED_FOLDER}
  
  #change owner of CrashPlan folder tree
  chown -R ${DAEMON_USER} ${SYNOPKG_PKGDEST}
  
  exit 0
}


preuninst ()
{
  #make sure engine is stopped
  su - ${DAEMON_USER} -s /bin/sh -c "${OPTDIR}/bin/${ENGINE_SCRIPT} stop"
  sleep 2
  
  exit 0
}


postuninst ()
{
  if [ -f ${SYNOPKG_PKGDEST}/syno_package.vars ]; then
    source ${SYNOPKG_PKGDEST}/syno_package.vars
  fi

  if [ "${LIBFFI_SYMLINK}" == "YES" ]; then
    rm /lib/libffi.so.5
  fi
  
  #if it doesn't exist, but is still a link then it's a broken link and should also be deleted
  if [ ! -e /lib/libffi.so.5 ]; then
    [ -L /lib/libffi.so.5 ] && rm /lib/libffi.so.5
  fi
    
  #remove daemon user
  synouser --del ${DAEMON_USER}
  
  #remove daemon user's home directory (needed since DSM 4.1)
  [ -e /var/services/homes/${DAEMON_USER} ] && rm -r /var/services/homes/${DAEMON_USER}
  
 exit 0
}

preupgrade ()
{
  #make sure engine is stopped
  su - ${DAEMON_USER} -s /bin/sh -c "${OPTDIR}/bin/${ENGINE_SCRIPT} stop"
  sleep 2
  
  #if identity and config data exists back it up
  if [ -d ${DAEMON_HOME}/.crashplan ]; then
    mkdir -p ${SYNOPKG_PKGDEST}/../${DAEMON_USER}_data_mig/conf
    mv ${DAEMON_HOME}/.crashplan ${SYNOPKG_PKGDEST}/../${DAEMON_USER}_data_mig
    for FILE_TO_MIGRATE in ${UPGRADE_FILES}; do
      if [ -f ${OPTDIR}/${FILE_TO_MIGRATE} ]; then
        cp ${OPTDIR}/${FILE_TO_MIGRATE} ${SYNOPKG_PKGDEST}/../${DAEMON_USER}_data_mig/${FILE_TO_MIGRATE}
      fi
    done
    for FOLDER_TO_MIGRATE in ${UPGRADE_FOLDERS}; do
      if [ -d ${OPTDIR}/${FOLDER_TO_MIGRATE} ]; then
        mv ${OPTDIR}/${FOLDER_TO_MIGRATE} ${SYNOPKG_PKGDEST}/../${DAEMON_USER}_data_mig
      fi
    done
  fi

  exit 0
}


postupgrade ()
{
  #use the migrated identity and config data from the previous version
  if [ -d ${SYNOPKG_PKGDEST}/../${DAEMON_USER}_data_mig/.crashplan ]; then
    mv ${SYNOPKG_PKGDEST}/../${DAEMON_USER}_data_mig/.crashplan ${DAEMON_HOME}
    for FILE_TO_MIGRATE in ${UPGRADE_FILES}; do
      if [ -f ${SYNOPKG_PKGDEST}/../${DAEMON_USER}_data_mig/${FILE_TO_MIGRATE} ]; then
        mv ${SYNOPKG_PKGDEST}/../${DAEMON_USER}_data_mig/${FILE_TO_MIGRATE} ${OPTDIR}/${FILE_TO_MIGRATE}
      fi
    done
    for FOLDER_TO_MIGRATE in ${UPGRADE_FOLDERS}; do
    if [ -d ${SYNOPKG_PKGDEST}/../${DAEMON_USER}_data_mig/${FOLDER_TO_MIGRATE} ]; then
      mv ${SYNOPKG_PKGDEST}/../${DAEMON_USER}_data_mig/${FOLDER_TO_MIGRATE} ${OPTDIR}
    fi
    done
    rmdir ${SYNOPKG_PKGDEST}/../${DAEMON_USER}_data_mig/conf
    rmdir ${SYNOPKG_PKGDEST}/../${DAEMON_USER}_data_mig
    
    #make CrashPlan log entry
    TIMESTAMP="`date +%D` `date +%I:%M%p`"
    echo "I ${TIMESTAMP} Synology Package Center updated ${SYNOPKG_PKGNAME} to version ${SYNOPKG_PKGVER}" >> ${LOG_FILE}
    
    #daemon user has been deleted and recreated so we need to reset ownership (new UID)
    chown -R ${DAEMON_USER} ${DAEMON_HOME}/.crashplan
    chown -R ${DAEMON_USER} ${SYNOPKG_PKGDEST}
    
    #read manifest location from the migrated XML config, and reset ownership on that path too
    if [ -f ${SYNOPKG_PKGDEST}/conf/my.service.xml ]; then
      MANIFEST_FOLDER=`cat ${SYNOPKG_PKGDEST}/conf/my.service.xml | grep "<manifestPath>" | cut -f2 -d'>' | cut -f1 -d'<'`
      chown -R ${DAEMON_USER} ${MANIFEST_FOLDER}
    fi
    
    #the following Package Center variables will need retrieving if launching CrashPlan via cron
    grep "^CRON_SYNOPKG_PKGNAME" ${SYNOPKG_PKGDEST}/syno_package.vars > /dev/null \
     || echo "CRON_SYNOPKG_PKGNAME='${SYNOPKG_PKGNAME}'" >> ${SYNOPKG_PKGDEST}/syno_package.vars
    grep "^CRON_SYNOPKG_PKGDEST" ${SYNOPKG_PKGDEST}/syno_package.vars > /dev/null \
     || echo "CRON_SYNOPKG_PKGDEST='${SYNOPKG_PKGDEST}'" >> ${SYNOPKG_PKGDEST}/syno_package.vars
  fi
  
  exit 0
}
 

start-stop-status.sh

#!/bin/sh

#--------CRASHPLAN start-stop-status script
#--------package maintained at pcloadletter.co.uk

if [ "${SYNOPKG_PKGNAME}" == "" ]; then
  #if this script has been invoked by cron then some Package Center vars are undefined
  source "`dirname $0`/../target/syno_package.vars"
  SYNOPKG_PKGNAME="${CRON_SYNOPKG_PKGNAME}" 
  SYNOPKG_PKGDEST="${CRON_SYNOPKG_PKGDEST}"
  CRON_LAUNCHED=True
fi

#Main variables section
DAEMON_USER="`echo ${SYNOPKG_PKGNAME} | awk '{print tolower($0)}'`"
DAEMON_HOME="/var/services/homes/${DAEMON_USER}"
OPTDIR="${SYNOPKG_PKGDEST}"
TEMP_FOLDER="`find / -maxdepth 2 -name '@tmp' | head -n 1`"
MANIFEST_FOLDER="/`echo $TEMP_FOLDER | cut -f2 -d'/'`/crashplan"
LOG_FILE="${SYNOPKG_PKGDEST}/log/history.log.0"
ENGINE_SCRIPT="CrashPlanEngine"
APP_NAME="CrashPlanService"
SCRIPTS_TO_EDIT="${ENGINE_SCRIPT}"
ENGINE_CFG="run.conf"
LIBFFI_SO_NAMES="5 6" #armada370 build of libjnidispatch.so is newer, and uses libffi.so.6
CFG_PARAM="SRV_JAVA_OPTS"
source ${OPTDIR}/install.vars

JAVA_MIN_HEAP=`grep "^${CFG_PARAM}=" "${OPTDIR}/bin/${ENGINE_CFG}" | sed -r "s/^.*-Xms([0-9]+)[Mm] .*$/\1/"`
SYNO_CPU_ARCH="`uname -m`"


case $1 in
  start)    
    #set the current timezone for Java so that log timestamps are accurate
    #we need to use the modern timezone names so that Java can figure out DST 
    SYNO_TZ=`cat /etc/synoinfo.conf | grep timezone | cut -f2 -d'"'`
    SYNO_TZ=`grep "^${SYNO_TZ}" /usr/share/zoneinfo/Timezone/tzname | sed -e "s/^.*= //"`
    grep "^export TZ" ${DAEMON_HOME}/.profile > /dev/null \
     && sed -i "s%^export TZ=.*$%export TZ='${SYNO_TZ}'%" ${DAEMON_HOME}/.profile \
     || echo export TZ=\'${SYNO_TZ}\' >> ${DAEMON_HOME}/.profile
    #this package stores the machine identity in the daemon user home directory
    #so we need to remove any old config data from previous manual installations or startups
    [ -d /var/lib/crashplan ] && rm -r /var/lib/crashplan

    #check persistent variables from syno_package.vars
    USR_MAX_HEAP=0
    if [ -f ${SYNOPKG_PKGDEST}/syno_package.vars ]; then
      source ${SYNOPKG_PKGDEST}/syno_package.vars
    fi
    USR_MAX_HEAP=`echo $USR_MAX_HEAP | sed -e "s/[mM]//"`

    #create or repair libffi symlink if a DSM upgrade has removed it
    for FFI_VER in ${LIBFFI_SO_NAMES}; do 
      if [ -e ${OPTDIR}/lib/libffi.so.${FFI_VER} ]; then
        if [ ! -e /lib/libffi.so.${FFI_VER} ]; then
          #if it doesn't exist, but is still a link then it's a broken link and should be deleted
          [ -L /lib/libffi.so.${FFI_VER} ] && rm /lib/libffi.so.${FFI_VER}
          ln -s ${OPTDIR}/lib/libffi.so.${FFI_VER} /lib/libffi.so.${FFI_VER}
        fi
      fi
    done

    #fix up some of the binary paths and fix some command syntax for busybox 
    #moved this to start-stop-status from installer.sh because Code42 push updates and these
    #new scripts will need this treatment too
    FIND_TARGETS=
    for TARGET in ${SCRIPTS_TO_EDIT}; do
      FIND_TARGETS="${FIND_TARGETS} -o -name ${TARGET}"
    done
    find ${OPTDIR} \( -name \*.sh ${FIND_TARGETS} \) | while IFS="" read -r FILE_TO_EDIT; do
      if [ -e ${FILE_TO_EDIT} ]; then
        #this list of substitutions will probably need expanding as new CrashPlan updates are released
        sed -i "s%^#!/bin/bash%#!${SYNOPKG_PKGDEST}/bin/bash%" "${FILE_TO_EDIT}"
        sed -i -r "s%(^\s*)nice -n%\1${SYNOPKG_PKGDEST}/bin/nice -n%" "${FILE_TO_EDIT}"
        sed -i -r "s%(^\s*)(/bin/ps|ps) [^\|]*\|%\1/bin/ps w \|%" "${FILE_TO_EDIT}"
        sed -i -r "s%\`ps [^\|]*\|%\`ps w \|%" "${FILE_TO_EDIT}"
        sed -i "s/rm -fv/rm -f/" "${FILE_TO_EDIT}"
        sed -i "s/mv -fv/mv -f/" "${FILE_TO_EDIT}"
      fi
    done

    #any downloaded upgrade script will usually have failed until the above changes are made so we need to
    #find it and start it, if it exists
    UPGRADE_SCRIPT=`find ${OPTDIR}/upgrade -name "upgrade.sh"`
    if [ -n "${UPGRADE_SCRIPT}" ]; then
      rm ${OPTDIR}/${ENGINE_SCRIPT}.pid
      SCRIPT_HOME=`dirname $UPGRADE_SCRIPT`

      #make CrashPlan log entry
      TIMESTAMP="`date +%D` `date +%I:%M%p`"
      echo "I ${TIMESTAMP} Synology repairing upgrade in ${SCRIPT_HOME}" >> ${LOG_FILE}

      mv ${SCRIPT_HOME}/upgrade.log ${SCRIPT_HOME}/upgrade.log.old
      chown -R ${DAEMON_USER} ${SYNOPKG_PKGDEST}
      su - ${DAEMON_USER} -s /bin/sh -c "cd ${SCRIPT_HOME} ; . upgrade.sh"
      mv ${SCRIPT_HOME}/upgrade.sh ${SCRIPT_HOME}/upgrade.sh.old
      exit 0
    fi

    #updates may also overwrite our native binaries
    if [ "${SYNO_CPU_ARCH}" == "x86_64" ]; then
      cp ${SYNOPKG_PKGDEST}/bin/synology-x86-glibc-2.4-shim.so ${OPTDIR}/lib
    else    
      cp -f ${SYNOPKG_PKGDEST}/bin/libjtux.so ${OPTDIR}
      cp -f ${SYNOPKG_PKGDEST}/bin/jna-3.2.5.jar ${OPTDIR}/lib
      cp -f ${SYNOPKG_PKGDEST}/bin/libffi.so.* ${OPTDIR}/lib
    fi

    #set appropriate Java max heap size
    RAM=$((`free | grep Mem: | sed -e "s/^ *Mem: *\([0-9]*\).*$/\1/"`/1024))
    if [ $RAM -le 128 ]; then
      JAVA_MAX_HEAP=80
    elif [ $RAM -le 256 ]; then
      JAVA_MAX_HEAP=192
    elif [ $RAM -le 512 ]; then
      JAVA_MAX_HEAP=384
    #CrashPlan's default max heap is 512MB
    elif [ $RAM -gt 512 ]; then
      JAVA_MAX_HEAP=512
    fi
    if [ $USR_MAX_HEAP -gt $JAVA_MAX_HEAP ]; then
      JAVA_MAX_HEAP=${USR_MAX_HEAP}
    fi   
    if [ $JAVA_MAX_HEAP -lt $JAVA_MIN_HEAP ]; then
      #can't have a max heap lower than min heap (ARM low RAM systems)
      JAVA_MAX_HEAP=${JAVA_MIN_HEAP}
    fi
    sed -i -r "s/(^${CFG_PARAM}=.*) -Xmx[0-9]+[mM] (.*$)/\1 -Xmx${JAVA_MAX_HEAP}m \2/" "${OPTDIR}/bin/${ENGINE_CFG}"
    
    #disable the use of the x86-optimized external Fast MD5 library if running on ARM and QorIQ CPUs
    #seems to be the default behaviour now but that may change again
    if [ "${SYNO_CPU_ARCH}" != "x86_64" ]; then
      grep "^${CFG_PARAM}=.*c42\.native\.md5\.enabled" "${OPTDIR}/bin/${ENGINE_CFG}" > /dev/null \
       || sed -i -r "s/(^${CFG_PARAM}=\".*)\"$/\1 -Dc42.native.md5.enabled=false\"/" "${OPTDIR}/bin/${ENGINE_CFG}"
    fi

    #move the Java temp directory from the default of /tmp
    grep "^${CFG_PARAM}=.*Djava\.io\.tmpdir" "${OPTDIR}/bin/${ENGINE_CFG}" > /dev/null \
     || sed -i -r "s%(^${CFG_PARAM}=\".*)\"$%\1 -Djava.io.tmpdir=${TEMP_FOLDER}\"%" "${OPTDIR}/bin/${ENGINE_CFG}"

    #reset ownership of all files to daemon user, so that manual edits to config files won't cause problems
    chown -R ${DAEMON_USER} ${SYNOPKG_PKGDEST}
    chown -R ${DAEMON_USER} ${DAEMON_HOME}    

    #now edit the XML config file, which only exists after first run
    if [ -f ${SYNOPKG_PKGDEST}/conf/my.service.xml ]; then

      #allow direct connections from CrashPlan Desktop client on remote systems
      #you must edit the value of serviceHost in conf/ui.properties on the client you connect with
      #users report that this value is sometimes reset so now it's set every service startup 
      sed -i "s/<serviceHost>127\.0\.0\.1<\/serviceHost>/<serviceHost>0\.0\.0\.0<\/serviceHost>/" "${SYNOPKG_PKGDEST}/conf/my.service.xml"
      
      #this change is made only once in case you want to customize the friends' backup location
      if [ "${MANIFEST_PATH_SET}" != "True" ]; then

        #keep friends' backup data outside the application folder to make accidental deletion less likely 
        sed -i "s%<manifestPath>.*</manifestPath>%<manifestPath>${MANIFEST_FOLDER}/backupArchives/</manifestPath>%" "${SYNOPKG_PKGDEST}/conf/my.service.xml"
        echo "MANIFEST_PATH_SET=True" >> ${SYNOPKG_PKGDEST}/syno_package.vars
      fi

      #since CrashPlan version 3.5.3 the value javaMemoryHeapMax also needs setting to match that used in bin/run.conf
      sed -i -r "s%(<javaMemoryHeapMax>)[0-9]+[mM](</javaMemoryHeapMax>)%\1${JAVA_MAX_HEAP}m\2%" "${SYNOPKG_PKGDEST}/conf/my.service.xml"
    else
      echo "Wait a few seconds, then stop and restart the package to allow desktop client connections." > "${SYNOPKG_TEMP_LOGFILE}"
    fi
    if [ "${CRON_LAUNCHED}" == "True" ]; then
      [ -e /var/packages/${SYNOPKG_PKGNAME}/enabled ] || touch /var/packages/${SYNOPKG_PKGNAME}/enabled
    fi

    #delete any stray Java temp files
    find /tmp -name "jna*.tmp" -user ${DAEMON_USER} | while IFS="" read -r FILE_TO_DEL; do
      if [ -e ${FILE_TO_DEL} ]; then
        rm ${FILE_TO_DEL}
      fi
    done

    #increase the system-wide maximum number of open files from Synology default of 24466
    echo "65536" > /proc/sys/fs/file-max

    #raise the maximum open file count from the Synology default of 1024 - thanks Casper K. for figuring this out
    #http://support.code42.com/Administrator/3.6_And_4.0/Troubleshooting/Too_Many_Open_Files
    ulimit -n 65536

    if [ "${SYNO_CPU_ARCH}" == "x86_64" ]; then
      #Intel synos running older DSM need rwojo's glibc version shim for inotify support
      #https://github.com/wojo/synology-x86-glibc-2.4-shim
      GLIBC_VER="`/lib/libc.so.6 | grep -m 1 version | sed -r "s/^[^0-9]*([0-9].*[0-9])\,.*$/\1/"`"
      if [ "${GLIBC_VER}" == "2.3.6" ]; then
        su - ${DAEMON_USER} -s /bin/sh -c "LD_PRELOAD=${SYNOPKG_PKGDEST}/lib/synology-x86-glibc-2.4-shim.so ${OPTDIR}/bin/${ENGINE_SCRIPT} start"
      else
        su - ${DAEMON_USER} -s /bin/sh -c "${OPTDIR}/bin/${ENGINE_SCRIPT} start"
      fi
    else
      su - ${DAEMON_USER} -s /bin/sh -c "${OPTDIR}/bin/${ENGINE_SCRIPT} start"
    fi
    exit 0
  ;;

  stop)
    su - ${DAEMON_USER} -s /bin/sh -c "${OPTDIR}/bin/${ENGINE_SCRIPT} stop"
    if [ "${CRON_LAUNCHED}" == "True" ]; then
      [ -e /var/packages/${SYNOPKG_PKGNAME}/enabled ] && rm /var/packages/${SYNOPKG_PKGNAME}/enabled
    fi
    exit 0
  ;;

  status)
    PID=`/bin/ps w | grep "app=${APP_NAME}" | grep -v grep | awk '{ print $1 }'`
    if [ -n "$PID" ]; then
      exit 0
    else
      exit 1
    fi
  ;;

  log)
    echo "${LOG_FILE}"
    exit 0
  ;;
esac
 

Changelog:

  • 0027 Fixed open file handle limit for very large backup sets (ulimit fix)
  • 0026 Updated all CrashPlan clients to version 3.6.3, improved handling of Java temp files
  • 0025 glibc version shim no longer used on Intel Synology models running DSM 5.0
  • 0024 Updated to CrashPlan PROe 3.6.1.4 and added support for PowerPC 2010 Synology models running DSM 5.0
  • 0023 Added support for Intel Atom Evansport and Armada XP CPUs in new DSx14 products
  • 0022 Updated all CrashPlan client versions to 3.5.3, compiled native binary dependencies to add support for Armada 370 CPU (DS213j), start-stop-status.sh now updates the new javaMemoryHeapMax value in my.service.xml to the value defined in syno_package.vars
  • 0021 Updated CrashPlan to version 3.5.2
  • 0020 Fixes for DSM 4.2
  • 018 Updated CrashPlan PRO to version 3.4.1
  • 017 Updated CrashPlan and CrashPlan PROe to version 3.4.1, and improved in-app update handling
  • 016 Added support for Freescale QorIQ CPUs in some x13 series Synology models, and installer script now downloads native binaries separately to reduce repo hosting bandwidth, PowerQUICC PowerPC processors in previous Synology generations with older glibc versions are not supported
  • 015 Added support for easy scheduling via cron – see updated Notes section
  • 014 DSM 4.1 user profile permissions fix
  • 013 implemented update handling for future automatic updates from Code 42, and incremented CrashPlanPRO client to release version 3.2.1
  • 012 incremented CrashPlanPROe client to release version 3.3
  • 011 minor fix to allow a wildcard on the cpio archive name inside the main installer package (to fix CP PROe client since Code 42 Software had amended the cpio file version to 3.2.1.2)
  • 010 minor bug fix relating to daemon home directory path
  • 009 rewrote the scripts to be even easier to maintain and unified as much as possible with my imminent CrashPlan PROe server package, fixed a timezone bug (tightened regex matching), moved the script-amending logic from installer.sh to start-stop-status.sh with it now applying to all .sh scripts each startup so perhaps updates from Code42 might work in future, if wget fails to fetch the installer from Code42 the installer will look for the file in the public shared folder
  • 008 merged the 14 package scripts each (7 for ARM, 7 for Intel) for CP, CP PRO, & CP PROe – 42 scripts in total – down to just two! ARM & Intel are now supported by the same package, Intel synos now have working inotify support (Real-Time Backup) thanks to rwojo’s shim to pass the glibc version check, upgrade process now retains login, cache and log data (no more re-scanning), users can specify a persistent larger max heap size for very large backup sets
  • 007 fixed a bug that broke CrashPlan if the Java folder moved (if you changed version)
  • 006 installation now fails without User Home service enabled, fixed Daylight Saving Time support, automated replacing the ARM libffi.so symlink which is destroyed by DSM upgrades, stopped assuming the primary storage volume is /volume1, reset ownership on /var/lib/crashplan and the Friends backup location after installs and upgrades
  • 005 added warning to restart daemon after 1st run, and improved upgrade process again
  • 004 updated to CrashPlan 3.2.1 and improved package upgrade process, forced binding to 0.0.0.0 each startup
  • 003 fixed ownership of /volume1/crashplan folder
  • 002 updated to CrashPlan 3.2
  • 001 initial public release
 
 

2,576 thoughts on “CrashPlan packages for Synology NAS”

    1. olije

      Forget about Crashplan in combination with the Synology. I am moving over to iDrive, which is even cheaper than CP up to 1 TB and that is enough for me. I am fed up with spending so much time on this issue and this forum which seems to be totally forgotten by its creator, Patters.

      1. B. Goodman

        Good theory. I subscribed to iDrive until I found out they were installing a 10 year-old vulnerable DLL onto my system to support their software. At least that was true as of early 2013. If I recall correctly, they also store your password in a simple text file on your computer with a 2-letter substitution (A becomes C, B becomes D).

        All I’m saying is that you should consider security when you’re choosing any “cloud” solution.

      2. Marcus

        I beg to differ: I’m grateful for the work Patters is doing on this package, but he cannot do support on this. He can offer some guidance on issues, but it is up to the community to do the support since CP will not help us.

        On a side note: I have used iDrive a long time ago and moved away from it since then it didn’t offer dedupe. So if you moved or renamed a file you had to upload it again. This might have changed, but you should check!

      3. Flavio Endo

        Fully agree with Marcus.
        Patters did this for the community and has no obligation to solve everyone’s issues. He and the community try to help each other as much as they can.
        By the way, he invests a lot of time in this and gets some cash donated by the community. Have you done that already?

        Thanks!

      4. olije

        Hi Flavio,

        Yes, as stated in an earlier post, I actually made a donation. Obviously I am not attacking Patters, I am only stating an observation and concluding that indeed support is very limited if not non-existent. And of course it is not fair to expect that from a single person like Patters, I fully agree and understand that. However, all of this does lead me to the conclusion that CP in combination with Synology is not a reliable solution, exactly because it is not supported properly.

        BTW, thanks guys for the feedback on iDrive, I will definitely check it out before I invest in that. Ultimately I would love to stay with this CP solution if only it would become reliable. But there is no one (and myself the least!) who has sufficient knowledge to solve the many problems and their root cause(s).

  1. OecherWolke

    One thing patters could do that would help everybody a lot: use different software for the website. A kind of “forum” template would be great, because this endless thread isn’t much help. Some users could create some “how tos”, which would help other users a lot.

    Another thing, olije… we run our business with CrashPlanPROe and use it a lot on different Synology systems. We help our customers in using that package and we have found a solution for every error we were faced with in the last 1-2 years.
    That is the difference between using CrashPlan’s $4 package and a professional service. ;)
    But why don’t You send your CrashPlan logs to CrashPlan? The Synology is a Linux machine, so the software should work. Because it is a slow Linux machine without much RAM You need to make some compromises (smaller backup jobs), but CrashPlan (the company) should provide some help in finding the reason for Your problems.
    We have customers saving about 10TB with a DS1812+ with 4GB of RAM – we are talking about ≈ 2 million files – and this is running fine and reliably.

    1. bertxl

      I think a lot of people are indeed looking in the wrong place for their support. Most of the issues are caused by the underlying CP Linux application and not by the Synology package. The support should come from the CP helpdesk instead of this forum thread. I have already contacted the CP helpdesk with problems, and if you explain the situation as a Linux machine, they will help you configure and optimize your NAS to run without issues.
      In the end the support guy did tell me that my configuration with the headless package is not really supported by CP, but he solved my issue anyway.

      1. bertxl

        olije, I just said you should contact code42 support to solve your issues, not copy what I do :-)

        I have a DS1513+ with DSM 5.0-4493. I back up 226K files / 950GB.
        The issue I had was that it stopped backing up because the number of files and their size was too big. I had already increased the memory of the NAS from 2 to 4GB, but CP support told me I also had to increase the memory assigned to the CP process. What I did was:
        start a ssh as root
        cd /volume1/@appstore/CrashPlan
        vi syno_package.vars
        Update a line to USR_MAX_HEAP=1024M

        I’ve also contacted Code42 support for another issue I had, when CP kept complaining that certain Windows files (Thumbs.db) were locked and the backup did not continue. They were able to solve this after I sent them my logfiles.

      2. John B.

        I’ve excluded those types of files (Thumbs.db, desktop.ini, @eaDir, #recycle, .crdownload).
        Did they solve the problem by excluding the files, or did they have other methods?

      3. bertxl

        I just deleted the files that caused the errors and that solved the problem. But they helped me finding which were the files that caused the issue by analysing the log files.

    2. olije

      Hi OecherWolke,
      Thanks for your elaborate replies, they are helpful. Reading your last post it dawned on me that my biggest frustration is actually not knowing what will work and what won’t. Obviously my current DS213+ is not good enough to run CP. I have split my backup sets as stated below, which shows that imo I do not have exceptional backup sets. Perhaps it has something to do with the number of files in one directory. But then I am again merely guessing, without understanding it fundamentally.

      Set     GB  #files  Comments
      1      2.4    3127  no issues
      2     17.1    7527  hangs
      3    337.3   28231  no issues
      4     35.2    1440  hangs (iPhoto library)
      5      2.6      24  no issues
      6    304.5     856  no issues
      7     22.4   22876  no issues
      8      1.0     118  no issues
      9     78.8    7193  no issues
      10    64.7   15317  hangs (timemachine backups)
      Tot  866    86709

      So, the big question is: what hardware should I buy to be able to use CP Home? (CP PRO I find too expensive to consider for home use.) I am willing to invest in hardware, but will an upgrade to e.g. a DS214play (Intel Atom, 1 GB memory – non-expandable) be sufficient?

      Cheers.

      1. davidgelb

        olije, I would still say to stay with the DS you have and get a Mac Mini to handle the CP backup. So much easier and more reliable.

      2. olije

        Hi David. Yes, I read your mail and appreciate your enthusiasm for the Mac Mini solution. However, it seems like such an expensive and redundant solution. I would rather have a new DS (and sell the old one) that can do the job, than having to buy expensive new (or almost new) hardware that seems highly over-qualified for such a simple job. Hence my hesitation to go this way…

      3. John B.

        If I were you, then I’d remove the “no issues” for now and split one of the “hangs” into even smaller parts. Eventually you’d find your problem, or perhaps notice that CP suddenly completes without issues (e.g. the split reduced memory requirements).

        In any case, I have had the normal version of CP running “forever” without problems on an old DS411+II (Intel Atom D525 Dualcore (2C/4T) 1.8GHz x86 Processor 64-bit@DDR2, 1GB of RAM according to http://forum.synology.com/wiki/index.php/What_kind_of_CPU_does_my_NAS_have).
        I did upgrade it to 2GB, but that was more to prevent any potential memory issues; not to solve any that I had. Just watching DSM, it’s currently backing up with CPU=3/27% (backing up vs analyzing) and RAM=28%.

        I have 6 sets with compression=off, encryption=on, watch=off, open=on where de-dup changes per set:
        * 25,735 files (80.5 GB)
        * 5,383 files (33.6 GB)
        * 292 files (8.9 GB)
        * 363 files (1.30 TB)
        * 430 files (740.7 TB)
        * 1,861 files (589.8 GB)

      4. olije

        Hi John,

        Seems like your suggestion worked for now. I removed the Timemachine backup set, which consisted of 8k files in one single directory. That seemed to choke CP, leaving me to conclude that it is especially a file number issue rather than a file size issue, although it might also be a combination of the two…

      5. John B.

        It may be a combination; as you can see from my backup set list, the first set is 25k files, but not particularly large files, and the fourth set is over 1 TB, but a small number of files.
        Perhaps it’s possible for you to re-arrange the files to satisfy CP, although a combination that works today may then not work tomorrow. That would be a shame.
        I may be lucky with my sets, as CP is rock solid on my setup, and has been for the longest time.

      6. John B.

        Perhaps there is a conflict, so CP should be scheduled to perform backups of this folder when Timemachine is *not* using it? Perhaps “Watch file system in real-time” and “Back up open files” should both be off in this case? Experiment.
        Also, you may want to try turning off compression, as this will – crucially – reduce memory consumption by just using more bandwidth.

  2. Brad

    Has anyone taken the step to upgrade to DSM 5.1 Beta, and if so does this CrashPlan package still work after the update? I’d like to try the Beta but can’t afford to have CrashPlan break and don’t have time to debug things if it does. FWIW I’m on a DS412+ upgraded to 2GB of RAM. I’m currently on the latest version of DSM 5.0 (5.0-4493 Update 5).

  3. Cosmicmartini

    I share some of your concern regarding the viability of this CP/Synology hack that Patters has been so diligently working on.

    DSM 5.1 Beta has been issued and now offers backup links to Microsoft Azure and Amazon Glacier embedded into the DSM software.
    Does anyone have experience with any of these backup services?

    Thanks.

    1. Fred

      Hi,

      Well, I had a look at the prices and unfortunately CP is still the cheapest, and by far!

      Both Glacier and Azure are priced per GB per month: Glacier is ~0.01 USD per GB/month and Azure 0.023.
      So as soon as you have > 1GB to back up, it becomes much more expensive than CP :(

      Which is still as buggy as ever on my DS214play. It keeps scanning and doing nothing, then losing the backups and re-sending everything… hard drives constantly working for nothing :'(

  4. Richard

    Personally I think CrashPlan is being shortsighted. If I were them I’d get in contact with Patters and hire him to keep the package updated. I think if they did that and put out an official headless package for the Synology they could even adjust the cost. For a headless client I could see three tiers: $ for up to 100GB, $$ for 100-1000GB and $$$ for 1TB+.

    I’m personally holding off on DSM updates to ensure that I’ve got a working CrashPlan installation. Right now that’s more important to me than being on the latest and greatest.

  5. RAJ

    Despite my better judgment, I decided to update from DSM 5.0-4482. My setup was as follows:

    Synology DS1511+ (upgraded to 3GB)
    5 Drives (2-1TB drives, 2-2TB drives, 1-3TB drive)
    DSM 5.0-4482
    Patters’ CrashPlan Version 3.6.3-0027
    Patters’ Java SE Embedded 8 Version 1.8.0_0132-0023
    CrashPlan + 3 Version 3.6.3

    MOD: syno_package.vars – changed “#USR_MAX_HEAP=512M” to “USR_MAX_HEAP=2560M”

    BackupSet 1 : 1,254 – 1.2 G
    BackupSet 2 : 6,172 – 5.9 G
    BackupSet 3 : 5,819 – 8.3 G
    BackupSet 4 : 4,909 – 10.8 G
    BackupSet 5 : 3,668 – 11.1 G
    BackupSet 6 : 6,225 – 50.6 G
    BackupSet 7 : 6,363 – 43.5 G
    BackupSet 8 : 6,608 – 51.6 G
    BackupSet 9 : 43,688 – 263.8 G
    BackupSet 10: 57,171 – 332.5 G
    BackupSet 11: 71,275 – 433.0 G
    BackupSet 12: 85,611 – 458.2 G
    BackupSet 13: 89,630 – 501.1 G
    BackupSet 14: 48,127 – 273.3 G
    BackupSet 15: 1,007 – 202.2 G
    BackupSet 16:197,763 – 277.5 G
    BackupSet 17:159,886 – 124.3 G

    Up until yesterday… backups were rock stable.

    Decided to upgrade to latest DSM 5.0-4493 Update 5.

    Long story short… after trial and error with Java 7 vs Java 8, adoption, and many reboots of the server… it looks like I’m back to a stable state, with one peculiar exception.

    First my current setup:

    DSM 5.0-4493 Update 5
    Patters’ CrashPlan Version 3.6.3-0027
    Patters’ Java SE Embedded 7 Version 1.7.0_6-0027
    CrashPlan + 3 Version 3.6.3

    Here’s the peculiar part. Things look stable: when I go to my PC and click on the CrashPlan GUI, which is directed at my headless DS1511+… everything pops up fine, statuses are fine, backups seem to be working… everything looks peachy.

    However, when I go to the Package Center on my DS1511+, the status of the CrashPlan app is “stopped”. But I know it is actually running based on the GUI, plus the fact that my memory usage is pegged at 85-90% used (which was normal for me), and the fact that I could follow new files being backed up to CrashPlan via my mobile app.

    I don’t know why Package Center thinks CrashPlan is “stopped”, when all indications are to the contrary.

    Just thought I’d share… good luck all.

  6. Serg

    To OecherWolke: Thank you. I submitted a request to Code42 support, as you suggested, and they wrote just the same as you did (probably some problem with rights), but nothing specific.

    Finally, I got tired of all this. Today I reset my Diskstation to factory settings, restored data and some settings (but not users) from local backup, and now my CrashPlan is working again. It can seem a bit much for restoring one program but, in fact, I spent less time resetting and restoring the syno than I had spent trying to start CP.

    If Patters does not invent something more stable, I will consider another backup provider (iDrive looks good now) when my subscription to CrashPlan is over.

    And a big THANK YOU goes to Patters for his great job. When Code42 does not want to support Synology, it is just fantastic to find a way of installing CP in 5 minutes and without knowing anything about Linux.

    Best regards,
    Serg

    1. patters (Post author)

      My package gets CrashPlan up and running for small backup sets given the limited amount of RAM on the platform. This is stated in the blog post.

      I think a lot of you have overambitious expectations of what this can achieve. Backing up multiple terabytes is clearly too much. I tried to use CrashPlan PROe on a real server for a business requirement and found that it couldn’t seed the backup of 2TB of files without randomly crashing or getting stuck doing maintenance on Windows 2008 R2! I gave up and used another service. My opinion is that CrashPlan itself is the issue – not running on the Synology platform in particular.

      If you think that I am able to work on fixing CrashPlan’s stability – I’m not. I just get the Java application running as a daemon in the way that DSM does things. All the stability issues stem from the software, or rather the lack of physical RAM. Dozens of pages of comments here about precisely which order to do things in, or suspicions that a particular DSM update has broken it are mainly conjecture. My own DS111 has been reliably backing up around 80GB the entire time these discussions have been going on, through each and every DSM upgrade.

      It’s been said many times, but dividing the data down into smaller separate sets on different backup schedules is the only viable mitigation for the problems, since RAM is the issue. Anyway, DSM 5.1 will have sync to OneDrive as a feature which will hopefully be a little less resource-intensive than CrashPlan, and may make it completely redundant.

      1. olije

        Hi Patters,

        Very clear answer, and I like your remark about conjecture. That is exactly what it is, although all with the best of intentions! But the lack of consistency across all the experiences/hardware configs etc. and the inability to reproduce typical errors prove you’re right. And a statement like this is exactly what this discussion needed, in my opinion.

        For me, it is clear that this will not be the way forward to back up my 800 GB. Your remark about OneDrive, however, intrigues me. As this is more of a cloud drive solution, I wonder how to implement it. I have the 5.1 beta running now but, given the way the Cloud Sync app works, it is only possible to sync one of the root folders with a given service (Google Drive, Dropbox, OneDrive, etc.). So how would you suggest using this service as a backup for the various root folders on a Synology? Would be interesting to hear your expert opinion. Thanks!

        Cheers, Olije

      2. OecherWolke

        Hi patters, you are right: some people think that Your package is an “official” installation. No, it is a kind of hack (but a real cool one).
        Synology knows of Your package, but I think they don’t support it officially: because of the limitations of such NAS systems, they don’t want to cope with the complaints.
        With a small i3 CPU and 4/8GB of RAM (the bigger Synology machines) it could be more interesting. But a Synology is more for client backup than server backup. Because more and more data is living on servers, CrashPlan also needs to “rethink” their app. The next major update (4.0) will be a binary application, not a Java app. This (I think) will give CrashPlan much more power on multicore CPUs, so bigger backups will work too. But we don’t know if it will still be a split application (GUI/service) and if it will run headless (I think it will).
        But regarding the limits… we have customers with 6, 8, 10 TB on a DS1812+ or DS1813+, which is working well if You keep an eye on the limits of the Synology. A “well working” CrashPlan needs some resources (RAM) and power (CPU) to fulfill its job at its best.

  7. Dustin

    Anyone have any thoughts on a constant disconnect from the Windows client?

    I’ve removed everything and reinstalled (both syno side and PC side) and nothing can keep this connection working…

  8. Bruno DUPUY

    CrashPlan on the syno works fine for me now and “survives” the latest DSM 5.0 updates (DSM 5.0 u1, u2 …).
    I had a lot of crashes, CrashPlan stopped and so on. Now everything is OK. My recipe when it does not work anymore:
    stop the CrashPlan package
    reboot
    uninstall the Java package
    reboot
    install the Java package (I use patters’ Java 6 package)
    reboot
    restart CrashPlan

    I use 4 backup sets from 110k down to 10k files and ~1GB. 2 destinations (CrashPlan servers and another network drive). The syno is a target for 3 small friends’ backups (50GB). The syno is a DS1010+ with 3GB RAM.

  9. Ryan Carver

    I’d hoped to install this on a 414j, which has the `comcerto2k` architecture. Any chance the package will support that soon?

    1. John B.

      I am using the Synology Java Manager, so I had to reinstall jdk-7u67-linux-i586.gz again after the DSM 5.1 upgrade, but it now runs on DSM 5.1 on my DS411+II.

      Read the Known Issues section at the bottom of the page:

      https://www.synology.com/en-global/support/beta_dsm_5_1

      Personally I expected problems with #5 and #6 as I have SynoCommunity packages installed, but I have not experienced any issues so far.

  10. jolimp

    Hi,
    I want to change the file /volume1/@appstore/CrashPlan/syno_package.vars
    or this one:
    SRV_JAVA_OPTS="-Dfile.encoding=UTF-8 -Dapp=CrashPlanService -DappBaseName=CrashPlan -Xms128m -Xmx1536m …

    Which one overrides the other? Do they play the same role?
    Thx

  11. James

    Hello. I’m trying to set up CrashPlan between two Synology devices (DS411+ and a DS412+) that are on separate networks. I can telnet to port 4242 from both networks (to the opposite Synology device) and get the proper response, something like:
    Escape character is ‘^]’.
    cA-18782|com.code42.messaging.security.SecurityProviderReadyMessage¶¢” /G5½h%n±á7>£«hA©

    The backup job just sits at “Waiting for connection”.

    I don’t see anything in the error logs at /volume1/@appstore/CrashPlan/log. I do see some errors in the service.log.0 about trying to connect to reflector.crashplan.com.

    I don’t see it trying to connect to the configured destination at all, but not even sure where I should see this. On both sides, the opposite synology device shows as grey rather than green.

    Any suggestions are appreciated.

    James

  12. John

    Has anyone had any luck installing this package on the DS415+? It’s a new unit, and Synology’s first with a quad-core Atom processor. The CrashPlan package isn’t showing up in Package Center – I suspect because the new architecture isn’t recognised as being compatible? Update to the package required?

    1. patters (Post author)

      Can you log in via SSH and run “uname -a” and reply here with the output? I will need to add this new Synology architecture identifier to the packages. Thanks

      1. John

        Hi Patters.

        Sure – here you go;

        Linux nas-lon 3.2.40 #4519 SMP Fri Sep 19 13:43:25 CST 2014 x86_64 GNU/Linux synology_avoton_415+

        Thanks!

      2. patters (Post author)

        Thanks. Also, so I can update the Synology Which CPU Wiki page, can you run “cat /proc/cpuinfo” as well please? Cheers!

      3. John

        Hi Patters,

        It actually comes out as the following on /proc/cpuinfo;

        model name : Intel(R) Atom(TM) CPU C2538 @ 2.40GHz

        I’m still not seeing those packages in the package centre – have tried refreshing etc. Should I be?

        -J

    1. John B.

      @jolimp: as I understand it, the “CrashPlan” package is actually a collection of files in Patters’ repository. Each file is linked to a specific NAS hardware architecture.

      Here is a link to show you what types of architectures you can expect to see (note the DS415+ is new and not in this list):

      http://forum.synology.com/wiki/index.php/What_kind_of_CPU_does_my_NAS_have

      and how these “package” files are created per hardware architecture:

      http://forum.synology.com/wiki/index.php/Synology_package_files

      When you browse Patters’ repository, you will only see the files – and therefore “packages” – that are valid for *your* NAS hardware architecture.

      Going on from there, if a new NAS hardware architecture is introduced, such repositories will simply be empty until specific files are created for this architecture.
      This is what John is hinting at in his post.

      (note: I’m not sure how up-to-date the links are, but the principle still stands)
      (also: you can hit the “reply” link beneath a post to reply to that post directly)

  13. jolimp

    @James
    Are they both connected to the internet?
    4242 is the port for the data exchange.
    Now, to have a view of IP addresses and other parameters, I suppose they exchange data with Code42 servers (internet needed). It should be this.

  14. Matt Durkin

    All,
    for those having problems after adopting an old archive – specifically, CrashPlan simply stops after you try to start it – could I recommend you take a look in /volume1/@appstore/CrashPlan/conf/my.service.xml
    I know this has previously been suggested as incorrect advice, but after spending an hour on this, I found that after adoption, the IP address of my old server appeared in that file in the section. I also found reference to it in one of the log files (service.log.0), which is what made me suspicious. This didn’t make sense as that old server is gone! I edited this entry to the correct IP of my new server, and the start/stop problem magically goes away.
    I had also altered the stack size, but it had no impact. My archive isn’t really that big.

    1. patters (Post author)

      Hi Matt – can you take a look back over your submission? I think WordPress dropped the section name you put in a tag (it doesn’t support those in comments). You could use quotes or brackets instead. Thanks!

  15. Matt Durkin

    All (repost with more info),
    for those having problems after adopting an old archive – specifically, CrashPlan simply stops after you try to start it – could I recommend you take a look in /volume1/@appstore/CrashPlan/conf/my.service.xml
    I know this has previously been suggested as incorrect advice, but after spending an hour on this, I found that after adoption, the IP address of my old server appeared in that file in the “serviceUIConfig” section. I also found reference to it in one of the log files (service.log.0), which is what made me suspicious. This didn’t make sense as that old server is gone! I edited this entry to the correct IP of my new server, and the start/stop problem magically goes away.
    I had also altered the stack size, but it had no impact. My archive isn’t really that big.

    One other thought I had, somewhat in retrospect. I originally had my old data store online (running crashplan) alongside the new one. I copied the data and adopted the old server’s identity. This seemed to work and survive reboots, but I had noticed odd unexplained disk activity on the old server that confused me. It seems that somehow the old server was still being accessed. Only when I turned it off did the new one stop loading.

    The log file where I found the IP address of the old server was “service.log.0”.

    service.log.0:[09.30.14 23:19:19.143 WARN main com.backup42.service.CPService ] >>>>> CPService is already running on 192.168.1.10:4243 <<<<<

    I was looking for 'WARN' logs and was quite surprised to see my new server logging an error with the IP address of the old server! Incidentally, the old server wasn't running at this point, rather confusingly.

    I definitely did not configure this on the new server, so can only imagine the adopt process is somehow responsible. I also suspect some people have the "serviceUIConfig" set to 0.0.0.0 rather than a specific IP so may never come across this. I can only imagine that this is a crashplan bug, or maybe feature?

    I hope this post is helpful in helping to solve this problem for some other people!

