CrashPlan packages for Synology NAS

UPDATE – The instructions and notes on this page apply to all three versions of the package hosted on my repo: CrashPlan, CrashPlan PRO, and CrashPlan PROe.

CrashPlan is a popular online backup solution which supports continuous syncing. With it, your NAS becomes even more resilient – it could be stolen or destroyed and you would still have your data. Whilst you can pay a small monthly charge for a storage allocation in the Cloud, one neat feature CrashPlan offers is for individuals to collaboratively back up their important data to each other – for free! You could install CrashPlan on your laptop and have it continuously protecting your documents to your NAS, even whilst away from home.

[Image: CrashPlan-Windows]

CrashPlan is a Java application, and one that’s typically difficult to install on a NAS – therefore an obvious candidate for me to simplify into a package, given that I’ve made a few others. I tried and failed a few months ago, getting stuck at compiling the Jtux library for ARM CPUs (the Oracle Java for Embedded doesn’t come with any headers).

I noticed a few CrashPlan setup guides linking to my Java package, and decided to try again based on these: Kenneth Larsen’s blog post, the Vincesoft blog article for installing on ARM processor Iomega NAS units, and this handy PDF document which is a digest of all of them, complete with download links for the additional compiled ARM libraries. I used the PowerPC binaries Christophe had compiled on his chreggy.fr blog, so thanks go to him. I wanted to make sure the package didn’t require the NAS to be bootstrapped, so I picked out the few generic binaries that were needed (bash, nice and cpio) directly from the Optware repo.

UPDATE – For version 3.2 I also had to identify and then figure out how to compile Tim Macinta’s fast MD5 library, to fix the supplied libmd5.so on ARM systems (CrashPlan only distributes libraries for x86). I’m documenting that process here in case more libs are required in future versions. I identified it from the error message in log/engine_error.log and by running objdump -x libmd5.so. I could see that the same Java_com_twmacinta_util_MD5_Transform_1native function mentioned in the error was present in the x86 lib but not in my compiled libmd5.so from W3C Libwww. I took the headers from an install of OpenJDK on a regular Ubuntu desktop. I then used the Linux x86 source from the download bundle on Tim’s website – the closest match – and compiled it directly on the syno using the command line from a comment in another version of that source:
gcc -O3 -shared -I/tmp/jdk_headers/include /tmp/fast-md5/src/lib/arch/linux_x86/MD5.c -o libmd5.so

Aside from the challenges of getting the library dependencies fixed for ARM and QorIQ PowerPC systems, there was also the matter of compliance – Code 42 Software’s EULA prohibits redistribution of their work. I had to make the syno package download CrashPlan for Linux (after the end user agrees their EULA), then I had to write my own script to extract this archive and mimic their installer, since their installer is interactive. It took a lot of slow testing, but I managed it!

[Image: CPPROe package info]

My most recent package version introduces handling of the automatic updates which Code 42 sometimes publish to the clients. This proved quite a challenge to get working, as testing was very laborious. I can confirm that it worked with the update from CrashPlan PRO 3.2 to 3.2.1, and from CrashPlan 3.2.1 to 3.4.1:

[Image: CrashPlan-update-repair]


Installation

  • This package is for Marvell Kirkwood, Marvell Armada 370/XP, Intel and Freescale QorIQ/PowerQUICC PowerPC CPUs only, so please check which CPU your NAS has. It will work on an unmodified NAS, no hacking or bootstrapping required. Note that the older PowerQUICC PowerPC models are supported only when running DSM 5.0. It is technically possible to run CrashPlan on older DSM versions, but it requires chroot-ing to a Debian install; Christophe from chreggy.fr has recently released packages to automate this.
  • In the User Control Panel in DSM, enable the User Homes service.
  • Add my package repository URL, which is http://packages.pcloadletter.co.uk, in Package Center under Settings -> Package Sources, then install the package directly from Package Center in DSM.
  • You will need to install either one of my Java SE Embedded packages first (Java 6 or 7). Read the instructions on that page carefully too.
  • If you previously installed CrashPlan manually using the Synology Wiki, you can find uninstall instructions here.
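If you are unsure which CPU your model has, a quick check over SSH is to ask the kernel (the values in the comment are typical examples, not an exhaustive mapping):

```shell
# report the machine hardware name
uname -m
# typical output: armv5tel (Kirkwood), armv7l (Armada 370/XP),
# ppc (QorIQ/PowerQUICC), i686 or x86_64 (Intel)
```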
 

Notes

  • The package downloads the CrashPlan installer directly from Code 42 Software, following acceptance of their EULA. I am complying with their wish that no one redistributes it.
  • CrashPlan is installed in headless mode – backup engine only. This is configured by a desktop client, but operates independently of it.
  • The engine daemon script checks the amount of system RAM and scales the Java heap size appropriately (up to the default maximum of 512MB). If you are backing up a very large backup set, this can be overridden persistently by editing /volume1/@appstore/CrashPlan/syno_package.vars. If you’re considering buying a NAS purely to use CrashPlan and intend to back up more than a few hundred GB, I strongly advise buying one of the Intel models, which come with 1GB RAM and can be upgraded to 3GB very cheaply. RAM is very limited on the ARM ones: 128MB RAM on the J series means CrashPlan is running with only one fifth of the recommended heap size, so I doubt it’s viable for backing up very much at all. My DS111 has 256MB of RAM and currently backs up around 60GB with no issues. I have found that a 512MB heap was insufficient to back up more than 2TB of files on a Windows server – it kept restarting the backup engine every few minutes until I increased the heap to 1024MB.
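For example, to raise the heap to 1024MB persistently, the edit to syno_package.vars is simply to uncomment and change the heap line (the file is sourced by the daemon script, so it uses shell variable syntax):

```shell
# /volume1/@appstore/CrashPlan/syno_package.vars
# uncommented to expand the Java max heap beyond the scaled default (survives upgrades)
USR_MAX_HEAP=1024M
```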
  • As with my other syno packages, the daemon user account password is randomized when it is created using the openssl binary. DSM Package Center runs as the root user so my script starts the package using an su command. This means that you can change the password yourself and CrashPlan will still work.
  • The default location for saving friends’ backups is set to /volume1/crashplan/backupArchives (where /volume1 is your primary storage volume) to eliminate the chance of them being destroyed accidentally by uninstalling the package.
  • The first time you run the server you will need to stop it and restart it before you can connect the client. This is because a config file that’s only created on first run needs to be edited by one of my scripts. The engine is then configured to listen on all interfaces on the default port 4243.
  • Once the engine is running, you can manage it by installing CrashPlan on another computer, and editing the file conf/ui.properties on that computer so that this line:
    #serviceHost=127.0.0.1
    is uncommented (by removing the hash symbol) and set to the IP address of your NAS, e.g.:
    serviceHost=192.168.1.210
    On Windows you can also disable the CrashPlan service if you will only use the client.
  • If you need to manage CrashPlan from a remote location, I suggest you do so using SSH tunnelling as per this support document.
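The tunnel simply forwards the client’s local service port to the engine on the NAS. A typical invocation looks like the following (the hostname and account are examples, and I’m assuming the default engine port 4243):

```shell
# forward local port 4243 to the CrashPlan engine listening on the NAS
ssh -L 4243:localhost:4243 admin@my-nas.example.com
# with the tunnel up, leave serviceHost=127.0.0.1 in the client's conf/ui.properties
```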
  • The package supports upgrading to future versions while preserving the machine identity, logs, login details, and cache. Upgrades can now take place without requiring a login from the client afterwards.
  • If you remove the package completely and re-install it later, you can re-attach to previous backups. When you log in to the Desktop Client with your existing account after a re-install, you can select “adopt computer” to merge the records, and preserve your existing backups. I haven’t tested whether this also re-attaches links to friends’ CrashPlan computers and backup sets, though the latter does seem possible in the Friends section of the GUI. It’s probably a good idea to test that this survives a package reinstall before you start relying on it. Sometimes, particularly with CrashPlan PRO I think, the adopt option is not offered. In this case you can log into CrashPlan Central and retrieve your computer’s GUID. On the CrashPlan client, double-click on the logo in the top right and you’ll enter a command line mode. You can use the GUID command to change the system’s GUID to the one you just retrieved from your account.
  • The log which is displayed in the package’s Log tab is actually the activity history. If you’re trying to troubleshoot an issue you will need to use an SSH session to inspect the two engine log files which are:
    /volume1/@appstore/CrashPlan/log/engine_output.log
    /volume1/@appstore/CrashPlan/log/engine_error.log
  • When CrashPlan downloads and attempts to run an automatic update, the update script will most likely fail and stop the package. This is typically caused by syntax differences in the Synology versions of certain Linux shell commands (like rm, mv, or ps). If this happens, wait several minutes before taking any action, because the update script tries to restart CrashPlan 10 times at 10 second intervals. After this, simply start the package again in Package Center and my scripts will fix the update, then run it. One final package restart is required before you can connect with the CrashPlan Desktop client (remember to update that too).
  • After their backup is seeded some users may wish to schedule the CrashPlan engine using cron so that it only runs at certain times. This is particularly useful on ARM systems because CrashPlan currently prevents hibernation while it is running (unresolved issue, reported to Code 42). To schedule, edit /etc/crontab and add the following entries for starting and stopping CrashPlan:
    55 2 * * * root /var/packages/CrashPlan/scripts/start-stop-status start
    0  4 * * * root /var/packages/CrashPlan/scripts/start-stop-status stop

    This example would configure CrashPlan to run daily between 02:55 and 04:00am. CrashPlan by default will scan the whole backup selection for changes at 3:00am so this is ideal. The simplest way to edit crontab if you’re not really confident with Linux is to install Merty’s Config File Editor package, which requires the official Synology Perl package to be installed too (since DSM 4.2). After editing crontab you will need to restart the cron daemon for the changes to take effect:
    /usr/syno/etc.defaults/rc.d/S04crond.sh stop
    /usr/syno/etc.defaults/rc.d/S04crond.sh start

    It is vitally important that you do not improvise your own startup commands or use a different account because this will most likely break the permissions on the config files, causing additional problems. The package scripts are designed to be run as root, and they will in turn invoke the CrashPlan engine using its own dedicated user account.
  • If you update DSM later, you will need to re-install the Java package or else UTF-8 and locale support will be broken by the update.
  • If you decide to sign up for one of CrashPlan’s paid backup services as a result of my work on this, I would really appreciate it if you could use this affiliate link, or consider donating using the PayPal button on the right.
 

Package scripts

For information, here are the package scripts so you can see what the package is going to do. You can get more information about how packages work by reading the Synology Package wiki.

installer.sh

#!/bin/sh

#--------CRASHPLAN installer script
#--------package maintained at pcloadletter.co.uk

DOWNLOAD_PATH="http://download.crashplan.com/installs/linux/install/${SYNOPKG_PKGNAME}"
[ "${SYNOPKG_PKGNAME}" == "CrashPlan" ] && DOWNLOAD_FILE="CrashPlan_3.6.3_Linux.tgz"
[ "${SYNOPKG_PKGNAME}" == "CrashPlanPRO" ] && DOWNLOAD_FILE="CrashPlanPRO_3.6.3_Linux.tgz"
[ "${SYNOPKG_PKGNAME}" == "CrashPlanPROe" ] && DOWNLOAD_FILE="CrashPlanPROe_3.6.3_Linux.tgz"
DOWNLOAD_URL="${DOWNLOAD_PATH}/${DOWNLOAD_FILE}"
CPI_FILE="${SYNOPKG_PKGNAME}_*.cpi"
EXTRACTED_FOLDER="${SYNOPKG_PKGNAME}-install"
DAEMON_USER="`echo ${SYNOPKG_PKGNAME} | awk {'print tolower($_)'}`"
DAEMON_PASS="`openssl rand -base64 12 2>/dev/null`"
DAEMON_ID="${SYNOPKG_PKGNAME} daemon user"
DAEMON_HOME="/var/services/homes/${DAEMON_USER}"
OPTDIR="${SYNOPKG_PKGDEST}"
VARS_FILE="${OPTDIR}/install.vars"
ENGINE_SCRIPT="CrashPlanEngine"
SYNO_CPU_ARCH="`uname -m`"
[ "${SYNO_CPU_ARCH}" == "x86_64" ] && SYNO_CPU_ARCH="i686"
NATIVE_BINS_URL="http://packages.pcloadletter.co.uk/downloads/crashplan-native-${SYNO_CPU_ARCH}.tgz"   
NATIVE_BINS_FILE="`echo ${NATIVE_BINS_URL} | sed -r "s%^.*/(.*)%\1%"`"
INSTALL_FILES="${DOWNLOAD_URL} ${NATIVE_BINS_URL}"
TEMP_FOLDER="`find / -maxdepth 2 -name '@tmp' | head -n 1`"
#the Manifest folder is where friends' backup data is stored
#we set it outside the app folder so it persists after a package uninstall
MANIFEST_FOLDER="/`echo $TEMP_FOLDER | cut -f2 -d'/'`/crashplan"
LOG_FILE="${SYNOPKG_PKGDEST}/log/history.log.0"
UPGRADE_FILES="syno_package.vars conf/my.service.xml conf/service.login conf/service.model"
UPGRADE_FOLDERS="log cache"

source /etc/profile
PUBLIC_FOLDER="`cat /usr/syno/etc/smb.conf | sed -r '/\/public$/!d;s/^.*path=(\/volume[0-9]{1,4}\/public).*$/\1/'`"


preinst ()
{
  if [ -z "${PUBLIC_FOLDER}" ]; then
    echo "A shared folder called 'public' could not be found - note this name is case-sensitive. "
    echo "Please create this using the Shared Folder DSM Control Panel and try again."
    exit 1
  fi

  if [ -z "${JAVA_HOME}" ]; then
    echo "Java is not installed or not properly configured. JAVA_HOME is not defined. "
    echo "Download and install the Java Synology package from http://wp.me/pVshC-z5"
    exit 1
  fi

  if [ ! -f "${JAVA_HOME}/bin/java" ]; then
    echo "Java is not installed or not properly configured. The Java binary could not be located. "
    echo "Download and install the Java Synology package from http://wp.me/pVshC-z5"
    exit 1
  fi
  
  #is the User Home service enabled?
  UH_SERVICE=maybe
  synouser --add userhometest Testing123 "User Home test user" 0 "" ""
  UHT_HOMEDIR=`cat /etc/passwd | sed -r '/User Home test user/!d;s/^.*:User Home test user:(.*):.*$/\1/'`
  if echo $UHT_HOMEDIR | grep '/var/services/homes/' > /dev/null; then
    if [ ! -d $UHT_HOMEDIR ]; then
      UH_SERVICE=false
    fi
  fi
  synouser --del userhometest
  #remove home directory (needed since DSM 4.1)
  [ -e /var/services/homes/userhometest ] && rm -r /var/services/homes/userhometest
  if [ "${UH_SERVICE}" == "false" ]; then
    echo "The User Home service is not enabled. Please enable this feature in the User control panel in DSM."
    exit 1
  fi
  
  cd ${TEMP_FOLDER}
  for WGET_URL in ${INSTALL_FILES}
  do
    WGET_FILENAME="`echo ${WGET_URL} | sed -r "s%^.*/(.*)%\1%"`"
    [ -f ${TEMP_FOLDER}/${WGET_FILENAME} ] && rm ${TEMP_FOLDER}/${WGET_FILENAME}
    wget ${WGET_URL}
    if [[ $? != 0 ]]; then
      if [ -d ${PUBLIC_FOLDER} ] && [ -f ${PUBLIC_FOLDER}/${WGET_FILENAME} ]; then
        cp ${PUBLIC_FOLDER}/${WGET_FILENAME} ${TEMP_FOLDER}
      else     
        echo "There was a problem downloading ${WGET_FILENAME} from the official download link, "
        echo "which was \"${WGET_URL}\" "
        echo "Alternatively, you may download this file manually and place it in the 'public' shared folder. "
        exit 1
      fi
    fi
  done
 
  exit 0
}


postinst ()
{
  #create daemon user
  synouser --add ${DAEMON_USER} ${DAEMON_PASS} "${DAEMON_ID}" 0 "" ""
  
  #save the daemon user's homedir as variable in that user's profile
  #this is needed because new users seem to inherit a HOME value of /root which they have no permissions for.
  su - ${DAEMON_USER} -s /bin/sh -c "echo export HOME=\'${DAEMON_HOME}\' >> .profile"

  #extract CPU-specific additional binaries
  mkdir ${SYNOPKG_PKGDEST}/bin
  cd ${SYNOPKG_PKGDEST}/bin
  tar xzf ${TEMP_FOLDER}/${NATIVE_BINS_FILE} && rm ${TEMP_FOLDER}/${NATIVE_BINS_FILE}

  #extract main archive
  cd ${TEMP_FOLDER}
  tar xzf ${TEMP_FOLDER}/${DOWNLOAD_FILE} && rm ${TEMP_FOLDER}/${DOWNLOAD_FILE} 
  
  #extract cpio archive
  cd ${SYNOPKG_PKGDEST}
  cat "${TEMP_FOLDER}/${EXTRACTED_FOLDER}"/${CPI_FILE} | gzip -d -c | ${SYNOPKG_PKGDEST}/bin/cpio -i --no-preserve-owner
  
  echo "#uncomment to expand Java max heap size beyond prescribed value (will survive upgrades)" > ${SYNOPKG_PKGDEST}/syno_package.vars
  echo "#you probably only want more than the recommended 512M if you're backing up extremely large volumes of files" >> ${SYNOPKG_PKGDEST}/syno_package.vars
  echo "#USR_MAX_HEAP=512M" >> ${SYNOPKG_PKGDEST}/syno_package.vars
  echo >> ${SYNOPKG_PKGDEST}/syno_package.vars

  #the following Package Center variables will need retrieving if launching CrashPlan via cron
  echo "CRON_SYNOPKG_PKGNAME='${SYNOPKG_PKGNAME}'" >> ${SYNOPKG_PKGDEST}/syno_package.vars
  echo "CRON_SYNOPKG_PKGDEST='${SYNOPKG_PKGDEST}'" >> ${SYNOPKG_PKGDEST}/syno_package.vars

  cp ${TEMP_FOLDER}/${EXTRACTED_FOLDER}/scripts/${ENGINE_SCRIPT} ${OPTDIR}/bin
  cp ${TEMP_FOLDER}/${EXTRACTED_FOLDER}/scripts/run.conf ${OPTDIR}/bin
  mkdir -p ${MANIFEST_FOLDER}/backupArchives    
  chown -R ${DAEMON_USER} ${MANIFEST_FOLDER}
  
  #save install variables which Crashplan expects its own installer script to create
  echo TARGETDIR=${SYNOPKG_PKGDEST} > ${VARS_FILE}
  echo BINSDIR=/bin >> ${VARS_FILE}
  echo MANIFESTDIR=${MANIFEST_FOLDER}/backupArchives >> ${VARS_FILE}
  #leave these ones out which should help upgrades from Code42 to work (based on examining an upgrade script)
  #echo INITDIR=/etc/init.d >> ${VARS_FILE}
  #echo RUNLVLDIR=/usr/syno/etc/rc.d >> ${VARS_FILE}
  echo INSTALLDATE=`date +%Y%m%d` >> ${VARS_FILE}
  echo JAVACOMMON=\${JAVA_HOME}/bin/java >> ${VARS_FILE}
  cat ${TEMP_FOLDER}/${EXTRACTED_FOLDER}/install.defaults >> ${VARS_FILE}
  
  #remove temp files
  rm -r ${TEMP_FOLDER}/${EXTRACTED_FOLDER}
  
  #change owner of CrashPlan folder tree
  chown -R ${DAEMON_USER} ${SYNOPKG_PKGDEST}
  
  exit 0
}


preuninst ()
{
  #make sure engine is stopped
  su - ${DAEMON_USER} -s /bin/sh -c "${OPTDIR}/bin/${ENGINE_SCRIPT} stop"
  sleep 2
  
  exit 0
}


postuninst ()
{
  if [ -f ${SYNOPKG_PKGDEST}/syno_package.vars ]; then
    source ${SYNOPKG_PKGDEST}/syno_package.vars
  fi

  if [ "${LIBFFI_SYMLINK}" == "YES" ]; then
    rm /lib/libffi.so.5
  fi
  
  #if it doesn't exist, but is still a link then it's a broken link and should also be deleted
  if [ ! -e /lib/libffi.so.5 ]; then
    [ -L /lib/libffi.so.5 ] && rm /lib/libffi.so.5
  fi
    
  #remove daemon user
  synouser --del ${DAEMON_USER}
  
  #remove daemon user's home directory (needed since DSM 4.1)
  [ -e /var/services/homes/${DAEMON_USER} ] && rm -r /var/services/homes/${DAEMON_USER}
  
 exit 0
}

preupgrade ()
{
  #make sure engine is stopped
  su - ${DAEMON_USER} -s /bin/sh -c "${OPTDIR}/bin/${ENGINE_SCRIPT} stop"
  sleep 2
  
  #if identity and config data exists back it up
  if [ -d ${DAEMON_HOME}/.crashplan ]; then
    mkdir -p ${SYNOPKG_PKGDEST}/../${DAEMON_USER}_data_mig/conf
    mv ${DAEMON_HOME}/.crashplan ${SYNOPKG_PKGDEST}/../${DAEMON_USER}_data_mig
    for FILE_TO_MIGRATE in ${UPGRADE_FILES}; do
      if [ -f ${OPTDIR}/${FILE_TO_MIGRATE} ]; then
        cp ${OPTDIR}/${FILE_TO_MIGRATE} ${SYNOPKG_PKGDEST}/../${DAEMON_USER}_data_mig/${FILE_TO_MIGRATE}
      fi
    done
    for FOLDER_TO_MIGRATE in ${UPGRADE_FOLDERS}; do
      if [ -d ${OPTDIR}/${FOLDER_TO_MIGRATE} ]; then
        mv ${OPTDIR}/${FOLDER_TO_MIGRATE} ${SYNOPKG_PKGDEST}/../${DAEMON_USER}_data_mig
      fi
    done
  fi

  exit 0
}


postupgrade ()
{
  #use the migrated identity and config data from the previous version
  if [ -d ${SYNOPKG_PKGDEST}/../${DAEMON_USER}_data_mig/.crashplan ]; then
    mv ${SYNOPKG_PKGDEST}/../${DAEMON_USER}_data_mig/.crashplan ${DAEMON_HOME}
    for FILE_TO_MIGRATE in ${UPGRADE_FILES}; do
      if [ -f ${SYNOPKG_PKGDEST}/../${DAEMON_USER}_data_mig/${FILE_TO_MIGRATE} ]; then
        mv ${SYNOPKG_PKGDEST}/../${DAEMON_USER}_data_mig/${FILE_TO_MIGRATE} ${OPTDIR}/${FILE_TO_MIGRATE}
      fi
    done
    for FOLDER_TO_MIGRATE in ${UPGRADE_FOLDERS}; do
    if [ -d ${SYNOPKG_PKGDEST}/../${DAEMON_USER}_data_mig/${FOLDER_TO_MIGRATE} ]; then
      mv ${SYNOPKG_PKGDEST}/../${DAEMON_USER}_data_mig/${FOLDER_TO_MIGRATE} ${OPTDIR}
    fi
    done
    rmdir ${SYNOPKG_PKGDEST}/../${DAEMON_USER}_data_mig/conf
    rmdir ${SYNOPKG_PKGDEST}/../${DAEMON_USER}_data_mig
    
    #make CrashPlan log entry
    TIMESTAMP="`date +%D` `date +%I:%M%p`"
    echo "I ${TIMESTAMP} Synology Package Center updated ${SYNOPKG_PKGNAME} to version ${SYNOPKG_PKGVER}" >> ${LOG_FILE}
    
    #daemon user has been deleted and recreated so we need to reset ownership (new UID)
    chown -R ${DAEMON_USER} ${DAEMON_HOME}/.crashplan
    chown -R ${DAEMON_USER} ${SYNOPKG_PKGDEST}
    
    #read manifest location from the migrated XML config, and reset ownership on that path too
    if [ -f ${SYNOPKG_PKGDEST}/conf/my.service.xml ]; then
      MANIFEST_FOLDER=`cat ${SYNOPKG_PKGDEST}/conf/my.service.xml | grep "<manifestPath>" | cut -f2 -d'>' | cut -f1 -d'<'`
      chown -R ${DAEMON_USER} ${MANIFEST_FOLDER}
    fi
    
    #the following Package Center variables will need retrieving if launching CrashPlan via cron
    grep "^CRON_SYNOPKG_PKGNAME" ${SYNOPKG_PKGDEST}/syno_package.vars > /dev/null \
     || echo "CRON_SYNOPKG_PKGNAME='${SYNOPKG_PKGNAME}'" >> ${SYNOPKG_PKGDEST}/syno_package.vars
    grep "^CRON_SYNOPKG_PKGDEST" ${SYNOPKG_PKGDEST}/syno_package.vars > /dev/null \
     || echo "CRON_SYNOPKG_PKGDEST='${SYNOPKG_PKGDEST}'" >> ${SYNOPKG_PKGDEST}/syno_package.vars
  fi
  
  exit 0
}
 

start-stop-status.sh

#!/bin/sh

#--------CRASHPLAN start-stop-status script
#--------package maintained at pcloadletter.co.uk

if [ "${SYNOPKG_PKGNAME}" == "" ]; then
  #if this script has been invoked by cron then some Package Center vars are undefined
  source "`dirname $0`/../target/syno_package.vars"
  SYNOPKG_PKGNAME="${CRON_SYNOPKG_PKGNAME}" 
  SYNOPKG_PKGDEST="${CRON_SYNOPKG_PKGDEST}"
  CRON_LAUNCHED=True
fi

#Main variables section
DAEMON_USER="`echo ${SYNOPKG_PKGNAME} | awk {'print tolower($_)'}`"
DAEMON_HOME="/var/services/homes/${DAEMON_USER}"
OPTDIR="${SYNOPKG_PKGDEST}"
TEMP_FOLDER="`find / -maxdepth 2 -name '@tmp' | head -n 1`"
MANIFEST_FOLDER="/`echo $TEMP_FOLDER | cut -f2 -d'/'`/crashplan"
LOG_FILE="${SYNOPKG_PKGDEST}/log/history.log.0"
ENGINE_SCRIPT="CrashPlanEngine"
APP_NAME="CrashPlanService"
SCRIPTS_TO_EDIT="${ENGINE_SCRIPT}"
ENGINE_CFG="run.conf"
LIBFFI_SO_NAMES="5 6" #armada370 build of libjnidispatch.so is newer, and uses libffi.so.6
CFG_PARAM="SRV_JAVA_OPTS"
source ${OPTDIR}/install.vars

JAVA_MIN_HEAP=`grep "^${CFG_PARAM}=" "${OPTDIR}/bin/${ENGINE_CFG}" | sed -r "s/^.*-Xms([0-9]+)[Mm] .*$/\1/"`
SYNO_CPU_ARCH="`uname -m`"


case $1 in
  start)    
    #set the current timezone for Java so that log timestamps are accurate
    #we need to use the modern timezone names so that Java can figure out DST 
    SYNO_TZ=`cat /etc/synoinfo.conf | grep timezone | cut -f2 -d'"'`
    SYNO_TZ=`grep "^${SYNO_TZ}" /usr/share/zoneinfo/Timezone/tzname | sed -e "s/^.*= //"`
    grep "^export TZ" ${DAEMON_HOME}/.profile > /dev/null \
     && sed -i "s%^export TZ=.*$%export TZ='${SYNO_TZ}'%" ${DAEMON_HOME}/.profile \
     || echo export TZ=\'${SYNO_TZ}\' >> ${DAEMON_HOME}/.profile
    #this package stores the machine identity in the daemon user home directory
    #so we need to remove any old config data from previous manual installations or startups
    [ -d /var/lib/crashplan ] && rm -r /var/lib/crashplan

    #check persistent variables from syno_package.vars
    USR_MAX_HEAP=0
    if [ -f ${SYNOPKG_PKGDEST}/syno_package.vars ]; then
      source ${SYNOPKG_PKGDEST}/syno_package.vars
    fi
    USR_MAX_HEAP=`echo $USR_MAX_HEAP | sed -e "s/[mM]//"`

    #create or repair libffi symlink if a DSM upgrade has removed it
    for FFI_VER in ${LIBFFI_SO_NAMES}; do 
      if [ -e ${OPTDIR}/lib/libffi.so.${FFI_VER} ]; then
        if [ ! -e /lib/libffi.so.${FFI_VER} ]; then
          #if it doesn't exist, but is still a link then it's a broken link and should be deleted
          [ -L /lib/libffi.so.${FFI_VER} ] && rm /lib/libffi.so.${FFI_VER}
          ln -s ${OPTDIR}/lib/libffi.so.${FFI_VER} /lib/libffi.so.${FFI_VER}
        fi
      fi
    done

    #fix up some of the binary paths and fix some command syntax for busybox 
    #moved this to start-stop-status from installer.sh because Code42 push updates and these
    #new scripts will need this treatment too
    FIND_TARGETS=
    for TARGET in ${SCRIPTS_TO_EDIT}; do
      FIND_TARGETS="${FIND_TARGETS} -o -name ${TARGET}"
    done
    find ${OPTDIR} \( -name \*.sh ${FIND_TARGETS} \) | while IFS="" read -r FILE_TO_EDIT; do
      if [ -e ${FILE_TO_EDIT} ]; then
        #this list of substitutions will probably need expanding as new CrashPlan updates are released
        sed -i "s%^#!/bin/bash%#!${SYNOPKG_PKGDEST}/bin/bash%" "${FILE_TO_EDIT}"
        sed -i -r "s%(^\s*)nice -n%\1${SYNOPKG_PKGDEST}/bin/nice -n%" "${FILE_TO_EDIT}"
        sed -i -r "s%(^\s*)(/bin/ps|ps) [^\|]*\|%\1/bin/ps w \|%" "${FILE_TO_EDIT}"
        sed -i -r "s%\`ps [^\|]*\|%\`ps w \|%" "${FILE_TO_EDIT}"
        sed -i "s/rm -fv/rm -f/" "${FILE_TO_EDIT}"
        sed -i "s/mv -fv/mv -f/" "${FILE_TO_EDIT}"
      fi
    done

    #any downloaded upgrade script will usually have failed until the above changes are made so we need to
    #find it and start it, if it exists
    UPGRADE_SCRIPT=`find ${OPTDIR}/upgrade -name "upgrade.sh"`
    if [ -n "${UPGRADE_SCRIPT}" ]; then
      rm ${OPTDIR}/${ENGINE_SCRIPT}.pid
      SCRIPT_HOME=`dirname $UPGRADE_SCRIPT`

      #make CrashPlan log entry
      TIMESTAMP="`date +%D` `date +%I:%M%p`"
      echo "I ${TIMESTAMP} Synology repairing upgrade in ${SCRIPT_HOME}" >> ${LOG_FILE}

      mv ${SCRIPT_HOME}/upgrade.log ${SCRIPT_HOME}/upgrade.log.old
      chown -R ${DAEMON_USER} ${SYNOPKG_PKGDEST}
      su - ${DAEMON_USER} -s /bin/sh -c "cd ${SCRIPT_HOME} ; . upgrade.sh"
      mv ${SCRIPT_HOME}/upgrade.sh ${SCRIPT_HOME}/upgrade.sh.old
      exit 0
    fi

    #updates may also overwrite our native binaries
    if [ "${SYNO_CPU_ARCH}" == "x86_64" ]; then
      cp ${SYNOPKG_PKGDEST}/bin/synology-x86-glibc-2.4-shim.so ${OPTDIR}/lib
    else    
      cp -f ${SYNOPKG_PKGDEST}/bin/libjtux.so ${OPTDIR}
      cp -f ${SYNOPKG_PKGDEST}/bin/jna-3.2.5.jar ${OPTDIR}/lib
      cp -f ${SYNOPKG_PKGDEST}/bin/libffi.so.* ${OPTDIR}/lib
    fi

    #set appropriate Java max heap size
    RAM=$((`free | grep Mem: | sed -e "s/^ *Mem: *\([0-9]*\).*$/\1/"`/1024))
    if [ $RAM -le 128 ]; then
      JAVA_MAX_HEAP=80
    elif [ $RAM -le 256 ]; then
      JAVA_MAX_HEAP=192
    elif [ $RAM -le 512 ]; then
      JAVA_MAX_HEAP=384
    #CrashPlan's default max heap is 512MB
    elif [ $RAM -gt 512 ]; then
      JAVA_MAX_HEAP=512
    fi
    if [ $USR_MAX_HEAP -gt $JAVA_MAX_HEAP ]; then
      JAVA_MAX_HEAP=${USR_MAX_HEAP}
    fi   
    if [ $JAVA_MAX_HEAP -lt $JAVA_MIN_HEAP ]; then
      #can't have a max heap lower than min heap (ARM low RAM systems)
      JAVA_MAX_HEAP=${JAVA_MIN_HEAP}
    fi
    sed -i -r "s/(^${CFG_PARAM}=.*) -Xmx[0-9]+[mM] (.*$)/\1 -Xmx${JAVA_MAX_HEAP}m \2/" "${OPTDIR}/bin/${ENGINE_CFG}"
    
    #disable the use of the x86-optimized external Fast MD5 library if running on ARM and QorIQ CPUs
    #seems to be the default behaviour now but that may change again
    if [ "${SYNO_CPU_ARCH}" != "x86_64" ]; then
      grep "^${CFG_PARAM}=.*c42\.native\.md5\.enabled" "${OPTDIR}/bin/${ENGINE_CFG}" > /dev/null \
       || sed -i -r "s/(^${CFG_PARAM}=\".*)\"$/\1 -Dc42.native.md5.enabled=false\"/" "${OPTDIR}/bin/${ENGINE_CFG}"
    fi

    #move the Java temp directory from the default of /tmp
    grep "^${CFG_PARAM}=.*Djava\.io\.tmpdir" "${OPTDIR}/bin/${ENGINE_CFG}" > /dev/null \
     || sed -i -r "s%(^${CFG_PARAM}=\".*)\"$%\1 -Djava.io.tmpdir=${TEMP_FOLDER}\"%" "${OPTDIR}/bin/${ENGINE_CFG}"

    #reset ownership of all files to daemon user, so that manual edits to config files won't cause problems
    chown -R ${DAEMON_USER} ${SYNOPKG_PKGDEST}
    chown -R ${DAEMON_USER} ${DAEMON_HOME}    

    #now edit the XML config file, which only exists after first run
    if [ -f ${SYNOPKG_PKGDEST}/conf/my.service.xml ]; then

      #allow direct connections from CrashPlan Desktop client on remote systems
      #you must edit the value of serviceHost in conf/ui.properties on the client you connect with
      #users report that this value is sometimes reset so now it's set every service startup 
      sed -i "s/<serviceHost>127\.0\.0\.1<\/serviceHost>/<serviceHost>0\.0\.0\.0<\/serviceHost>/" "${SYNOPKG_PKGDEST}/conf/my.service.xml"
      
      #this change is made only once in case you want to customize the friends' backup location
      if [ "${MANIFEST_PATH_SET}" != "True" ]; then

        #keep friends' backup data outside the application folder to make accidental deletion less likely 
        sed -i "s%<manifestPath>.*</manifestPath>%<manifestPath>${MANIFEST_FOLDER}/backupArchives/</manifestPath>%" "${SYNOPKG_PKGDEST}/conf/my.service.xml"
        echo "MANIFEST_PATH_SET=True" >> ${SYNOPKG_PKGDEST}/syno_package.vars
      fi

      #since CrashPlan version 3.5.3 the value javaMemoryHeapMax also needs setting to match that used in bin/run.conf
      sed -i -r "s%(<javaMemoryHeapMax>)[0-9]+[mM](</javaMemoryHeapMax>)%\1${JAVA_MAX_HEAP}m\2%" "${SYNOPKG_PKGDEST}/conf/my.service.xml"
    else
      echo "Wait a few seconds, then stop and restart the package to allow desktop client connections." > "${SYNOPKG_TEMP_LOGFILE}"
    fi
    if [ "${CRON_LAUNCHED}" == "True" ]; then
      [ -e /var/packages/${SYNOPKG_PKGNAME}/enabled ] || touch /var/packages/${SYNOPKG_PKGNAME}/enabled
    fi

    #delete any stray Java temp files
    find /tmp -name "jna*.tmp" -user ${DAEMON_USER} | while IFS="" read -r FILE_TO_DEL; do
      if [ -e ${FILE_TO_DEL} ]; then
        rm ${FILE_TO_DEL}
      fi
    done

    #increase the system-wide maximum number of open files from Synology default of 24466
    echo "65536" > /proc/sys/fs/file-max

    #raise the maximum open file count from the Synology default of 1024 - thanks Casper K. for figuring this out
    #http://support.code42.com/Administrator/3.6_And_4.0/Troubleshooting/Too_Many_Open_Files
    ulimit -n 65536

    if [ "${SYNO_CPU_ARCH}" == "x86_64" ]; then
      #Intel synos running older DSM need rwojo's glibc version shim for inotify support
      #https://github.com/wojo/synology-x86-glibc-2.4-shim
      GLIBC_VER="`/lib/libc.so.6 | grep -m 1 version | sed -r "s/^[^0-9]*([0-9].*[0-9])\,.*$/\1/"`"
      if [ "${GLIBC_VER}" == "2.3.6" ]; then
        su - ${DAEMON_USER} -s /bin/sh -c "LD_PRELOAD=${SYNOPKG_PKGDEST}/lib/synology-x86-glibc-2.4-shim.so ${OPTDIR}/bin/${ENGINE_SCRIPT} start"
      else
        su - ${DAEMON_USER} -s /bin/sh -c "${OPTDIR}/bin/${ENGINE_SCRIPT} start"
      fi
    else
      su - ${DAEMON_USER} -s /bin/sh -c "${OPTDIR}/bin/${ENGINE_SCRIPT} start"
    fi
    exit 0
  ;;

  stop)
    su - ${DAEMON_USER} -s /bin/sh -c "${OPTDIR}/bin/${ENGINE_SCRIPT} stop"
    if [ "${CRON_LAUNCHED}" == "True" ]; then
      [ -e /var/packages/${SYNOPKG_PKGNAME}/enabled ] && rm /var/packages/${SYNOPKG_PKGNAME}/enabled
    fi
    exit 0
  ;;

  status)
    PID=`/bin/ps w | grep "app=${APP_NAME}" | grep -v grep | awk '{ print $1 }'`
    if [ -n "$PID" ]; then
      exit 0
    else
      exit 1
    fi
  ;;

  log)
    echo "${LOG_FILE}"
    exit 0
  ;;
esac
 

Changelog:

  • 0027 Fixed open file handle limit for very large backup sets (ulimit fix)
  • 0026 Updated all CrashPlan clients to version 3.6.3, improved handling of Java temp files
  • 0025 glibc version shim no longer used on Intel Synology models running DSM 5.0
  • 0024 Updated to CrashPlan PROe 3.6.1.4 and added support for PowerPC 2010 Synology models running DSM 5.0
  • 0023 Added support for Intel Atom Evansport and Armada XP CPUs in new DSx14 products
  • 0022 Updated all CrashPlan client versions to 3.5.3, compiled native binary dependencies to add support for Armada 370 CPU (DS213j), start-stop-status.sh now updates the new javaMemoryHeapMax value in my.service.xml to the value defined in syno_package.vars
  • 0021 Updated CrashPlan to version 3.5.2
  • 0020 Fixes for DSM 4.2
  • 0018 Updated CrashPlan PRO to version 3.4.1
  • 0017 Updated CrashPlan and CrashPlan PROe to version 3.4.1, and improved in-app update handling
  • 0016 Added support for Freescale QorIQ CPUs in some x13 series Synology models, and installer script now downloads native binaries separately to reduce repo hosting bandwidth; PowerQUICC PowerPC processors in previous Synology generations with older glibc versions are not supported
  • 0015 Added support for easy scheduling via cron – see updated Notes section
  • 0014 DSM 4.1 user profile permissions fix
  • 0013 Implemented update handling for future automatic updates from Code 42, and incremented CrashPlan PRO client to release version 3.2.1
  • 0012 Incremented CrashPlan PROe client to release version 3.3
  • 0011 Minor fix to allow a wildcard on the cpio archive name inside the main installer package (to fix the CP PROe client since Code 42 Software had amended the cpio file version to 3.2.1.2)
  • 0010 Minor bug fix relating to the daemon home directory path
  • 0009 Rewrote the scripts to be even easier to maintain and unified as much as possible with my imminent CrashPlan PROe server package, fixed a timezone bug (tightened regex matching), moved the script-amending logic from installer.sh to start-stop-status.sh with it now applying to all .sh scripts at each startup so perhaps updates from Code 42 might work in future; if wget fails to fetch the installer from Code 42 the installer will look for the file in the public shared folder
  • 0008 Merged the 14 package scripts each (7 for ARM, 7 for Intel) for CP, CP PRO and CP PROe – 42 scripts in total – down to just two! ARM and Intel are now supported by the same package, Intel synos now have working inotify support (Real-Time Backup) thanks to rwojo’s shim to pass the glibc version check, the upgrade process now retains login, cache and log data (no more re-scanning), and users can specify a persistent larger max heap size for very large backup sets
  • 0007 Fixed a bug that broke CrashPlan if the Java folder moved (if you changed version)
  • 0006 Installation now fails without the User Home service enabled, fixed Daylight Saving Time support, automated replacing the ARM libffi.so symlink which is destroyed by DSM upgrades, stopped assuming the primary storage volume is /volume1, reset ownership on /var/lib/crashplan and the Friends backup location after installs and upgrades
  • 0005 Added a warning to restart the daemon after first run, and improved the upgrade process again
  • 0004 Updated to CrashPlan 3.2.1 and improved the package upgrade process, forced binding to 0.0.0.0 at each startup
  • 0003 Fixed ownership of /volume1/crashplan folder
  • 0002 Updated to CrashPlan 3.2
  • 0001 Initial public release
 
 

2,206 thoughts on “CrashPlan packages for Synology NAS”

  1. catmambo

    Can’t for the life of me get the Java installation to install correctly. I have the exact same file that the popup tells me, in the shared ‘public’ folder, yet it keeps saying it can’t see it… What am I doing wrong? Or can someone share a link to the version which has worked for them? I have a DS411.

    Thanks

    Reply
    1. catmambo

      Doh – Yup, I was using Chrome, once I used Firefox it worked a charm. Hope this helps someone else

      Reply
  2. bundyo

    Hmm, it seems libffi.so.5 should be linked/copied to /usr/lib after a Synology system update, or CrashPlan won’t sync (it stops watching the selected folders).

    Reply
  3. James Berry

    I have been holding off upgrading to the new DSM (I am on DSM4, but not the new patch which has just come out).

    Does everything just upgrade seamlessly or do I need to re-install crashplan and/or java afterwards?

    Reply
    1. Chris

      I had to uninstall CrashPlan and reinstall it. It’s synchronising now. Don’t know why, but it appeared to run for a couple of days and never backed anything up – just said “Waiting for backup” or something when it was set to always back up. Seems to be working now; will update when it’s done.

      Reply
      1. bundyo

        The issue is that libffi.so.5 (or a link, didn’t check) is removed from /usr/lib after upgrade and causes the “waiting for backup” message. Symlinking/copying it back should resolve it.

      2. patters Post author

        It’s a symlink – I fixed that in the latest update. The start-stop-status script checks for it at each startup on ARM systems.

  4. Bjarne Rasmussen

    @patters.
    I think this solution is GREAT. If only I could get it working :-)
    I know you have put in a lot of work in this – but you asked me about some details regarding my problem -> http://pcloadletter.co.uk/2012/01/30/crashplan-syno-package/comment-page-3/#comment-3632 … and I hope my answer leads to a new version/script as you mentioned.

    But is there any status on the project – or can I do anything to help / give you more information?

    Kind Regards

    Reply
    1. patters Post author

      Have a try with the new version – it doesn’t assume the existence of /volume1 so hopefully it should work for you now.

      Reply
  5. patters Post author

    I have incremented all the builds with quite a few enhancements – see the changelog at the bottom of the blog post.

    The CPPROe client package is updated to version 3.2.1. Don’t upgrade unless your storage provider is running that version on their backend. If you need the older CPPROe packages, they can be downloaded and installed manually from:
    http://dl.dropbox.com/u/1188556/blog/old/crashplanproe20100308-88f8281-003.spk
    http://dl.dropbox.com/u/1188556/blog/old/crashplanproe20100308-x86-003.spk

    Reply
    1. Bjarne Rasmussen

      GREAT WORK !!
      I had never hoped that a fully functional solution would “pop up” just like that :-)
      This was a really nice surprise – Thank you very much!

      My dysfunctional solution – which gave me the message “Wait a few seconds, then stop and restart the package to allow desktop client connections.” – simply needed an update, and now it works.

      Hands down – patters you are the man!

      I owe you a cold one – for sure.

      I hope that everyone else enjoys this as much as me ….. THUMBS UP !!!!

      Reply
      1. patters Post author

        No probs. Cold ones gladly received using the PayPal donate button on the right hand side of this page :)

    1. patters Post author

      I don’t think there’s much that can be done. I’ve just been researching it and I see that inotify support was integrated from glibc 2.4 onwards, but the Intel synos use 2.3.6. The ARM synos have glibc 2.5 which is why they’re fine.

      There is a patch to add it to 2.3.6, but you’d have to recompile glibc which is out of my depth:
      http://www.linuxfromscratch.org/patches/lfs/6.2/glibc-2.3.6-inotify-1.patch

      I tried to compile glibc once and got tied in knots. If someone has the knowledge it could be interesting to try, though I guess it is possible that Synology already built it with this patch included. Perhaps it’s just CrashPlan’s version-matching that is denying us the functionality. It’s odd that the /proc/sys/fs/inotify folder exists on Intel synos if inotify support wasn’t included. Perhaps someone who owns an Intel syno can compile inotify-tools and see if they work.

      Reply
      1. rwojo

        Before I posted this I did indeed get inotify-tools to compile and run successfully on my Intel DS411+. It works great on glibc =2.4. The options I see are:

        1) Update glibc: I’m not touching it :) I opened a ticket with Synology and I’ll wait. It’s too dangerous to upgrade the entire OS glibc, and it’s just too much work to get it to compile and try with LD_LIBRARY_PATH just for CP.

        2) Spoof the version of glibc from gnu_get_libc_version, add the APIs as necessary using LD_PRELOAD. This could work, maybe I’ll try it.

        3) Hack up CP to remove the glibc checks, but who knows what it would rely on that is missing from glibc 2.3.6 and only exists in glibc 2.4.

        4) Wait for Synology to upgrade DSM 4.x to glibc 2.4+

      2. patters Post author

        I like the look of option 2 also. Keep us posted!
        I also raised a support request, so we may find out the official reason.

        EDIT – just noticed, do some Intel synos have glibc 2.3.6 and others have 2.4 then?

      3. patters Post author

        I got a response from CrashPlan support yesterday, but they didn’t answer my question (which glibc is the minimum requirement?). It was just a “headless systems are not supported” :(

      4. Bjarne Rasmussen

        @patters
        I’ve got a DS411+II Intel syno. How can I check the version of glibc so I can add info to your question?

      5. rwojo

        Yay, method #2 works. It will notify me of file changes instantly in the GUI when I modify the filesystem now.

        However, the counts are weird! For example if I just copy and paste the file, the count of Todo files is 1. If I paste again, it goes up to 4? Something must be wrong with the counts in the CP engine/GUI. Can someone verify that it happens on the ARM version on Synology NASes, too?

        I’ve posted the code and binary at https://github.com/wojo/synology-x86-glibc-2.4-shim

        I have to run out for a while, but everything looks great from initial testing. Perhaps you can create a beta package and early adopters can bang on it?

      6. rwojo

        Interesting – the file count is based on inotify messages for nearly anything, it seems, even accessing the directory. Weird; I should probably notify CrashPlan if it is indeed a bug, because it artificially inflates the count.

        Of course I’m curious if this happens on the ARM version, too. When I have time I’ll verify on my Macs, too.

      7. patters Post author

        I have just found time to check this on mine. I get true counts on ARM. Replacing a file, or accessing it does not increase the count. I thought for a minute that the count did go up to 2 unexpectedly for a single jpg I pasted from a Windows machine, but that was because Windows created a thumbs.db file when I viewed it. Are you sure your testing didn’t have something similar clouding the result (a Mac creating those hidden files in each folder you accessed for instance)?

      8. rwojo

        Oh, and to your question, no, the comment I posted originally was supposed to say glibc less than 2.4, specifically 2.3.6 works with inotify. I must have just typo’d that because less than or equal doesn’t make sense either :) Does this accept HTML? Tests: < < test

      9. rwojo

        Counts are fine when I do things with ‘touch’ for example. It must be weird things that OS X does (like the .DS_Store files, etc). I did see a few rapid new files not get counted until the next update, but the count was always eventually spot on.

      10. patters Post author

        Great, that’s good news. On account of how hard all this was getting to maintain, I spent a ridiculous amount of time yesterday rewriting all the scripts and merging them down to just two – using superzebulon’s method, where each script is just a stub that invokes the main one with:
        #!/bin/sh
        . `dirname $0`/installer.sh
        `basename $0`

        At one point a stray backtick in the main script cost me several hours! The flip side of unification is that a small error in an unrelated area of the script can cause problems in another, and you get no help debugging.

        Now all product versions (CP, CP PRO, CP PROe – on ARM and intel) use the same unified scripts, just with different vars defined at the top. I implemented your suggestions and vastly improved the upgrade process. Logs are migrated, so is the cache (no more rescanning), and you no longer have to log back in with the client afterwards.

        So I just need to incorporate your glibc version shim. I’ll put something up for testing soon.

  6. rwojo

    For NAS units with more than 1GB of RAM (I have my DS411+ upgraded to 2GB) it’d be nice to bump the Java heap size to 1GB, with something like this in start-stop-status:

    if [ $RAM -le 128 ]; then
      JAVA_MAX_HEAP=80M
    elif [ $RAM -le 256 ]; then
      JAVA_MAX_HEAP=192M
    elif [ $RAM -le 512 ]; then
      JAVA_MAX_HEAP=384M
    elif [ $RAM -le 1024 ]; then
      JAVA_MAX_HEAP=512M
    else
      JAVA_MAX_HEAP=1024M
    fi

    NAS units usually have a lot of data on them, and this seems to help CP maintain its data structures for such a large number of files and folders. Perhaps this can be configured somehow in the GUI someday, or overridden from a file that sticks around between upgrades?

    Reply
    1. rwojo

      Hmm, I think I found a bug in the start-stop-status script. My default CP run.conf contains 512m, not 512M, so the sed line isn’t replacing that value like it should. It should either be made case-insensitive, or just match the CP default of 512m and replace with 80m, 192m, etc.

      Reply
      1. patters Post author

        I’ll add that 1024 heap to the next version. Are you sure about that 512m value though? The ARM and Intel packages pull the same .tgz installer file from crashplan.com and mine substituted ok to 192M. I can make it case insensitive in future though, just in case.

      2. rwojo

        Hah, while doing testing I saw it change from ‘m’ to ‘M’. I have no idea why. Probably safe to just match [mM] :)

      3. rwojo

        After testing, it turns out 1GB adds quite a bit of memory pressure on my box because of what I’m running.

        Do you have a suggestion on being able to configure the max heap amount that would be user configurable and sticky between upgrades? Perhaps a config file that can be placed somewhere?

        For now, I’m moving it back to 768MB or so.

      4. rwojo

        Also, besides being able to source JAVA_MAX_HEAP from somewhere else, it looks like you can’t just change JAVA_MAX_HEAP=xxxM and rerun the script, because sed expects to find the default 512m to replace. I changed that line to the following:

        sed -i "s/-Xmx[0-9]\+[mM]/-Xmx${JAVA_MAX_HEAP}/g" ${SYNOPKG_PKGDEST}/bin/run.conf

  7. Einstijn

    Amazing job and good instructions.

    Instructions work for the CrashPlan client 3.2.1 (Java 7) on a DS212+, DSM 4.0-2219. Uploading at the 3.0Mbps maximum (my setting) takes the CPU to 90%.

    Is it possible to do any optimizing settings on the DS212+ as well?

    Reply
    1. patters Post author

      The CPU use is caused by the hashing, compressing and de-dupe during backup. Once the seeding is done that will tail off. There’s not much to optimize really.

      Reply
      1. Einstijn

        I was hoping for better upload speeds than 2–3Mbps with my 50/50 connection, that’s all…

  8. patters Post author

    I believe I have finally solved a longstanding issue for some of you who were consistently reporting that you had to adopt your system every time you restarted the package. Well I got this problem myself after updating to version 007 and it took me a while to figure out.

    CrashPlan will save its .identity file in one of two locations. Firstly it will try to create /var/lib/crashplan/.identity. If that write operation fails, it will fall back to ~/.crashplan/.identity, which is the crashplan user’s home directory (usually /volume1/homes/crashplan).

    It seems that if both of these files exist with different contents, you get this warning every time the console starts:
    Logged out by authority. Logged in on another computer.

    …and you have to log in and adopt your existing backup records again. My guess is that those of you with /var/lib/crashplan/.identity present are people who had been running a manual install of CrashPlan, since my package never had write access to this directory until version 007. So the fix for this issue will be to delete this folder if it’s present.

    Reply
    1. rwojo

      I ran into the same issue from manually running the CP engine from the command line (which I don’t do anymore!) while doing testing for the x86 inotify issue.

      I solved it by removing one of the .identity files as well; however, the two that seem to be tested are /var/services/homes/crashplan/.crashplan/.identity and /var/lib/crashplan/.identity. I don’t see the usage of ~crashplan/.identity in the pecking order in service.log.0.

      Reply
      1. patters Post author

        By ‘~’ I was meaning the daemon user’s home directory (which is /var/services/homes/crashplan).
        Good point about manual startups – I’ll move that logic to the start-stop-status script instead of postinst then, so it deals with that case every startup. Did you see my update about the inotify counts?

  9. Graham Wheeler

    Since the latest update, I find that my backups are running really slowly. I have the network bandwidth set to 1Mbps when “user present” and 2Mbps when “user away”, with 50% CPU allowed. I have 25Mbps symmetrical internet connectivity. But I am only seeing about 180kbps transfer rates now, with my backup completion estimated in about 2 years from now. Is anyone else seeing this?

    Reply
  10. patters Post author

    New version out now. This one’s got Real-Time Backup support for Intel at last, thanks to rwojo. I also unified the scripts – it was getting too tricky to maintain 6 different versions. The upgrade process is way better too. No more entering your password, or re-scanning. See changelog for more details.

    Reply
    1. Michael Maillot (@Mmaillot)

      Hi,

      Nice work. Just a quick remark / advice to other users: I used to back up my whole NAS thanks to this CrashPlan package. The backup was almost complete, except for 2 files. Because of those 2 files, the backup would fail every 15 minutes and then retry 15 minutes later. For a reason I do not know – maybe those files are locked in a certain manner – CrashPlan cannot back up aquota.user and aquota.group.

      I simply unchecked those 2 files and now everything is fine.

      Michaël.

      Reply
      1. Richard

        Those 2 files are from the syno itself. I added them to my exceptions – otherwise it will just retry over and over and over…

  11. Ingmar Verheij

    Hi,

    Great job! This is way easier than the installation guides found on the internet!

    I’ve installed CrashPlan Pro (3.2-008, today) and tried connecting from Windows (through an SSL tunnel) but the client (3.8.2010) keeps saying “CrashPlan has been disconnected from the backup engine.”

    The following message is logged in service.log.0

    [05.02.12 21:26:07.576 WARN Sel-UI-R com.code42.nio.net.Connection ] Error building message: Unable to deserialize CommandMessage, IOException in uncompressObject
    [05.02.12 21:26:07.580 WARN Sel-UI-R com.code42.nio.net.Factory ] read() Exception com.code42.exception.DebugRuntimeException: buildMessage(): Disconnect! Exception! messageId=31689, session=Session[id=528883553596342675, closed=false, isLocal=true, lat=2012-05-02T21:26:07:574, lrt=2012-05-02T21:26:07:574, lwt=2012-05-02T21:26:03:764, #pending=1, enqueued=false, local=127.0.0.1:4243, remote=127.0.0.1:51315, usingProtoHeaders=false, usingEncryptedHeaders=false], dataBuffer=java.nio.HeapByteBuffer[pos=0 lim=10 cap=10], CommandMessage[null] , Context@10901009[/127.0.0.1:4243->/127.0.0.1:51315], com.code42.nio.net.Factory$ReadListener@1cc7c50, com.code42.exception.DebugRuntimeException: buildMessage(): Disconnect! Exception! messageId=31689, session=Session[id=528883553596342675, closed=false, isLocal=true, lat=2012-05-02T21:26:07:574, lrt=2012-05-02T21:26:07:574, lwt=2012-05-02T21:26:03:764, #pending=1, enqueued=false, local=127.0.0.1:4243, remote=127.0.0.1:51315, usingProtoHeaders=false, usingEncryptedHeaders=false], dataBuffer=java.nio.HeapByteBuffer[pos=0 lim=10 cap=10], CommandMessage[null]
    com.code42.exception.DebugRuntimeException: buildMessage(): Disconnect! Exception! messageId=31689, session=Session[id=528883553596342675, closed=false, isLocal=true, lat=2012-05-02T21:26:07:574, lrt=2012-05-02T21:26:07:574, lwt=2012-05-02T21:26:03:764, #pending=1, enqueued=false, local=127.0.0.1:4243, remote=127.0.0.1:51315, usingProtoHeaders=false, usingEncryptedHeaders=false], dataBuffer=java.nio.HeapByteBuffer[pos=0 lim=10 cap=10], CommandMessage[null]
    at com.code42.messaging.nio.MessageConnection.buildMessage(MessageConnection.java:283)
    at com.code42.messaging.nio.MessageConnection.enqueueMessages(MessageConnection.java:171)
    at com.code42.messaging.nio.MessageConnection.addMessage(MessageConnection.java:152)
    at com.code42.messaging.nio.MessageConnection.access$000(MessageConnection.java:51)
    at com.code42.messaging.nio.MessageConnection$MessageBuffer.read(MessageConnection.java:754)
    at com.code42.messaging.nio.MessageConnection.read(MessageConnection.java:680)
    at com.code42.nio.net.Factory$ReadListener.processKeys(Factory.java:769)
    at com.code42.nio.SelectorEngine.run(SelectorEngine.java:142)
    at java.lang.Thread.run(Thread.java:722)
    Caused by: com.code42.exception.DebugRuntimeException: Unable to deserialize CommandMessage, IOException in uncompressObject, CommandMessage[null]
    at com.code42.messaging.message.RequestMessage.fromBytes(RequestMessage.java:71)
    at com.code42.messaging.nio.MessageConnection.buildMessage(MessageConnection.java:274)
    … 8 more
    Caused by: com.code42.io.CompressionIOException: IOException in uncompressObject
    at com.code42.io.CompressUtility.uncompressObject(CompressUtility.java:237)
    at com.code42.messaging.message.RequestMessage.fromBytes(RequestMessage.java:66)
    … 9 more
    Caused by: java.util.zip.ZipException: Not in GZIP format
    at java.util.zip.GZIPInputStream.readHeader(GZIPInputStream.java:164)
    at java.util.zip.GZIPInputStream.(GZIPInputStream.java:78)
    at java.util.zip.GZIPInputStream.(GZIPInputStream.java:90)
    at com.code42.io.CompressUtility.uncompressObject(CompressUtility.java:231)
    … 10 more

    [05.02.12 21:26:07.582 INFO Factory$Notifier-UI0 com.backup42.service.ui.UIController ] UISession Ended after less than a minute – 528883553596342675

    Have you got any clues?
    PS: It’s on a DS212+

    Reply
    1. Ingmar Verheij

      I’ve found the reason why I got the error above. The CrashPlan client installed via your package is 3.2-008 while I’ve got 2010.03.08 installed on my desktop. After upgrading my desktop client to the same version this problem was solved.

      The reason I installed the older version was because this is mandatory for the Dutch storage provider ProBackup (http://crashplan.probackup.nl/). I wrote a blog – http://t.co/2dQLpUg4 – about how to downgrade your package to the version (and configuration) used by them.

      Reply
      1. patters Post author

        A bit further back in the comments I posted the download URLs for the older 2010 version of the PROe package I had made.

  12. Robin

    I cannot see the crashplan package in my list. I can see the Minecraft, Craftbucket, openremote, and the subsonic-backup packages, but no other ones
    I am using a Synology DS209+II with DSM 4.

    Reply
      1. Sven

        Hi patters,
        I am receiving the same error (CrashPlan has been disconnected from the backup engine) also on a DS412+ with DSM 4.0 2228.
        Clean install, netstat shows:
        DiskStation> netstat -an | grep ':424.'
        tcp 0 0 0.0.0.0:4243 0.0.0.0:* LISTEN
        tcp 0 0 127.0.0.1:4243 127.0.0.1:36612 TIME_WAIT

        I am still using the “old” CrashPlan PROe version 3.8.2010, because the CrashPlan PROe server isn’t updated to the new 3.2 version.

        I opened a tunnel (the same way I do with every DS I used before).

        The only difference in my installation is the use of the new Java package from your site and the new version from Oracle:
        ejre-1_6_0_32-fcs-b05-linux-i586-headless-05_apr_2012.tar.gz

        service.log.0 shows the following after starting the CrashPlanClient (GUI) on my Mac:

        [05.17.12 02:07:22.171 WARN Sel-UI-R com.code42.nio.net.Connection ] Error building message: Unable to deserialize CommandMessage, IOException in uncompressObject
        [05.17.12 02:07:22.173 WARN Sel-UI-R com.code42.nio.net.Factory ] read() Exception com.code42.exception.DebugRuntimeException: buildMessage(): Disconnect! Exception! messageId=31689, session=Session[id=530941240467324793, closed=false, isLocal=true, lat=2012-05-17T02:07:22:170, lrt=2012-05-17T02:07:22:170, lwt=2012-05-17T02:07:20:588, #pending=1, enqueued=false, local=127.0.0.1:4243, remote=127.0.0.1:48494, usingProtoHeaders=false, usingEncryptedHeaders=false], dataBuffer=java.nio.HeapByteBuffer[pos=0 lim=10 cap=10], CommandMessage[null] , Context@22823147[/127.0.0.1:4243->/127.0.0.1:48494], com.code42.nio.net.Factory$ReadListener@1fa5e5e, com.code42.exception.DebugRuntimeException: buildMessage(): Disconnect! Exception! messageId=31689, session=Session[id=530941240467324793, closed=false, isLocal=true, lat=2012-05-17T02:07:22:170, lrt=2012-05-17T02:07:22:170, lwt=2012-05-17T02:07:20:588, #pending=1, enqueued=false, local=127.0.0.1:4243, remote=127.0.0.1:48494, usingProtoHeaders=false, usingEncryptedHeaders=false], dataBuffer=java.nio.HeapByteBuffer[pos=0 lim=10 cap=10], CommandMessage[null]
        com.code42.exception.DebugRuntimeException: buildMessage(): Disconnect! Exception! messageId=31689, session=Session[id=530941240467324793, closed=false, isLocal=true, lat=2012-05-17T02:07:22:170, lrt=2012-05-17T02:07:22:170, lwt=2012-05-17T02:07:20:588, #pending=1, enqueued=false, local=127.0.0.1:4243, remote=127.0.0.1:48494, usingProtoHeaders=false, usingEncryptedHeaders=false], dataBuffer=java.nio.HeapByteBuffer[pos=0 lim=10 cap=10], CommandMessage[null]
        at com.code42.messaging.nio.MessageConnection.buildMessage(MessageConnection.java:283)
        at com.code42.messaging.nio.MessageConnection.enqueueMessages(MessageConnection.java:171)
        at com.code42.messaging.nio.MessageConnection.addMessage(MessageConnection.java:152)
        at com.code42.messaging.nio.MessageConnection.access$000(MessageConnection.java:51)
        at com.code42.messaging.nio.MessageConnection$MessageBuffer.read(MessageConnection.java:754)
        at com.code42.messaging.nio.MessageConnection.read(MessageConnection.java:680)
        at com.code42.nio.net.Factory$ReadListener.processKeys(Factory.java:769)
        at com.code42.nio.SelectorEngine.run(SelectorEngine.java:142)
        at java.lang.Thread.run(Thread.java:662)
        Caused by: com.code42.exception.DebugRuntimeException: Unable to deserialize CommandMessage, IOException in uncompressObject, CommandMessage[null]
        at com.code42.messaging.message.RequestMessage.fromBytes(RequestMessage.java:71)
        at com.code42.messaging.nio.MessageConnection.buildMessage(MessageConnection.java:274)
        … 8 more
        Caused by: com.code42.io.CompressionIOException: IOException in uncompressObject
        at com.code42.io.CompressUtility.uncompressObject(CompressUtility.java:237)
        at com.code42.messaging.message.RequestMessage.fromBytes(RequestMessage.java:66)
        … 9 more
        Caused by: java.io.IOException: Not in GZIP format
        at java.util.zip.GZIPInputStream.readHeader(GZIPInputStream.java:141)
        at java.util.zip.GZIPInputStream.(GZIPInputStream.java:56)
        at java.util.zip.GZIPInputStream.(GZIPInputStream.java:65)
        at com.code42.io.CompressUtility.uncompressObject(CompressUtility.java:231)
        … 10 more

        [05.17.12 02:07:22.174 INFO Factory$Notifier-UI0 com.backup42.service.ui.UIController ] UISession Ended after less than a minute – 530941240467324793
        [05.17.12 02:07:22.175 INFO Factory$Notifier-UI0 com.backup42.common.command.CliExecutor ] RUN COMMAND: auto.idle
        [05.17.12 02:07:22.175 INFO Factory$Notifier-UI0 backup42.service.backup.BackupController] UI:: AUTO IDLE… lowBandwidth=0 B/s, activeThrottleRate=20

        Thanks in advance

      2. Sven

        patters, you mentioned you don’t have access to an Intel-based Synology DS. If you would like, I can give you access to a brand new DS412+. It is currently not in use. Please contact me via my mail address (merckenssc@me.com) if this would help you.

        I can’t solve the error (losing connection to the backup engine); it doesn’t matter whether I use a tunnel or a modified client (redirected to the IP of the DiskStation).

        Thanks in advance

      3. Sven

        patters:
        Could it be a problem of the “shown” version?
        the CP-app.log shows:
        CPVERSION = 3.2.1 – 1332824401321
        which is wrong
        I installed 3.8.2010 and this version shows on other installations (older java-package):

        CPVERSION = 1268066820719 (2010-03-08T16:47:00:719+0000)

        Thanks in advance

      4. Sven

        I think reading helps… especially me ;)
        The new CrashPlan PROe package installer downloads the CP client from the Code 42 webpage. OK… that isn’t a wanted feature – or rather, there should be an option to install the old version (because the new 3.2 and the old 3.8.2010 aren’t compatible).
        patters: how can I install the old version with your package?
        Thanks in advance

  13. msilano

    Hi Patters. Excellent work on this – this will inspire me to a) sign up for Crashplan and b) use your affiliate link. Just one small bit of strangeness:

    I installed on a new 412+. Everything is working but, looking at the process listing once CrashPlan is up and running, there are… count ’em… 68 individual processes running, all with the Java launch for CrashPlan.

    I like multi-threaded programs, just didn’t expect this to be one of them. Each instance is taking 652m of virtual memory according to top.

    I’ve checked all of the launch scripts; they are working properly. Whatever is happening is within the Crashplan engine.

    Is this normal? If not, any suggestions as to how to proceed?

    Thanks,
    Mike

    Reply
    1. patters Post author

      That doesn’t sound right. There should only be one – it’s not multithreaded. Does this carry on when you reboot?

      Reply
      1. msilano

        Indeed it does.

        I would post the output of the PS command, but it is rather large. There are exactly 64 instances – strange coincidence. Each process looks like this:

        crashpla 649m S N /volume1/@appstore/java6/jre/bin/java -Dfile.encoding=UTF-8 -Dapp=CrashPlanService -DappBaseName=CrashPla

        That appears to match the CrashPlanEngine launch for the server. However, there aren’t multiple startups referenced in the engine_output log and the other “instances” aren’t taking CPU time, so I’m not clear as to what is happening.

        Let me know what I can do to help troubleshoot this further.

        -mike

      2. patters Post author

        Hmm strange. Can you try editing /var/packages/CrashPlan/scripts/start-stop-status and remove the LD_PRELOAD bit (the glibc shim) to see if that makes any difference? Had you ever had a manual install of CrashPlan on there?

      3. msilano

        Can’t seem to reply to the latest comment. Weird.

        In any case, the start/stop script appears to be launching once using the shim. Launching without the shim didn’t seem to make any difference.

        There was an incomplete manual install of crashplan from the synology wiki before I found your site. All remnants of that install were removed. Searches of the file system for any launch scripts or old versions show nothing.

        Thanks again for the reply and the assistance.

        -m

      4. patters Post author

        So if you kill them all then start CrashPlan in Package Center, does it immediately spawn 64 processes or do they take a while to turn up?

      5. msilano

        Very weird indeed. I can stop and start the processes using CrashPlanEngine directly; in any case, once restarted, they quickly spawn. Not all at once, but after about a minute we have a full allotment of processes. I also tried running the engine once directly, using the same command line as CrashPlanEngine; same results.

        -m

      6. brunchto

        Same for me.
        The processes disappear as soon as I stop CrashPlan from the Package Center, and reappear when I restart it. There’s a root process; the others seem to be child processes:

        27847 1 crashpla S N 645m 21.4 0.0 /volume1/@appstore/java6/jre/bin/java -Dfile.encoding=UTF-8 -Dapp=CrashPlanService -DappBaseName=CrashPlan
        27850 27847 crashpla S N 645m 21.4 0.0 /volume1/@appstore/java6/jre/bin/java -Dfile.encoding=UTF-8 -Dapp=CrashPlanService -DappBaseName=CrashPlan
        27853 27850 crashpla S N 645m 21.4 0.0 /volume1/@appstore/java6/jre/bin/java -Dfile.encoding=UTF-8 -Dapp=CrashPlanService -DappBaseName=CrashPlan
        27854 27850 crashpla S N 645m 21.4 0.0 /volume1/@appstore/java6/jre/bin/java -Dfile.encoding=UTF-8 -Dapp=CrashPlanService -DappBaseName=CrashPlan
        27856 27850 crashpla S N 645m 21.4 0.0 /volume1/@appstore/java6/jre/bin/java -Dfile.encoding=UTF-8 -Dapp=CrashPlanService -DappBaseName=CrashPlan

  14. patters Post author

    Anyone want to help to try and get drive spin down/sleep mode working with this? I’ve had a look at the QNAP forum (which you need to be a member of just to read) and over there someone thought it had something to do with the constant logging. I moved my log folder to /tmp and symlinked it but the drive didn’t spin down as far as I know, though my testing was pretty cursory.
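    For reference, the relocation test I mean was roughly this (the package path is assumed from my install; note that /tmp is cleared on reboot, so the engine may need the folder recreated before it starts):

```shell
#!/bin/sh
# Sketch of the log relocation described above: move CrashPlan's log folder
# onto tmpfs so logging no longer touches the disks. CP is the assumed
# package path; for illustration, fall back to a scratch folder so the
# snippet can run anywhere.
CP=/volume1/@appstore/CrashPlan
[ -d "$CP/log" ] || { CP=$(mktemp -d); mkdir "$CP/log"; }   # demo fallback
TARGET=/tmp/crashplan-log.$$
mv "$CP/log" "$TARGET"      # move the logs onto tmpfs
ln -s "$TARGET" "$CP/log"   # leave a symlink where the engine expects them
```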

    Reply
    1. eff_cee

      I thought the lack of sleep was because CrashPlan continues to open /volume1/@appstore/CrashPlan/conf/default.service.xml even outside the configured run-between times?

      Is this the config file where the run-between times are saved?

      Maybe CP has to check this file to see if it has been changed ?

      Regardless, it is a pain to have to cron crashplan to get round the issue. Did you get any further on this ?
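      By “cron crashplan” I mean a couple of lines like these in /etc/crontab (a sketch; I believe Synology’s crond wants tab-separated fields, and the script path is the one this package installs):

```
#min	hour	mday	month	wday	user	command
0	1	*	*	*	root	/var/packages/CrashPlan/scripts/start-stop-status start
0	6	*	*	*	root	/var/packages/CrashPlan/scripts/start-stop-status stop
```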

      Reply
      1. patters Post author

        No, for me it’s kind of a lost cause because Serviio also does this, even with library updates set to manual. Could be a more generic Java problem.
        I guess we could nag CrashPlan support about it, since someone reported that it used to sleep ok. However, they’re pretty consistent in stating that headless operation is not supported.

  15. msilano

    …and the plot thickens….

    The CrashPlanEngine script is indeed launching a single copy of CrashPlan; something within the jar file is triggering the additional launches.

    While this instance was behaving normally, there were another 30 processes spawned. Are we just seeing child processes spawned?

    This is an intel-based ds 412+.

    Thanks again for your help.

    -m

    bash-3.2# ./MSCrashPlanEngine start
    Starting CrashPlan Engine … Using standard startup
    JAVACOMMON /volume1/@appstore/java6/jre/bin/java
    SRV_JAVA_OPTS -Dfile.encoding=UTF-8 -Dapp=CrashPlanService -DappBaseName=CrashPlan -Xms20m -Xmx512M -Djava.net.preferIPv4Stack=true -Dsun.net.inetaddr.ttl=300 -Dnetworkaddress.cache.ttl=300 -Dsun.net.inetaddr.negative.ttl=0 -Dnetworkaddress.cache.negative.ttl=0
    FULL_CP /volume1/@appstore/CrashPlan/lib/com.backup42.desktop.jar:/volume1/@appstore/CrashPlan/lang
    TARGETDIR /volume1/@appstore/CrashPlan
    [05.05.12 14:00:05.974 INFO main root ] Locale changed to English
    [05.05.12 14:00:05.976 INFO main root ] *************************************************************
    [05.05.12 14:00:05.977 INFO main root ] *************************************************************
    [05.05.12 14:00:05.977 INFO main root ] STARTED CrashPlanService
    [05.05.12 14:00:05.980 INFO main root ] CPVERSION = 3.2.1 – 1332824401321 (2012-03-27T05:00:01:321+0000)
    [05.05.12 14:00:05.981 INFO main root ] LOCALE = English
    [05.05.12 14:00:05.983 INFO main root ] ARGS = [ ]
    [05.05.12 14:00:05.983 INFO main root ] *************************************************************
    [05.05.12 14:00:06.222 INFO main root ] Adding shutdown hook.
    [05.05.12 14:00:06.226 INFO main root ] BEGIN Copy Custom, waitForCustom=false
    [05.05.12 14:00:06.227 INFO main root ] NOT waiting for custom skin to appear in custom or .Custom
    [05.05.12 14:00:06.227 INFO main root ] No custom skin to copy from null
    [05.05.12 14:00:06.227 INFO main root ] END Copy Custom
    [05.05.12 14:00:06.239 INFO main root ] BEGIN Loading Configuration
    [05.05.12 14:00:06.365 INFO main root ] Loading from default: /volume1/@appstore/CrashPlan/conf/default.service.xml
    md5 Loaded.
    [05.05.12 14:00:06.601 INFO main root ] Loading from my xml file=conf/my.service.xml
    [05.05.12 14:00:06.713 INFO main root ] Loading ServiceConfig, newInstall=false, version=3, configDateMs=null, installVersion=1332824401321
    [05.05.12 14:00:06.714 INFO main root ] OS = Linux
    [05.05.12 14:00:06.926 INFO main root ] AuthorityLocation@29775659[ location=central.crashplan.com:443, hideAddress=false ]
    [05.05.12 14:00:06.929 INFO main root ] END Loading Configuration
    jtux Loaded.
    ./MSCrashPlanEngine: line 8: 25100 Killed /volume1/@appstore/CrashPlan/bin/nice -n 19 $JAVACOMMON $SRV_JAVA_OPTS -classpath $FULL_CP com.backup42.service.CPService

    Reply
    1. patters Post author

      I have just checked against a syno at my work that’s busy doing its first seed backup (ARM, not Intel), and I can confirm that it has only launched a single process. Since I don’t have an Intel machine I don’t think I’m going to be much use in figuring this out I’m afraid.

      Reply
      1. Richard

        What about the multiple instances on the Intel synos?
        (I’m holding off on the update because of these reports.)
        What is the impact on performance or use of the Syno or CrashPlan?

        Hope to hear soon!!

      2. msilano

        I’m leaning towards these being subprocesses as the total memory and CPU usage appears correct for one instance. The file CrashPlanEngine.PID (as per the launch script) contains 1 entry. And Crashplan listens as expected.

        Just my $.02.
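        One way to test the theory: on Linux, every thread of a process shows up as a directory under /proc/<pid>/task, and some ps implementations list each thread as though it were a separate process. A quick sketch (the PID-file path is an assumption based on the launch script; it falls back to the shell’s own PID so it runs anywhere):

```shell
#!/bin/sh
# Count the threads of the CrashPlan JVM. If the thread count matches the
# number of "processes" ps reports, they are just threads of one process.
PIDFILE=/volume1/@appstore/CrashPlan/CrashPlanEngine.pid   # assumed path
PID=$( [ -r "$PIDFILE" ] && cat "$PIDFILE" || echo $$ )    # fallback: this shell
THREADS=$(ls "/proc/$PID/task" | wc -l)
echo "pid $PID has $THREADS thread(s)"
```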

      3. patters Post author

        I don’t have an Intel syno so I can’t really help. However, there is no fundamental change to how CrashPlan is started in my newer package version (apart from the Intel glibc shim – which msilano has confirmed doesn’t cause this behaviour). So, is there a chance that this has been happening all along?

      4. Richard

        I did the backup 15 mins ago.
        No problem whatsoever ;-)

        No error messages etc…

        But I don’t have the feeling the real-time scan is working… although I don’t get an error message…

      5. patters Post author

        You can check whether real-time backup is working by opening the console and then changing or adding a file. You should see the console report it, and schedule it for the next backup in 15mins.

      6. Richard

        I tried that indeed, but it does not report it…
        Will try it again when getting back home.

      7. Richard

        Tried again..

        -Waited till the backup run finished
        -Added 10 documents
        -waited 45 minutes
        -no automatic selection scan :(

        I’ll stick to my hourly schedule… it works

        backup freq: 15min
        verify selection every: 1 hours

      8. Richard

        5 similar errors in thread 21, 25, 26, 32 and 33

        Exception in thread “Thread-21″ java.lang.NullPointerException
        at com.code42.jna.LinuxPlatform.isSymlink(LinuxPlatform.java:271)
        at com.code42.jna.inotify.JNAInotifyWorker.depthFirstTraversal(JNAInotifyWorker.java:89)
        at com.code42.jna.inotify.JNAInotifyWorker.depthFirstTraversal(JNAInotifyWorker.java:116)
        at com.code42.jna.inotify.JNAInotifyWorker.depthFirstTraversal(JNAInotifyWorker.java:116)
        at com.code42.jna.inotify.JNAInotifyWorker.run(JNAInotifyWorker.java:52)
        at java.lang.Thread.run(Thread.java:662)

  16. Jim

    I have a DS212+ that I installed this on, and I did the recent update the other day. It does not seem to hold the network settings: if I change it to use my max upload speed, the change doesn’t take; it seems to stay at the 300 setting.

    Reply
    1. Jim

      I discovered that having it generate thumbnails and do a backup at the same time does not work so well. Once I stopped generating thumbnails, the speeds for CrashPlan levelled out to what I expect them to be.

      Reply
  17. david

    Is there any way to provide better instructions? I have a new DS212j that I bought only for CrashPlan, and I am stuck on the Java part. The package installer says that I need to manually download it and place it in the “public” folder, but when I issue these commands:

    cd /
    find * | grep public

    Only to discover there is no public folder. I enabled SMB and figured out how to log in, again to discover there is no public folder. I would love to create the public directory… but where? /public? /volume1/@tmp/public? It is very hard to follow without detailed instructions, and I hope you understand.

    So if you could please provide more detailed instructions, like “place it in /var/tmp/public”, that would be great!

    Reply
    1. patters Post author

      Make a new top level folder in DSM’s File Browser and call it ‘public’. That will be shared automatically, so put the downloaded file in there.

      Reply
    2. Jim

      You need to make sure that the WebDAV service user (under system internal users) has access permissions to the public folder as well

      Reply
      1. patters Post author

        Really? Wasn’t a problem for me on a four bay syno at work, though I did add full access permissions for everyone – which is how this folder is defaulted on single bay NAS systems.

  18. David

    Does anyone know if this works with the DS212+ (512MB DDR3), or is 1GB of NAS RAM a firm requirement?

    Reply
    1. Jim

      I am using it on a DS212+ and it works great! However, make sure that when you are doing your initial backup or large backups to CrashPlan you are not taxing the CPU by making thumbnails for Photo Station

      Reply
      1. patters Post author

        Code42’s official requirement for CrashPlan is 512MB, but even that allows for very large backup sets. My package sizes the Java heap appropriately for the available RAM, in an effort to make sure it doesn’t attempt any kind of suicidal paging to disk while backing up. Given that most home users are constrained by bandwidth, I’m guessing it’s not practical for most people to back up the whole NAS. I’m only protecting around 60GB of mine, for which a Java heap of 192MB is perfectly adequate. However, at work I’m uploading 3TB of data using CrashPlan PROe from an RS411 with the same heap size. In fairness it is made up of mostly very big files, which probably reduces the overhead somewhat (master edits of videos).

      2. David

        Thanks so much — seems like a great home NAS/backup solution; the best combo I’ve seen. I’ll probably be back on this thread soon while setting mine up!

      3. Chris

        I don’t want to back up things off my DS212+, but I do want to back up to it. I’d want several computers backing up to it, maybe 1-1.5TB total. Anyone have experience with this? Any performance feedback would be great.

  19. Ulrik

    Just upgraded from a DS508 to a DS1812 last night, and now want to add CrashPlan from your repo. But it fails during installation with “There was a problem downloading CrashPlan_3.2.1_Linux.tgz from the official download link, which was…” (the link is correct). What could be wrong?

    Reply
    1. patters Post author

      It just uses wget so maybe there was a temporary glitch with the CrashPlan.com website. I just checked and it works for me now. Is it still failing?

      Reply
      1. Ulrik

        Yes, it is still failing. Is there a log somewhere to give me an idea of what goes wrong? I can download the file manually – also with wget from an SSH connection to the DS1812.

      2. Ulrik

        OK, I got it installed now. The install failed earlier because of a disk expansion. But now I have another problem. I have stopped and restarted the CrashPlan package on my Syno, and edited the ServiceHost line in ui.properties, but I cannot connect with my new CrashPlan account.

  20. Rogier

    I’ve installed the Java 7 package on my DS411j in order to be able to use CrashPlan. Everything is working fine; however, after some hours Java is consuming all the CPU power. The result is that I can’t connect to the CrashPlan installation any more and the NAS becomes very slow. Currently the only remedy is to restart the NAS once a day. Does anyone else seem to have the same issues? Does anyone know how to solve this? Any help is appreciated!

    Reply
    1. MJ

      I had the same problem on mine. I’m thinking it just doesn’t have enough RAM to handle the client. I plan to upgrade to a DS411 or 411+

      Reply
  21. Matthew

    Hi, we are currently trying to set up CrashPlan PROe on a DS1512+ and we can’t seem to get the client installer to find our server on the Synology NAS box.

    If anyone has gotten this working and could lend some assistance we would gladly pay a consulting fee.

    Mathew@northbaytek.com

    Reply
    1. patters Post author

      The naming is a little confusing. My CrashPlan PROe Synology package is only a PROe *client* (as titled in Package Center) – i.e. you’d still need to pay a provider for storage hosting, then use a client on a PC to get the syno to connect to that provider. That’s how I’m currently using it, connecting to http://crashplanuk.com. You don’t need to hack any files; you simply enter the connection URL into the GUI and log on when the client first connects to your syno.

      PROe server is what the storage provider would run on their hardware.

      Reply
      1. DJ Forman

        I thought you could have the ProE server running on the Synology box and use that as the storage provider?

      2. patters Post author

        You might be able to, but you would have to buy master keys from CrashPlan which I would expect to be quite expensive given that it’s basically aimed at the datacenter market.

      3. DJ Forman

        Thanks patters, we bought 40 perpetual licenses with 1 year of support. But of course they won’t support this implementation, nor will Synology. Not directly anyway.

        A friend of mine has this same setup deployed on his Synology, but he had to get help from an outside source to get it completed. I’m hoping to get him to send me the instructions he followed.

        In fact his implementation is more complicated than mine because he uses two Synology boxes, one on each seaboard, that replicate to each other.

        Fingers crossed. If you are into a quick consulting gig to help us get this set up, that would be great. Otherwise we’ll stumble along trying to make it work.

      4. patters Post author

        I see. I could be interested, however it’s already 22:38 here in London and I’m only just leaving work! Backup Exec 2012 migration and iSCSI issues…
        If by some miracle I’m feeling up to it in the next few hours I’ll let you know.

        The only problem about outside assistance for CrashPlan implementation is that it will almost certainly fail and need additional work once a new version is released – which is why I made the packages. I guess if it’s not complicated I could end up making a PROe server package. Thing is, I can’t experiment without a master key.

      5. DJ Forman

        Backup Exec issues, bleh. Sorry for your pain. I moved to Acronis and/or StorageCraft and never looked back. I’m open to tomorrow morning your time also. I’ll stay up late on my side. I’ve got several clients in the UK and Germany that I need to work with tonight so I’ll already be up.

        I’m going to try to install the ProE server manually as if the Synology were a “real” Linux server. Supposedly this should work. By the way, I’m not cheap, I’m sure I can make some consulting worth your while.

  22. DJ Forman

    We got the CrashPlan PROe headless client installed on a Synology NAS. It looks like it’s listening on 4243, but no matter what I try I can’t get the CrashPlan Windows client to connect. It just spits out an error about not being able to connect to port 443 on the IP of the NAS.

    I changed ui.properties, but I’m not sure if I edited it properly. It’s just a single line with fields separated by the # sign. I added two additional lines at the bottom to point to the server and port. No luck.

    Could use some help. Happy to pay for some consulting. I need this up and running tonight so I can deploy 40 users. I’m pretty sure it’s just a minor mis-config.

    djforman1(at)yahoo.com

    Reply
    1. patters Post author

      Hmm this comment above isn’t threaded properly so it’s going to be confusing to read…

      As per the posts above, PROe client can’t really be used like this. I would guess that to achieve what you’re aiming to do, you might be able to use the normal CrashPlan package on the Syno then run the normal CrashPlan clients on your 40 computers. Then look at the ‘backup to a friend’ option, taking the friend code from the syno’s CrashPlan instance.

      I would guess that there is an upper limit on the number of friend connections though, and 40 is likely to be on the high side.

      Reply
  23. OjaSapNL

    Hello,

    For everybody who hates that their Synology hard drives don’t hibernate any more, I have created a solution. First I was using the crontab approach, but that didn’t fit: if a large backup doesn’t fit inside the time window it won’t be completed, and with a large one your hard drives stay online too long.

    I created a Python script that reads the CrashPlan logfile every 5 minutes. In the script you fill in the time that the CrashPlan service needs to be started. It starts the service, checks every 5 minutes whether the backup has completed, then sends an e-mail with the relevant lines of the logfile and stops the CrashPlan service. I’m currently testing it. If anybody is interested, tell me and I will give you the script.
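    The core logic is simple enough to sketch in shell as well (my script is in Python; the log path, the completion message text, and the stop command shown here are all assumptions):

```shell
#!/bin/sh
# Sketch of the watcher described above: poll a CrashPlan log until a
# completion message appears, then run a finishing action (e.g. mail the
# log tail and stop the package). Message text and paths are assumed.
watch_backup() {
    log=$1; interval=$2; on_done=$3
    while :; do
        if grep -q "Backup completed" "$log" 2>/dev/null; then
            eval "$on_done"
            return 0
        fi
        sleep "$interval"
    done
}

# Real use might look like (assumed paths, poll every 5 minutes):
# watch_backup /volume1/@appstore/CrashPlan/log/history.log 300 \
#     '/var/packages/CrashPlan/scripts/start-stop-status stop'

# Self-contained demo against a scratch log:
DEMO=$(mktemp)
echo "I 05/05/12 03:00AM Backup completed: 42 files" > "$DEMO"
watch_backup "$DEMO" 1 'echo backup done'   # prints "backup done"
rm -f "$DEMO"
```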

    OjaSapNL

    Reply
    1. TopL

      Hey OjaSapNL,

      I’m interested. I’m whittling down why my NAS isn’t hibernating and isolated it down to CrashPlan. If you can provide details on how your script works, that’ll be great!

      You can ping me at (please replace the splats accordingly): tl.2012 * xemaps * com

      Thanks!
      TopL

      Reply
  24. Ronald Rademaker

    Hi,

    How can I check the status of CrashPlan from a command shell? Which location/command should be used?

    I’m confused, as my old install also gives me this error:

    DiskStation> ./crashplan status
    Could not find JAR file /opt/crashplan/bin/../lib/com.backup42.desktop.jar

    The new one using this package can be stopped and started in Package Center without issue, and backup is working properly.

    Thx,
    Ronald

    Reply
    1. patters Post author

      From memory, I think the CrashPlan launcher script (which my package’s start-stop-status script invokes) expects you to be in the program folder since it contains relative paths. I don’t think their launcher has a status function so I would suggest you try this:
      cd /volume1/@appstore/CrashPlan
      /var/packages/CrashPlan/scripts/start-stop-status status && echo running || echo stopped

      As you can see, packages themselves are in the @appstore folder, but the metadata from Package Center and the package scripts end up in /var/packages.

      Reply
  25. Darcy

    Fantastic work creating this package! I’ve been running some test backups on a 30-day trial, and it’s been great. Will sign up shortly with your affiliate link :)

    Reply
  26. Andrew Stuckey

    Hey Patters. Thanks so much for your work on this. Hope you can help us out with our setup.
    Will definitely give a donation if you can help us get our backups working ;). There you have that in writing!

    We bought a DS411j as our main data storage and file server. We also bought a Drobo (DAS version) to use as remote backup of the NAS using Crashplan. The plan is to seed the initial backup with the Drobo connected directly to the NAS, then move the Drobo to a friend’s house and continue with incremental backups over the internet.

    So far we’ve managed to install your CP package on the DS411j and have successfully run some test backups to the Drobo while connected directly to the NAS via USB. To do this we’re running the CrashPlan client on an iMac on the same LAN as the NAS by changing the host IP.

    Then in order to test the seeded backup over the network we unplug the Drobo from the NAS and connect it to the iMac. However CrashPlan won’t allow us to select a network drive as the backup destination. In fact, no networked machines or drives even appear in the list. We’re only able to select a local folder on the NAS or the Drobo when connected directly to the NAS.

    Are we going about this the wrong way? Have we missed a step?
    Very disheartening to have come this far only to be stumbling at the last hurdle.

    Any help would be very appreciated.
    cheers

    Reply
    1. patters Post author

      You would need to get the iMac CrashPlan GUI client connecting to the Backup Engine instance running on the iMac (so undo your conf/ui.properties modification). This iMac would then be acting as a separate CrashPlan setup (like the one you want to run at your friend’s house). I haven’t tried this, but I would imagine that you would get the friend code from that setup.

      Then I think you would plug in (on the iMac) the Drobo you already seeded from the NAS, and in the advanced backup destination options (I’m not near a CrashPlan GUI to check) you can set the actual folder on disk where friends’ backups are saved. I think you just attach the seeded folder that’s on the Drobo (it should appear as a local drive, since it’s DAS and it’s plugged into the iMac).

      Then I’m guessing that you would once again edit ui.properties to switch the GUI client back to connecting to the Synology, and add a backup destination using the friend code. Hopefully it should just be aware of the existing seed backup, since the GUID will be recognised. This is all untested though. I personally just pay for CrashPlan+.
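      For reference, the ui.properties change is normally just one line in conf/ui.properties on the machine running the GUI, uncommented and pointed at the engine you want to manage (the IP here is a placeholder):

```
# conf/ui.properties on the computer running the CrashPlan GUI
# Point the GUI at the Backup Engine on the NAS instead of localhost:
serviceHost=192.168.1.50
# Default engine port; leave commented unless you changed it:
#servicePort=4243
```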

      Reply
      1. Andrew Stuckey

        Thanks. We considered CrashPlan+ but with > 4TB of data to backup and being in Australia we unfortunately can’t use CrashPlan’s seeding service, which makes it very impractical. To do the initial backup over the net would probably take several months!

        I’ll play around with the CP client on the iMac and see if we can get it to work… stay tuned.

      2. Andrew Stuckey

        Thanks Patters. We seem to have got it working!
        I seeded a small backup, then moved the external drive to a second Mac (on our CP account) and added the backup archive to the Mac through the CP client. It instantly recognised that the backup had come from the NAS and prompted to start synchronising. Great. Let’s hope it works over the internet to a friend’s computer once we’ve finished the initial seed.

        Resuming the seed backup…

        New issue…

        It’s backing up at ridiculously SLOW speeds and I’m having trouble isolating the bottleneck.

        Our setup is the DS411j running 4 x new 3TB Seagate Barracuda 7200rpm drives connected to a Gigabit Apple Airport Extreme. I’ve stopped all packages except CrashPlan and your Java plugin. There are also no Time Machine backups or any other processes running (as far as I’m aware).

        With the backup drive directly connected to the NAS via USB 2.0 it’s transferring data at an agonisingly slow 6-9 Mbps (a far cry from the theoretical 480 Mbps of USB 2.0).

        With the backup drive connected to the Mac (which is then connected to our Gigabit Airport Express by Cat6 cable) we get even slower speeds of around 3-5 Mbps or an average of roughly 600 KB/s looking at the DSM resource monitor.

        So not even 1MB/s which is completely useless and a very far cry from the 29-85 MB/s benchmarks posted on the Synology site http://www.synology.com/products/performance.php?lang=enu.

        I haven’t done any other performance tests at this stage other than via CrashPlan, so I don’t have anything to benchmark it by. But do you have any immediate suggestions? With no other processes running could it be an issue with CrashPlan?

        Could it possibly be an AFP or SMB issue? Although this would not explain the terrible speed when connected directly to the NAS usb port.

        There also appears to be a common speed issue between Synology and Macs as documented here (http://forum.synology.com/enu/viewtopic.php?f=14&t=34172&hilit=SLOW). Maybe this might shed some light on the issue.

        Any help specifically with identifying potential problems running CP on the NAS would be much appreciated.

        cheers
        Andrew

      3. patters Post author

        Try disabling de-dupe and compression in the advanced backup options. For the ARM CPUs I think it’s a big performance hit. I hate to say it, but I think your DS411J won’t have enough RAM for such a large backup set. The default CrashPlan heap is 512MB and for backing up several terabytes, Code42 support have advised several people to use 1024MB. Only the Intel Synology models have this much RAM (Expandable to 3GB I think).

      4. Andrew Stuckey

        Have now tested the external drive using FW800 and unfortunately speeds are still the same dreary 3-5 Mbps, so we can probably rule out USB as the bottleneck.

        For all tests so far, NAS CPU has only averaged 40% and RAM 60%.

      5. Andrew Stuckey

        Where are the advanced backup options? In the DSM or CrashPlan? Can’t find them.

        The RAM issue you’ve mentioned with the ARM machines seems stupid. Does this only apply to CrashPlan? And if so, why does CP suck so much RAM? Or are most other backup applications the same?

        There must be a viable remote backup solution for the Non-Intel models. If not Crashplan then what else?

      6. patters Post author

        CrashPlan GUI -> Settings tab -> Backup -> Advanced settings.

        CrashPlan uses RAM for keeping track of files’ checksums, block hashes, and other metadata I would guess. With several terabytes in your backup set, it’s not difficult to see how even a pretty efficient engine would need a fair amount of RAM. There is a cloud option among the official Synology apps (HiDrive), though I have seen people commenting on its lack of reliability. I have amended the Notes section to draw attention to the RAM issue in case people are shopping for Synos with the express intention of using CrashPlan for large backup sets.

        I agree it is a shame that the ARM models don’t have a DIMM slot, but then they are very cheap embedded systems that are primarily designed to do one thing, serve files – not run applications. That’s a bonus.

      7. Andrew Stuckey

        Meh… the Advanced settings are disabled on the Free plans. Seems like a CP+ feature. oh well. But you’re right about the RAM issue, as soon as I launch CP my whole system chokes. “they are primarily designed to do one thing, serve files… not run applications.” Wish I knew this before I ordered!

        So what I’m thinking now is to buy a Syno Intel NAS with more RAM, sell the brand-new Drobo and keep the DS411j as the offsite backup machine. Which Syno model would you recommend buying? It’s not clear from the specs on the Synology site which models are Intel vs ARM. We’ll need something with enough grunt to run 3-4 applications simultaneously, including Time Machine and offsite backups, without compromising NAS performance. Clearly the DS411j is not up to the job, which is disappointing. Will it be fine as a backup unit?

      8. Andrew Stuckey

        Another thought… when did Synology start using Intel processors? If we don’t want to spend a heap, would it be cost-effective to buy a 2010-11 model with the power we need? Happy to consider older models.

  27. Samuel Manso (@samukas_m)

    Hello patters! Recently I had a problem with my crashplan installation not backing up and I was told by crashplan to:
    “2. Edit the below line in /usr/local/crashplan/bin/run.conf
    3. Find this line (near SRV_JAVA_OPTS): -Xmx512m
    4. Edit to something larger such as 640, 768, 896, or 1024. E.g.: -Xmx1024m”

    I did that, and I also changed the value set in “/volume1/@appstore/CrashPlan/syno_package.vars” as specified in your post.

    It seems to have worked, but I’m wondering if it was really necessary to change the value in both, or if the “vars” file alone would be enough.

    Anyway… a little donation coming your way (wish I could give more)… but it’s well deserved! Thanks to your package I’m backing up 3TB online to crashplan.

    Reply
    1. patters Post author

      I’m assuming you do actually have more than 1024MB of RAM in your syno? If not, performance may get pretty bad as it does loads of paging to disk. Are you running on Intel?
      Oh, and you only need to change the value in syno_package.vars – that value will override the one in run.conf. This setting will survive a package version upgrade.

      Reply
      1. Samuel Manso (@samukas_m)

        I switched the value to 768 for the moment, just to try it out (I also turned off all my other packages besides CrashPlan).
        I’m running an Intel syno (DS1511+) with only 1GB, but I already bought an extra 2GB, so I’m thinking I’ll give either 1 or 1.5GB to CrashPlan and the rest to the system. What do you think? Also, to give 1.5GB, should I write 1536 in the file? I’m thinking that’s the right number.

        Glad to know that I only need to change the value in one file, and that it will survive upgrades :)

      2. Samuel Manso (@samukas_m)

        Definitely worth the upgrade! :D
        I’m now running with 3GB of RAM, changed the value to 1536 on syno_package.vars and at this exact time, “java” is using 900MB of resources. Of course 1GB was not enough :)

      3. Charlie

        Can someone please advise on how to change the values in the .vars file? When I vi it, it appears to be a blank file. I am no command-line whiz, but can “eventually” get around. I just don’t know how to add/edit this line, because I am restarting the service about 3x per day (initial 2.5TB backup on a DS1010+ w/ 3GB RAM).
        Thanks,
        Charlie-

      4. patters Post author

        It shouldn’t be empty. Are you logged in as ‘root’? It has the same password as your admin account. That file should look something like this:
        ~$ cat /volume1/@appstore/CrashPlan/syno_package.vars
        #uncomment to expand Java max heap size beyond prescribed value (will survive upgrades)
        #you probably only want more than the recommended 512M if you're backing up extremely large volumes of files
        #USR_MAX_HEAP=512M

        LIBFFI_SYMLINK=YES
        MANIFEST_PATH_SET=True
        ~$

      5. Charlie

        Hi Patters,
        I tried ssh and telnet as root, and when I vi that file it is empty. I quit without saving and tried the cat as you show in your reply, and I get “no file exists…”.
        I know it is running and JRE6 is installed (although it says stopped with no way to start it). Is something else improperly installed, perhaps, or could it be in another path?
        If this helps: I was running CP+, then I uninstalled that service and installed the CP PRO version. I upgraded my CP account to try to get more throughput. It seems to be better (when it runs).
        Thanks,
        Charlie-

      6. patters Post author

        Have you tried creating that file by running:
        echo USR_MAX_HEAP=768M > /volume1/@appstore/CrashPlan/syno_package.vars

        If that doesn’t work, perhaps your NAS isn’t using /volume1 for its appstore. Try:
        ls /
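        A general way to avoid guessing the volume is to glob every volume for the app directory rather than hard-coding /volume1. The sketch below demonstrates the technique on a throwaway tree under /tmp; on the NAS you would run `ls -d /volume*/@appstore/CrashPlan*` instead:

        ```shell
        # Demo tree standing in for a NAS whose apps live on volume2, not volume1
        mkdir -p /tmp/nas-demo/volume2/@appstore/CrashPlan
        # Glob all volumes instead of assuming /volume1
        ls -d /tmp/nas-demo/volume*/@appstore/CrashPlan*
        ```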

      7. Charlie

        Patters,
        Well, that yields a “directory doesn’t exist” error:

        DiskStation> echo USR_MAX_HEAP=768M > /volume1/@appstore/CrashPlan/syno_package.vars
        -ash: can't create /volume1/@appstore/CrashPlan/syno_package.vars: nonexistent directory
        DiskStation> ls /
        bin initrd lost+found sbin var
        dev lib mnt sys var.defaults
        etc lib64 proc tmp volume1
        etc.defaults linuxrc root usr
        DiskStation> cd ..
        DiskStation> ls
        bin initrd lost+found sbin var
        dev lib mnt sys var.defaults
        etc lib64 proc tmp volume1
        etc.defaults linuxrc root usr
        DiskStation> cd /volume1
        DiskStation> ls
        @afpd.core @postfix DataFiles aquota.user music
        @appstore @spool Documents crashplan photo
        @autoupdate @tmp Photos downloads public
        @database ATV1 Time Machine homes video
        @eaDir ATV2 aquota.group iTunes
        DiskStation>

        But, I cannot cd to @appstore – it says not found, so I don’t know if the CrashPlan directory is beneath it.

        Thanks,
        Charlie-

      8. cfpsystems

        FIGURED IT OUT!!
        Had a buddy look at it and (disclaimer: I am not a command-line wiz) found that I was trying to change directory with a “/” in front of @appstore.
        Once in, I found that the file I needed to update was under the directory CrashplanPro. Heap size adjusted and enabled – will report back after it’s run a while.
        Thanks,
        Charlie-

      9. patters Post author

        Glad you got it figured out. I forgot that you had mentioned it was CP Pro, and I have been quite slow to get back to you – been very busy in real life.

  28. Adam

    Great package, thanks for putting it together! I’ve run across an issue where after uninstalling the package from my DS1511 using package center, the package is no longer listed in the repository for installation. Any ideas how to make it reappear?! (This is not the window size bug in DSM 3.2, I only see three packages to install)

    Reply
    1. patters Post author

      Sorry, I had just updated the package and my NAS went a bit weird earlier on (no Java apps would start properly). I took the new packages down just in case. However removing Java, rebooting and re-installing Java fixed it so they’re back on the repo now.

      Reply
  29. Scott

    Thanks for the package. I was looking at going through your link to try out CrashPlan PRO with my 1812+, but when I followed your instructions to add your package to Package Center, the CP server is showing version 3.2. Is this upgradeable to the new v4.0?

    Reply
      1. Scott

        Sorry, maybe I was mixing it up with a PROe e-mail I got at work – 3.2 looks like the current server version for the ‘business’ hosted version I was looking to install on my Synology at home.

  30. Pingback: Ingmar Verheij – The dutch IT guy » Backup Synology to CrashPlan Pro (on Dutch server at Pro Backup)

  31. Jemima

    Hi Patters – This is great, and I hope I can get it to work. I’ve installed Java 7 per your instructions, but the package still gives me a ‘Java is not installed or correctly configured’ error. I’ve tried rebooting the NAS, but still no joy. I’m running a DS209. Can you offer any pointers?

    Reply
    1. Jemima

      Some more info… I am not convinced Java is running correctly – I followed the instructions and the package install completed, but I have the following in the log window of the package:

      /var/packages/java7/scripts/postinst: line 28: java: not found

      Systems installed locales:
      C
      en_US.utf8
      POSIX

      JAVA_HOME=/volume1/@appstore/java7/jre
      TZ=Europe/Brussels

      Reply
    2. Jemima

      Yup, forgive me. Java 7 not working for some reason, but I’ve now installed Java 6 without issue and package now installs :-)

      Reply
  32. John K

    So my DS411J clearly isn’t up to the task of running CrashPlan. I’m curious what models other people have had success with (I will probably eventually need to handle at least 2TB+ of data). I am looking at the DS411+II, which comes with 1GB of RAM; it’s $200 more to step up to a model that has an extra RAM slot, and more than I really want to put into this just for my personal needs.

    Reply
    1. Joe

      Hi John,

      I’m happily running Crashplan on a 6TB DS411J. I’ve had to tweak the memory usage (run.conf) to 1024M. As you know, the DS411J has only 128M of physical memory so there’s a performance hit with paging.

      Reply
  33. Pingback: Anonymous

  34. Pingback: Synology DiskStation und CrashPlan | Alexander Benker

  35. Pingback: Confluence: Knowledge Base

  36. Ulrik

    Problem: CrashPlan package indicated an update was available, and I let it upgrade. It failed when downloading the installer from the Code42 website, and the result was that the package was now removed! So I downloaded it manually, placed the installer in the \public share, and then installed the package again. This time it installed successfully, BUT when I then connected the CrashPlan client, the Synology was listed as a new CrashPlan engine (with green indicator) and the old engine (with the same name) had a grey indicator button. Obviously my CrashPlan license was also gone. It now says I will lose all of my backups if I transfer the license to the new CrashPlan engine… what to do???

    Reply
    1. patters Post author

      You will be able to ‘adopt’ the old computer record which will recover the licence and the existing synced backup data. Look at the link about adoption in the notes section of my blog post.

      Reply
  37. Chris

    Can people please post what Syno they have, how much data they are backing up and if they would recommend that setup?

    I am currently trying to decide between the ds212 and the ds212+ for my house. I have the ds1511+ running proe at work and love it. Thanks patters!

    Reply
    1. DS411j

      I have a DS411j and I would not recommend it for large backup sets. I have 1.5TB in about 200K files. It is working thanks to the amazing work by patters, but I have to acknowledge that it is a bit slow. First, it takes about 2 days to scan the files (and rescans happen on a regular basis). Second, the upload speed is about 5 to 6GB/day, while it was about 10 to 12GB/day on my computer.

      I guess it would work very well for small backup sets (<100GB), but for large sets I would definitely recommend a more robust version (the xxx+ series)

      Reply
    2. Dave

      I have a DS110J and am backing up about 30GB without any problems. It is quite slow, but I’m guessing that is CrashPlan related? Uploading at about 300–1,000kb/s on a 10Mb/s upload connection. This script is awesome: I bought my NAS for £80, a 2TB HDD for £85 and a 4-year CrashPlan subscription for £90. All in all this is just what I need to back up my critical data. Not sure it would be so great for backing up TBs of data, but for the majority it’s perfect.

      Thanks Patters.

      Reply
    3. Ulrik

      I have a DS1812, and am trying to back up almost 8TB of data. As the transfer averages about 0.5Mbit/sec, it will take 5-7 years before the backup is finished. Maybe it is time for me to find another backup solution…. ;-)

      Reply
  38. kloveland

    I just purchased a 1511+ and attempted to run the package. Everything looks like it installed, but I cannot connect to it with the desktop client.

    The only two clues I have are:

    1) When I run netstat I receive the following results.

    DiskStation> netstat -an | grep ':424.'
    tcp 0 0 127.0.0.1:4243 0.0.0.0:* LISTEN

    2) When I look at the engine_error.log in /opt/crashplan/log, I see the following:

    java.lang.UnsatisfiedLinkError: /opt/crashplan/libmd564.so: /opt/crashplan/libmd564.so: ELF file OS ABI invalid
    at java.lang.ClassLoader$NativeLibrary.load(Native Method)
    at java.lang.ClassLoader.loadLibrary0(Unknown Source)

    Neither of these seems right, but I don’t know what to investigate from here. Any hints?

    Reply
    1. patters Post author

      No one has reported that error yet, but that doesn’t necessarily mean they’re not getting it. I don’t have an Intel Synology so I can’t troubleshoot that myself, I’m afraid. From what I remember when CrashPlan 3.2 first introduced libmd5, it ought to fall back to using the Java MD5 function if the native library can’t be loaded, so CrashPlan should still work for you.
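      For anyone wanting to dig into an “ELF file OS ABI invalid” error themselves: byte 7 of an ELF file header is the OS/ABI field (0 = UNIX System V, 3 = GNU/Linux), and a value the loader doesn’t expect produces exactly that message. A quick sketch of reading it with od, demonstrated on /bin/sh since that exists everywhere – on the NAS you would point it at /opt/crashplan/libmd564.so instead:

      ```shell
      # Byte at offset 7 of the ELF header is EI_OSABI (0 = System V, 3 = GNU/Linux).
      # /bin/sh is used here as a stand-in for the suspect library.
      od -An -tu1 -j7 -N1 /bin/sh
      ```

      Comparing that byte between the shipped x86 library and a known-good binary on the same box shows whether the ABI field is the culprit.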

      Reply
      1. kloveland

        Your package says I need to stop and restart the service, but I needed to reboot the box. Initially I had tried to install your package, and when it did not work, I tried the manual steps. This fouled things up further. I tried uninstalling everything and re-installing, but to no avail. Since this was a new DiskStation, I reinstalled the OS and installed ONLY your package. Restarting the service still did not solve it, but I restarted the box and it ran great.

        For whatever reason, it did not seem to run the post-install script when I simply stopped and started the service.

  39. pitcher

    I am using this great package on a DS209. Nice work!
    I have one question.
    My diskstation is segmented with security groups.
    When I want to select the folders to back up, some folders are not visible due to the security settings.
    When I add the “users” group or give “others” read/write permissions, the folder becomes visible for backup.
    Which specific user needs to have permissions on this folder?

    How to do this?

    Thx

    Reply
    1. patters Post author

      I’m not sure – on mine the crashplan user seems to have read access to everything, because it’s browsing stuff on the local filesystem, not via network shares. It seems to be the way the security is set up on Synology.

      Reply
  40. pitcher

    Hello,
    when running livedrive from a computer there will be a log-file for changed files in the following directory : c:\ProgramData\CrashPlan\log\backup_files.log.0

    Is there a logfile available on the synology and where?

    Thx

    Reply
    1. patters Post author

      The log file I present in the Package Center UI is the history log. You can find all the other logs in /volume1/@appstore/CrashPlan/log.
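      As a rough sketch of the same technique the Windows example above uses, the per-file backup log can be tailed on the NAS too. This demo mocks the log under /tmp so it is self-contained; on the NAS the directory would be /volume1/@appstore/CrashPlan/log (volume may differ), and the file name backup_files.log.0 follows the Windows example quoted above:

      ```shell
      # Mock log directory standing in for /volume1/@appstore/CrashPlan/log
      mkdir -p /tmp/cp-log
      echo 'example backup entry' > /tmp/cp-log/backup_files.log.0
      # View the most recent entries (use tail -f to watch it live)
      tail -n 5 /tmp/cp-log/backup_files.log.0
      ```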

      Reply
  41. h-c

    Hello!

    I love your great package on my DS212+ !!! :)
    But for a few days now it has been using 100% of the CPU (although I configured 30%) and restarting very often (every 5-30 minutes) …
    I didn’t change anything … so I can’t explain why this happens …
    Any ideas?

    Thank you very much!

    Reply
      1. h-c

        my selection is ca. 850GB, and I had already uploaded 836.9GB without problems.
        (The description above says that 500MB of RAM is OK for 2TB backups)
        I reinstalled CrashPlan on my DiskStation yesterday and connected to my backup … but that did not work …

      2. h-c

        And this reinstall seems to have deleted my 836.9GB … :(((
        It now says “initial backup not complete” / “Space used: 0MB”

      3. patters Post author

        You upgraded from what? A previous version of my package? Or a manual install? You should have been able to adopt the old backup records regardless. If it wiped your data, you must have answered the first question in the CrashPlan GUI wrong (the “Is this a new computer” one).

  42. Ryan

    I just installed Java embedded onto my own personal 1511+ after installing it on a client’s NAS and setting up their CrashPlan PROe client successfully; however, I cannot get Java to even start on my 1511+. Java is version 1.6.0_32-009.

    Reply
      1. Ryan

        Okay, fair enough. I missed that part on the 1512+ I set up for the client. However, I cannot get CrashPlan ProE to start on my 1511+ and it is running on my client’s 1512+. It just says “Stopped” and when I click start nothing seems to happen. I cannot connect to it remotely either.

      2. Ryan

        More info here: it appears the CP PROe installer is having an issue with the start-stop-status script at the end of the install. I don’t get a log folder in the CrashPlan PROe app when I click on More info and then Log. BTW, this is running v010.

      3. Ryan

        Even more info, I completely reformatted the 1511+ back to factory defaults and still get the same issue with only Java and CrashPlan ProE client installed.

      4. MD Sharma (@drmdsharma)

        Patters, kudos for this work mate. On a DS1512+, Java SE for Embedded v6 installs fine as per your instructions and the package install for 3.2.1-010 executes without errors; however the CrashPlanEngine won’t start at all.

        Here is an ls of /volume1/@appstore/CrashPlanPROe/

        drwxr-xr-x 3 crashpla root 4096 Jul 7 17:14 .
        drwxr-xr-x 6 root     root 4096 Jul 7 17:14 ..
        drwxr-xr-x 2 crashpla root 4096 Jul 7 17:14 bin
        -rw-r--r-- 1 crashpla root  239 Jul 7 17:14 install.vars
        -rw-rw-rw- 1 crashpla root 5702 Apr 29 12:30 lib
        -rw-r--r-- 1 crashpla root  217 Jul 7 17:14 syno_package.vars

        going into bin and executing the engine gives this error message:
        ./CrashPlanEngine
        Could not find JAR file ./../lib/com.backup42.desktop.jar

        and clearly the JAR file does not really exist, as the lib entry in the listing above is a file, not a folder.

        any tips?

  43. MD Sharma (@drmdsharma)

    Yo Patters buddy..

    RE my earlier message.. did a bit more testing.. turns out that the cpi file not unzipping is a big part of this puzzle.. if I unzip it manually then the CrashPlanEngine can be started manually.. and it even connects to the backup service etc.. but the joy does not last for long.. the “package” cannot be started or stopped from the synology GUI controls.. something to do with the start-stop script perhaps..

    anyway.. can you help fiddle with this buggy install for the PROe client install? the regular and PRO installs work fine..

    Reply
    1. patters Post author

      Code 42 had incremented the version of the CPI package inside (from 3.2.1 to 3.2.1.2) since I had last released a syno version. I have amended the package script to use a wildcard on the CPI archive name now so this sort of thing won’t cause a disruption in future.

      Reply
      1. Ryan

        Just wanted to let you know that the 011 version seems to have fixed my issue above as well. Will donate in a few. Thanks for your time! Your support is much better than Code 42’s own support.

  44. Joe

    Great work. Quick question. After you install the package, can you delete the installer file from public? Or do you have to leave it there?

    Thanks

    Reply
