Author Archives: patters

Bliss album art manager package for Synology NAS

bliss-UI

 

Bliss is a Java application written by Dan Gravell which can manage file names, tags and album art for your music collection, and which can also enforce their consistency. It is designed to be left running once installed so that albums you add later will have these same policies applied to them automatically. It supports a wide range of music formats, and effortlessly deals with very large collections – apparently even ones containing quite obscure recordings. My own collection didn’t really put this to the test, since it doesn’t contain bootlegs, live sets or rarities, and I had already obtained cover art for it (from back when CoverFlow first graced the screen of my iPhone 2G).

I could see from referrals to this blog that people were asking Dan for a Synology package which he didn’t have the time to investigate, and so I thought that it would make an interesting little project, especially since a NAS is the ideal device to run Bliss on. Although there was already a Howto post on the Synology forums, that guide only really covered getting the basic functionality of Bliss up and running – it was missing the best bits: the filesystem watching and audio fingerprinting features. These depend on natively compiled binaries, which Bliss doesn’t include for ARM or PowerPC CPUs. Getting these working provided precisely the sort of challenge I like. Not only were they difficult to compile, but getting them integrated into the various OSGi ‘bundles’ that make up Bliss was quite involved too.

Bliss uses an open source library called Chromaprint, itself part of the wider Acoustid project. The aim is to scan an audio file, produce a fingerprint of the sound, and then compare this against an open online database such as MusicBrainz.org to identify music regardless of the compression codec used. Its author Lukáš Lalinský explains how it works.
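To give a feel for what the fingerprinting step produces, this is roughly what running Chromaprint’s standalone fpcalc tool against a track looks like (illustrative file name, fingerprint truncated; the exact fields vary between Chromaprint versions):

fpcalc "01 - Example Track.flac"
FILE=01 - Example Track.flac
DURATION=214
FINGERPRINT=AQADtEqkRIkkrQ...

The FINGERPRINT string is what gets submitted to the online database for matching, which is why the result is independent of the codec used to encode the file.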

 

Synology Package Installation

  • In Synology DSM’s Package Center, click Settings and add my package repository:
    Add Package Repository
  • The repository will push its certificate automatically to the NAS, which is used to validate package integrity. Set the Trust Level to Synology Inc. and trusted publishers:
    Trust Level
  • Since Bliss is a Java application, you will need to install one of my Java SE Embedded packages first (Java 7 or 8) if you have not already done so. Read the instructions on that page carefully too.
  • Now browse the Community section in Package Center to install Bliss:
    Community-packages
    The repository only displays packages which are compatible with your specific model of NAS. If you don’t see Bliss in the list, then either your NAS model or your DSM version is not supported at this time. DSM 5.0 is the minimum supported version for this package, though you will need DSM 6.0 or later for audio fingerprinting support.
  • When the Bliss package is running you can manage it via its icon in the main DSM application menu, opened with the button in the top left corner:
    bliss-webui
 

Package scripts

For reference, here are the package scripts so you can see what the package is going to do. You can get more information about how packages work by reading the Synology 3rd Party Developer Guide.
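As an aside, an .spk package file is just a tar archive, so you can inspect one before installing it to see where these scripts fit in (a rough sketch with a hypothetical file name; exact contents vary by package):

# list the contents of a Synology package - it is a plain tar archive
tar -tvf bliss_example.spk
# expect to see roughly: INFO (package metadata), package.tgz (the application payload),
# and a scripts folder holding lifecycle scripts like the two shown below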

installer.sh

#!/bin/sh

#--------BLISS installer script
#--------package maintained at pcloadletter.co.uk


DOWNLOAD_URL="`wget -qO- http://www.blisshq.com/app/latest-linux-version`"
DOWNLOAD_FILE="`echo ${DOWNLOAD_URL} | sed -r "s%^.*/(.*)%\1%"`"
SYNO_CPU_ARCH="`uname -m`"
[ "${SYNO_CPU_ARCH}" == "x86_64" ] && [ ${SYNOPKG_DSM_VERSION_MAJOR} -ge 6 ] && SYNO_CPU_ARCH="x64"
[ "${SYNO_CPU_ARCH}" == "x86_64" ] && SYNO_CPU_ARCH="i686"
[ "${SYNOPKG_DSM_ARCH}" == "armada375" ] && SYNO_CPU_ARCH="armv7l"
[ "${SYNOPKG_DSM_ARCH}" == "armada38x" ] && SYNO_CPU_ARCH="armhfneon"
[ "${SYNOPKG_DSM_ARCH}" == "comcerto2k" ] && SYNO_CPU_ARCH="armhfneon"
[ "${SYNOPKG_DSM_ARCH}" == "alpine" ] && SYNO_CPU_ARCH="armhfneon"
[ "${SYNOPKG_DSM_ARCH}" == "alpine4k" ] && SYNO_CPU_ARCH="armhfneon"
[ "${SYNOPKG_DSM_ARCH}" == "monaco" ] && SYNO_CPU_ARCH="armhfneon"
[ ${SYNOPKG_DSM_VERSION_MAJOR} -ge 6 ] && NATIVE_BINS_URL="http://packages.pcloadletter.co.uk/downloads/bliss-native-${SYNO_CPU_ARCH}.tar.xz"
NATIVE_BINS_FILE="`echo ${NATIVE_BINS_URL} | sed -r "s%^.*/(.*)%\1%"`"
#'ua' prefix means wget user-agent will be customized
INSTALL_FILES="ua${DOWNLOAD_URL} ${NATIVE_BINS_URL}"
PID_FILE="${SYNOPKG_PKGDEST}/bliss.pid"
TEMP_FOLDER="`find / -maxdepth 2 -path '/volume?/@tmp' | head -n 1`"
APP_TEMP="${TEMP_FOLDER}/${SYNOPKG_PKGNAME}"
source /etc/profile


pre_checks ()
{
  if [ -z ${JAVA_HOME} ]; then
    echo "Java is not installed or not properly configured. JAVA_HOME is not defined. " >> $SYNOPKG_TEMP_LOGFILE
    echo "Download and install the Java Synology package from http://wp.me/pVshC-z5" >> $SYNOPKG_TEMP_LOGFILE
    exit 1
  fi

  if [ ! -f ${JAVA_HOME}/bin/java ]; then
    echo "Java is not installed or not properly configured. The Java binary could not be located. " >> $SYNOPKG_TEMP_LOGFILE
    echo "Download and install the Java Synology package from http://wp.me/pVshC-z5" >> $SYNOPKG_TEMP_LOGFILE
    exit 1
  fi

  JAVA_VER=`java -version 2>&1 | sed -r "/^.* version/!d;s/^.* version \"[0-9]\.([0-9]).*$/\1/"`
  if [ ${JAVA_VER} -lt 7 ]; then
    echo "This version of Bliss requires Java 7 or newer. Please update your Java package. " >> $SYNOPKG_TEMP_LOGFILE
    exit 1
  fi

  if [ ${SYNOPKG_DSM_VERSION_MAJOR} -lt 6 ]; then
    echo "Please note that native binary support for song identification via audio fingerprinting requires DSM 6.0" >> $SYNOPKG_TEMP_LOGFILE
  fi
}


preinst ()
{
  pre_checks
  cd ${TEMP_FOLDER}
  for WGET_URL in ${INSTALL_FILES}
  do
    WGET_FILENAME="`echo ${WGET_URL} | sed -r "s%^.*/(.*)%\1%"`"
    [ -f ${TEMP_FOLDER}/${WGET_FILENAME} ] && rm ${TEMP_FOLDER}/${WGET_FILENAME}
    #this will allow blisshq.com to track the number of downloads from Synology users
    WGET_URL=`echo ${WGET_URL} | sed -e "s/^ua/--user-agent=Synology --referer=http:\/\/pcloadletter.co.uk\/2012\/09\/17\/bliss-package-for-synology /"`
    wget ${WGET_URL}
    if [[ $? != 0 ]]; then
      if [ -d ${PUBLIC_FOLDER} ] && [ -f ${PUBLIC_FOLDER}/${WGET_FILENAME} ]; then
        cp ${PUBLIC_FOLDER}/${WGET_FILENAME} ${TEMP_FOLDER}
      else     
        echo "There was a problem downloading ${WGET_FILENAME} from the official download link, " >> $SYNOPKG_TEMP_LOGFILE
        echo "which was \"${WGET_URL}\" " >> $SYNOPKG_TEMP_LOGFILE
        echo "Alternatively, you may download this file manually and place it in the 'public' shared folder. " >> $SYNOPKG_TEMP_LOGFILE
        exit 1
      fi
    fi
  done
 
  exit 0
}


postinst ()
{
  #run the installer
  cd ${TEMP_FOLDER}
  echo "INSTALL_PATH=${SYNOPKG_PKGDEST}" > ${TEMP_FOLDER}/bliss-synology.properties
  java -jar ${TEMP_FOLDER}/${DOWNLOAD_FILE} -options ${TEMP_FOLDER}/bliss-synology.properties > /dev/null && rm ${TEMP_FOLDER}/${DOWNLOAD_FILE}
  rm ${TEMP_FOLDER}/bliss-synology.properties
  sed -i "s%^#!/bin/bash%#!/bin/sh%" ${SYNOPKG_PKGDEST}/bin/bliss.sh
  
  #stow jar files containing Synology versions of native code
  if [ -f ${TEMP_FOLDER}/${NATIVE_BINS_FILE} ]; then
    mkdir ${SYNOPKG_PKGDEST}/syno-native
    cd ${SYNOPKG_PKGDEST}/syno-native
    tar xJf ${TEMP_FOLDER}/${NATIVE_BINS_FILE}
  fi
  #record the CPU architecture
  echo "${SYNO_CPU_ARCH}" > syno_cpu_arch.txt

  #make changes to Bliss launcher script so that pid file is created for Java process
  sed -r -i "s%^(exec .*)$%\1 > ${SYNOPKG_PKGDEST}/bliss.out 2>\&1 \&%" ${SYNOPKG_PKGDEST}/bin/bliss.sh
  echo "echo \$! > ${PID_FILE}" >> ${SYNOPKG_PKGDEST}/bin/bliss.sh

  #set some additional system properties (temp folder, prefs sync interval)
  EXTRA_OPTS="-Djava.io.tmpdir=${TEMP_FOLDER} -Djava.util.prefs.syncInterval=86400 -Djava.net.preferIPv4Stack=true"
  sed -r -i "s%-splash:bliss-splash.png%${EXTRA_OPTS}%" ${SYNOPKG_PKGDEST}/bin/bliss.sh
  sed -r -i "s%-XX:HeapDumpPath=/tmp%-XX:HeapDumpPath=${TEMP_FOLDER}%" ${SYNOPKG_PKGDEST}/bin/bliss.sh

  #create log file to allow package start errors to be captured
  [ -e ${SYNOPKG_PKGDEST}/bliss.out ] || touch ${SYNOPKG_PKGDEST}/bliss.out

  #add firewall config
  /usr/syno/bin/servicetool --install-configure-file --package /var/packages/${SYNOPKG_PKGNAME}/scripts/${SYNOPKG_PKGNAME}.sc > /dev/null

  exit 0
}


preuninst ()
{
  `dirname $0`/stop-start-status stop

  exit 0
}


postuninst ()
{
  #clean up temp
  [ -d ${TEMP_FOLDER}/Bliss ] && rm -rf ${TEMP_FOLDER}/Bliss

  #remove firewall config
  if [ "${SYNOPKG_PKG_STATUS}" == "UNINSTALL" ]; then
    /usr/syno/bin/servicetool --remove-configure-file --package ${SYNOPKG_PKGNAME}.sc > /dev/null
  fi

  exit 0
}


preupgrade ()
{
  `dirname $0`/stop-start-status stop
  pre_checks
  
  exit 0
}


postupgrade ()
{
  exit 0
}
 

start-stop-status.sh

#!/bin/sh

#--------BLISS start-stop-status script
#--------package maintained at pcloadletter.co.uk


PKG_FOLDER="`dirname $0 | cut -f1-4 -d'/'`"
ENGINE_SCRIPT="${PKG_FOLDER}/target/bin/bliss.sh"
DNAME="`dirname $0 | cut -f4 -d'/'`"
PID_FILE="${PKG_FOLDER}/target/bliss.pid"
DLOG="${PKG_FOLDER}/target/bliss.out"
SYNO_CPU_ARCH="`cat ${PKG_FOLDER}/target/syno-native/syno_cpu_arch.txt`"
source /etc/profile
source /root/.profile


start_daemon ()
{
  #update the package version number in case of an in-app update
  BLISS_BUNDLE_DIR="`grep -r --include='*.info' com.elsten.bliss.bundle ${PKG_FOLDER}/target/felix-cache/ | cut -f1 -d':'`"
  BLISS_BUNDLE_DIR="`dirname ${BLISS_BUNDLE_DIR}`"
  BLISS_VERSION=`grep "version" ${PKG_FOLDER}/INFO | sed -r "s/^.*([0-9]{8}).*$/\1/"`
  if [ -d ${BLISS_BUNDLE_DIR} ]; then
    find ${BLISS_BUNDLE_DIR} -name '*.jar' > /tmp/bliss-v-check.txt
    while IFS="" read -r FILE_TO_PARSE; do
      if [ -e ${FILE_TO_PARSE} ]; then
        #no unzip command in DSM 6.0
        if [ -e /usr/bin/7z ]; then
          DETECTED_VERSION=`7z e -so ${FILE_TO_PARSE} META-INF/MANIFEST.MF 2> /dev/null | grep Bundle-Version | cut -f4 -d'.' |  cut -c1-8`
        else
          DETECTED_VERSION=`unzip -p ${FILE_TO_PARSE} META-INF/MANIFEST.MF | grep Bundle-Version | cut -f4 -d'.' |  cut -c1-8`
        fi
      fi
      if [ ${DETECTED_VERSION} -gt ${BLISS_VERSION} ]; then
        BLISS_VERSION=${DETECTED_VERSION}
      fi
    done < /tmp/bliss-v-check.txt
  fi
  rm /tmp/bliss-v-check.txt
  sed -r -i "s/^version=\"[0-9]{8}/version=\"${BLISS_VERSION}/" /var/packages/Bliss/INFO
  
  #update the CPU-specific repository customizations (in case of an in-app update)
  #catch both armv5te and armv7l
  if [ "${SYNO_CPU_ARCH}" != "${SYNO_CPU_ARCH/arm/}" ]; then
    sed -i "s/policy\.tag\.auto\.linux\.x86/policy\.tag\.auto\.linux\.ARM_le/g" ${PKG_FOLDER}/target/bliss-bundle/repository.xml
  fi
  if [ "${SYNO_CPU_ARCH}" == "ppc" ]; then
    sed -i "s/policy\.tag\.auto\.linux\.x86/policy\.tag\.auto\.linux\.PowerPC/g" ${PKG_FOLDER}/target/bliss-bundle/repository.xml
  fi
 
  #overwrite native lib bundles with syno versions (in case of an in-app update)
  if [ -e ${PKG_FOLDER}/target/syno-native/ ]; then
    cp ${PKG_FOLDER}/target/syno-native/* ${PKG_FOLDER}/target/bliss-bundle
  fi

  cd ${PKG_FOLDER}
  ${ENGINE_SCRIPT} > /dev/null 2>> ${DLOG}
  if [ -z ${SYNOPKG_PKGDEST} ]; then
    #script was manually invoked, need this to show status change in Package Center
    [ -e ${PKG_FOLDER}/enabled ] || touch ${PKG_FOLDER}/enabled
  fi
}

    
stop_daemon ()
{
  echo "Stopping ${DNAME}" >> ${DLOG}
  kill `cat ${PID_FILE}`
  wait_for_status 1 20 || kill -9 `cat ${PID_FILE}`
  rm -f ${PID_FILE}
  if [ -z ${SYNOPKG_PKGDEST} ]; then
    #script was manually invoked, need this to show status change in Package Center
    [ -e ${PKG_FOLDER}/enabled ] && rm ${PKG_FOLDER}/enabled
  fi
}


daemon_status ()
{
  if [ -f ${PID_FILE} ] && kill -0 `cat ${PID_FILE}` > /dev/null 2>&1; then
    return
  fi
  rm -f ${PID_FILE}
  return 1
}


wait_for_status ()
{
  counter=$2
  while [ ${counter} -gt 0 ]; do
    daemon_status
    [ $? -eq $1 ] && return
    let counter=counter-1
    sleep 1
  done
  return 1
}


case $1 in
  start)
    if daemon_status; then
      echo ${DNAME} is already running with PID `cat ${PID_FILE}`
      exit 0
    else
      echo Starting ${DNAME} ...
      start_daemon
      exit $?
    fi
  ;;

  stop)
    if daemon_status; then
      echo Stopping ${DNAME} ...
      stop_daemon
      exit $?
    else
      echo ${DNAME} is not running
      exit 0
    fi
  ;;

  restart)
    stop_daemon
    start_daemon
    exit $?
  ;;

  status)
    if daemon_status; then
      echo ${DNAME} is running with PID `cat ${PID_FILE}`
      exit 0
    else
      echo ${DNAME} is not running
      exit 1
    fi
  ;;

  log)
    echo "${DLOG}"
    exit 0
  ;;

  *)
    echo "Usage: $0 {start|stop|status|restart}" >&2
    exit 1
  ;;

esac
 

Changelog:

  • 20160606-0011 Substantial overhaul for DSM 6.0, incorporating many enhancements developed for other packages, updated to Bliss version 20160606, DSM 6.0 or newer is now required for audio track fingerprinting (fpcalc is compiled to depend on FFmpeg 2.7.1), added support for several newer Synology products, improved accuracy of temp folder detection, in-app updating should also be fixed
  • 20150522-0010 Substantial re-write (hence the long delay):
    Updated to Bliss version 20150522
    DSM 5.0 or newer is now required (fpcalc is compiled to depend on FFmpeg 2.0.2)
    Now that Intel systems running DSM 5.0+ use a newer glibc, replacement Intel binaries are no longer needed
    Added support for Mindspeed Comcerto 2000 CPU in DS414j
    Added support for Intel Atom C2538 (avoton) CPU in various models
    Added support for ppc853x CPU in older PowerPC models
    Added support for Marvell Armada 375 CPU in DS215j
    Added support for Intel Evansport CPU in DS214Play and DS415Play
    Switched to using root account – no more adding account permissions, package upgrades will no longer break this
    DSM Firewall application definition added
    Tested with DSM Task Scheduler to allow package to start/stop at certain times of day, saving RAM when not needed
    Daemon init script now uses a proper PID file instead of the unreliable method of using grep on the output of ps
    Daemon init script can be run from the command line
    Switched to .tar.xz compression for native binaries to reduce web hosting storage footprint
    Improved accuracy of temp folder detection
    Package is now signed with repository private key
    User Agent customization while downloading Bliss package from blisshq.com to allow download stats gathering
  • 20130213-0009 Updated to Bliss 20130213, and will correctly report version in Package Center after an in-app update
  • 20130131-0008 Updated to Bliss 20130131
  • 20121112-0007 Fixes for DSM 4.2
  • 20121112-006 Updated to Bliss 20121112
  • 20121019-005 Updated to Bliss 20121019
  • 20121002-004 Updated to Bliss 20121002
  • 20120830-003 Added support for Freescale QorIQ PowerPC CPUs used in some Synology x13 series products, PowerPC processors in previous Synology generations with older glibc versions are not supported
  • 20120830-002 Hopefully fixed Java prefs polling issue that prevented NAS hibernation
  • 20120830-001 initial public release

 

Build Notes

Chromaprint uses some complex maths functions that FFmpeg can provide (specifically the Fourier transform), and FFmpeg’s shared libraries are already included with Synology DSM. Building Chromaprint linked against those existing libraries results in a minuscule 78KB build of fpcalc, rather than the statically compiled ones for various OS and CPU architectures included with Bliss, which weigh in at several megabytes each. I think I’m finally ‘getting’ what open source is all about, which is nice since that was my objective in experimenting with my NAS. To stop fpcalc building and linking against its own dynamic library libchromaprint.so, and to get it to detect FFmpeg properly, I had to carefully inspect the Makefiles to find the correct build syntax:

FFMPEG_DIR=${TOOLCHAIN} cmake -DCMAKE_BUILD_TYPE=Release -DBUILD_EXAMPLES=ON -DBUILD_SHARED_LIBS=NO .

FFMPEG_DIR is the base folder from which CMake will look for lib/libavcodec.so and include/libavcodec/avfft.h.
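For the cross-compiled builds the same invocation applies, it just needs the Synology toolchain’s compilers exported first. Something along these lines worked for me, though the toolchain path and prefix shown here are illustrative and depend on which DSM toolchain you have unpacked:

# point the build at the Synology cross toolchain (illustrative path and prefix)
export TOOLCHAIN=/usr/local/arm-marvell-linux-gnueabi
export CC=${TOOLCHAIN}/bin/arm-marvell-linux-gnueabi-gcc
export CXX=${TOOLCHAIN}/bin/arm-marvell-linux-gnueabi-g++
# FFMPEG_DIR is where CMake will look for lib/libavcodec.so and include/libavcodec/avfft.h
FFMPEG_DIR=${TOOLCHAIN} cmake -DCMAKE_BUILD_TYPE=Release -DBUILD_EXAMPLES=ON -DBUILD_SHARED_LIBS=NO .
make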

For watching the filesystem Bliss uses JNotify to hook into the Linux kernel’s inotify subsystem. Getting this compiled was tricky. It seems no one has reported successfully compiling it for an ARM CPU, and JNotify’s author Omry Yadan wasn’t aware of anyone doing this either. The problem is that compilation halts with these errors:

In file included from ../net_contentobjects_jnotify_linux_JNotify_linux.c:43:
../inotify-syscalls.h:35:1: error: "__NR_inotify_init" redefined
In file included from /opt/include/sys/syscall.h:25,
                 from ../inotify-syscalls.h:4,
                 from ../net_contentobjects_jnotify_linux_JNotify_linux.c:43:
/opt/include/asm/unistd.h:344:1: error: this is the location of the previous definition
In file included from ../net_contentobjects_jnotify_linux_JNotify_linux.c:43:
../inotify-syscalls.h:36:1: error: "__NR_inotify_add_watch" redefined
In file included from /opt/include/sys/syscall.h:25,
                 from ../inotify-syscalls.h:4,
                 from ../net_contentobjects_jnotify_linux_JNotify_linux.c:43:
/opt/include/asm/unistd.h:345:1: error: this is the location of the previous definition
In file included from ../net_contentobjects_jnotify_linux_JNotify_linux.c:43:
../inotify-syscalls.h:37:1: error: "__NR_inotify_rm_watch" redefined
In file included from /opt/include/sys/syscall.h:25,
                 from ../inotify-syscalls.h:4,
                 from ../net_contentobjects_jnotify_linux_JNotify_linux.c:43:
/opt/include/asm/unistd.h:346:1: error: this is the location of the previous definition

By searching in a very generic way for a solution I found this post on stackoverflow.com, which helped me to patch the header inotify-syscalls.h to get around the problem. Since there was no JDK for embedded ARM systems available at build time, I included the headers from OpenJDK 6, which I took from an Ubuntu VM.

Compiling for Intel also required a fix. I was getting the same error as featured in this post on the JNotify user forum on SourceForge.net:
expected specifier-qualifier-list before ‘pid_t’

Despite some people’s apparent success from simply rearranging the order of the includes in net_contentobjects_jnotify_linux_JNotify_linux.c, this didn’t help me. I’m not sure quite how I stumbled upon it, but I found the solution staring at me on this page:
The size_t, ssize_t, uid_t, gid_t, off_t and pid_t types are defined as described in sys/types.h

I inserted an additional include for sys/types.h and it compiled OK. It’s worth pointing out that, although all Intel Synology units are x86-64 architecture, the Oracle JRE for Embedded is x86 (32 bit), so I used the i686 toolchain. Synology DSM’s FFmpeg shared libraries are also 32 bit, so my build of fpcalc needed to comply with this. Bliss nonetheless expects the binary to be called fpcalc_linux64, since it detects the underlying Linux CPU architecture.
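A quick way to sanity-check the result and give it the name Bliss expects (the file output shown is indicative only):

# verify the Intel build of fpcalc is 32 bit, matching DSM's bundled FFmpeg libraries
file fpcalc
# fpcalc: ELF 32-bit LSB executable, Intel 80386, dynamically linked ...
# Bliss looks for this name on Intel models because it detects a 64 bit Linux kernel
cp fpcalc fpcalc_linux64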

Once I had got past the obstacle of compiling the native code, I needed to liaise back and forth with Dan to understand how Bliss was dealing with its libraries and how I could replace the built-in versions. Originally this was quite a kludge, but Dan has since abstracted the native binaries out into their own OSGi bundle fragments, which makes things a lot easier and allows Bliss to survive in-app updates until those native components are superseded. The Synology package provides the following architecture-specific jar files (with corresponding edits to their manifests) which contain the fpcalc binary from Chromaprint. Thank you to Dan for all the quick answers!

  • net.contentobjects.jnotify.linux.ARM_le-0.94.0.jar
  • net.contentobjects.jnotify.linux.PowerPC-0.94.0.jar
  • com.elsten.bliss.policy.tag.auto.linux.x86-1.0.1.jar
  • com.elsten.bliss.policy.tag.auto.linux.amd64-1.0.1.jar
  • com.elsten.bliss.policy.tag.auto.linux.ARM_le-1.0.1.jar (various versions)
  • com.elsten.bliss.policy.tag.auto.linux.PowerPC-1.0.1.jar
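The repackaging itself is nothing exotic: each of these bundles is an ordinary jar, so the native binary can be swapped in and the manifest amended with standard jar tooling. A rough sketch against one of the file names above (the entry path of fpcalc inside the bundle, and the edited manifest file, are illustrative):

# replace the bundled fpcalc with the Synology build (entry path inside the jar is illustrative)
jar uf com.elsten.bliss.policy.tag.auto.linux.ARM_le-1.0.1.jar fpcalc
# merge any required manifest changes in from an edited copy
jar ufm com.elsten.bliss.policy.tag.auto.linux.ARM_le-1.0.1.jar MANIFEST.MF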
Here are my fixes to JNotify to get the native library part compiled, since they may help someone else:

 

Patch for native compile of JNotify 0.94 on ARM Synology

diff -crB jnotify-vanilla/inotify-syscalls.h jnotify-arm/inotify-syscalls.h
*** jnotify-vanilla/inotify-syscalls.h	2005-11-30 15:07:56.000000000 +0000
--- jnotify-arm/inotify-syscalls.h	2012-09-14 02:43:44.032130098 +0100
***************
*** 32,40 ****
  # define __NR_inotify_add_watch	152
  # define __NR_inotify_rm_watch	156
  #elif defined (__arm__)
! # define __NR_inotify_init	316
! # define __NR_inotify_add_watch	317
! # define __NR_inotify_rm_watch	318
  #elif defined (__SH4__)
  # define __NR_inotify_init	290
  # define __NR_inotify_add_watch	291
--- 32,46 ----
  # define __NR_inotify_add_watch	152
  # define __NR_inotify_rm_watch	156
  #elif defined (__arm__)
! # ifndef __NR_inotify_init
! #  define __NR_inotify_init     316
! # endif
! # ifndef __NR_inotify_add_watch
! #  define __NR_inotify_add_watch 317
! # endif
! # ifndef __NR_inotify_rm_watch
! #  define __NR_inotify_rm_watch 318
! # endif
  #elif defined (__SH4__)
  # define __NR_inotify_init	290
  # define __NR_inotify_add_watch	291
diff -crB jnotify-vanilla/Release/subdir.mk jnotify-arm/Release/subdir.mk
*** jnotify-vanilla/Release/subdir.mk	2011-02-28 18:07:20.000000000 +0000
--- jnotify-arm/Release/subdir.mk	2012-09-14 02:29:00.000000000 +0100
***************
*** 17,23 ****
  %.o: ../%.c
  	@echo 'Building file: $<'
  	@echo 'Invoking: GCC C Compiler'
! 	gcc -I/usr/lib/jvm/java-6-sun/include -I/usr/lib/jvm/java-6-sun/include/linux -O3 -Wall -Werror -c -fmessage-length=0 -fPIC -MMD -MP -MF"$(@:%.o=%.d)" -MT"$(@:%.o=%.d)" -o"$@" "$<"
  	@echo 'Finished building: $<'
  	@echo ' '
  
--- 17,23 ----
  %.o: ../%.c
  	@echo 'Building file: $<'
  	@echo 'Invoking: GCC C Compiler'
! 	gcc -I/volume1/public/temp/jdk_include/ -I/volume1/public/temp/jdk_include/linux -O3 -Wall -Werror -c -fmessage-length=0 -fPIC -MMD -MP -MF"$(@:%.o=%.d)" -MT"$(@:%.o=%.d)" -o"$@" "$<"
  	@echo 'Finished building: $<'
  	@echo ' '
 

Patch for cross compile of JNotify 0.94 for other Synology CPU architectures, using Ubuntu Desktop 12

diff -crB jnotify-vanilla/net_contentobjects_jnotify_linux_JNotify_linux.c jnotify-i686/net_contentobjects_jnotify_linux_JNotify_linux.c
*** jnotify-vanilla/net_contentobjects_jnotify_linux_JNotify_linux.c	2011-02-28 18:07:20.000000000 +0000
--- jnotify-xcomp/net_contentobjects_jnotify_linux_JNotify_linux.c	2012-09-14 00:41:53.455010206 +0100
***************
*** 36,41 ****
--- 36,42 ----
  #include <sys/time.h>
  #include <sys/select.h>
  #include <sys/ioctl.h>
+ #include <sys/types.h>
  #include <errno.h>
  #include <stdio.h>
  #include <unistd.h>
diff -crB jnotify-vanilla/Release/makefile jnotify-i686/Release/makefile
*** jnotify-vanilla/Release/makefile	2011-02-28 18:07:20.000000000 +0000
--- jnotify-xcomp/Release/makefile	2012-09-14 00:37:56.475007855 +0100
***************
*** 28,34 ****
  libjnotify.so: $(OBJS) $(USER_OBJS)
  	@echo 'Building target: $@'
  	@echo 'Invoking: GCC C Linker'
! 	gcc -shared -o"libjnotify.so" $(OBJS) $(USER_OBJS) $(LIBS)
  	@echo 'Finished building target: $@'
  	@echo ' '
  
--- 28,34 ----
  libjnotify.so: $(OBJS) $(USER_OBJS)
  	@echo 'Building target: $@'
  	@echo 'Invoking: GCC C Linker'
! 	$(CC) -shared -o"libjnotify.so" $(OBJS) $(USER_OBJS) $(LIBS)
  	@echo 'Finished building target: $@'
  	@echo ' '
  
diff -crB jnotify-vanilla/Release/subdir.mk jnotify-i686/Release/subdir.mk
*** jnotify-vanilla/Release/subdir.mk	2011-02-28 18:07:20.000000000 +0000
--- jnotify-xcomp/Release/subdir.mk	2012-09-14 00:37:33.835007731 +0100
***************
*** 17,23 ****
  %.o: ../%.c
  	@echo 'Building file: $<'
  	@echo 'Invoking: GCC C Compiler'
! 	gcc -I/usr/lib/jvm/java-6-sun/include -I/usr/lib/jvm/java-6-sun/include/linux -O3 -Wall -Werror -c -fmessage-length=0 -fPIC -MMD -MP -MF"$(@:%.o=%.d)" -MT"$(@:%.o=%.d)" -o"$@" "$<"
  	@echo 'Finished building: $<'
  	@echo ' '
  
--- 17,23 ----
  %.o: ../%.c
  	@echo 'Building file: $<'
  	@echo 'Invoking: GCC C Compiler'
! 	$(CC) -I$(TOOLCHAIN)/include -O3 -Wall -Werror -c -fmessage-length=0 -fPIC -MMD -MP -MF"$(@:%.o=%.d)" -MT"$(@:%.o=%.d)" -o"$@" "$<"
  	@echo 'Finished building: $<'
  	@echo ' '
 
 

Unified Windows PE 4.0 builder for Windows ADK

This script will build Windows PE 4.0 (for x86, AMD64, or both) including scripts and drivers of your choosing. It will create ISO images with both BIOS and UEFI support, and will also upload the resulting WIM boot images to your WDS server automatically (and refresh them if they have been re-created). This reduces the tiresome task of boot image maintenance to just a couple of clicks.

It uses only the standard Microsoft Windows ADK tools (the ADK being the new name for the WAIK). Just save the code below as Build_WinPE.cmd and right-click on it to Run as Administrator. Notice the defined variables at the start, particularly the %SOURCE% folder. The script supports either the 32bit or the 64bit ADK, and only the Windows PE and Deployment Tools ADK components are required. It expects the following folders:

  • %SOURCE%\scripts\WinPE – any additional scripts (e.g. OS build scripts)
  • %SOURCE%\drivers\WinPE-x86\CURRENT – drivers
  • %SOURCE%\drivers\WinPE-AMD64\CURRENT
  • %SOURCE%\tools\WinPE-x86 – optional tools such as GImageX, or apps from portableapps.com
  • %SOURCE%\tools\WinPE-AMD64

Notice the optional components section (the dism /Add-Package commands in the script). Modify this if you need your image to contain additional items, for instance PowerShell or .NET Framework 4.

One further observation is that Macs don’t seem to be able to boot this version of Windows PE. I’m not sure whether this is a GOP display driver issue, or whether a true UEFI firmware is required (Macs use EFI, an earlier specification). To carry out an unattended Windows 8 install on a Mac via Boot Camp you will need to build a Windows PE 3.0 ISO, since Macs can’t PXE boot.

There’s some more info about UEFI booting on 32bit architectures here – apparently UEFI 2.3.1 compliance is a requirement. My VAIO’s Insyde H2O UEFI firmware certainly seems to ignore EFI loaders.

:: Build_WinPE.cmd
::
:: patters 2012
::
:: This script will build x86 and AMD64 Windows PE 4.0, automatically
:: collecting drivers from the relevant folders within the
:: unattended installation, building WIM and ISO images, and
:: will also upload the WIM images to the deployment server(s).
::
:: DO NOT cancel this script in progress as you can end up with
:: orphaned locks on files inside mounted WIM images which
:: usually require a reboot of the server to clear.
::

@echo off
setlocal ENABLEDELAYEDEXPANSION

::variables
     set SOURCE=\\WDSSERVER\unattended
     set PE_TEMP=C:\temp
     ::WinPE feature pack locale
     set PL=en-US
     ::comma separated list for WDS_SERVERS
     set WDS_SERVERS=WDSSERVER1,WDSSERVER2
::end variables

if "%PROCESSOR_ARCHITECTURE%"=="x86" set PRGFILES32=%PROGRAMFILES%
if "%PROCESSOR_ARCHITECTURE%"=="AMD64" set PRGFILES32=%PROGRAMFILES(X86)%

if not exist "%PRGFILES32%\Windows Kits\8.0\Assessment and Deployment Kit\Deployment Tools\*.*" (
     echo This script requires the Windows Assessment and Deployment Kit to be installed
     echo Download it from http://www.microsoft.com/en-us/download/details.aspx?id=30652
     echo.
     pause
     goto :eof
)
if "%1"=="relaunch" (
     call :BUILD_WINPE %2 %3 %4
     goto :eof
)
if "%1"=="unmount" (
     :: use this if you have a problem with the script and there are WIMs still mounted
     dism /Unmount-Wim /MountDir:"%PE_TEMP%\WinPE-x86\mount" /discard
     dism /Unmount-Wim /MountDir:"%PE_TEMP%\WinPE-AMD64\mount" /discard
     goto :eof
)
:prompt
cls
set /P SELECTION=Build WinPE for which CPU architecture (AMD64, x86, both)? [AMD64]: 
if "%SELECTION%"=="" set SELECTION=AMD64
if "%SELECTION%"=="amd64" set SELECTION=AMD64
if "%SELECTION%"=="X86" set SELECTION=x86
if "%SELECTION%"=="b" set SELECTION=both
if "%SELECTION%"=="BOTH" set SELECTION=both
if "%SELECTION%"=="AMD64" (
     start "Building Windows PE for AMD64 - NEVER CANCEL THIS SCRIPT IN PROGRESS" cmd /c "%0" relaunch AMD64
     goto :eof
)
if "%SELECTION%"=="x86" (
     start "Building Windows PE for x86 - NEVER CANCEL THIS SCRIPT IN PROGRESS" cmd /c "%0" relaunch x86
     goto :eof
)
if "%SELECTION%"=="both" (
     ::opening both instances of this script simultaneously seems to cause race conditions with dism.exe
     start /wait "Building Windows PE for x86 - NEVER CANCEL THIS SCRIPT IN PROGRESS" cmd /c "%0" relaunch x86 nopause
     start "Building Windows PE for AMD64 - NEVER CANCEL THIS SCRIPT IN PROGRESS" cmd /c "%0" relaunch AMD64
     goto :eof
)
goto :prompt

:BUILD_WINPE
set PE_ARCH=%1
set OSCDImgRoot=%PRGFILES32%\Windows Kits\8.0\Assessment and Deployment Kit\Deployment Tools\%PROCESSOR_ARCHITECTURE%\Oscdimg
set WinPERoot=%PRGFILES32%\Windows Kits\8.0\Assessment and Deployment Kit\Windows Preinstallation Environment
set DandIRoot=%PRGFILES32%\Windows Kits\8.0\Assessment and Deployment Kit\Deployment Tools
set DISMRoot=%PRGFILES32%\Windows Kits\8.0\Assessment and Deployment Kit\Deployment Tools\%PROCESSOR_ARCHITECTURE%\DISM
set PATH=%PATH%;%PRGFILES32%\Windows Kits\8.0\Assessment and Deployment Kit\Deployment Tools\%PROCESSOR_ARCHITECTURE%\Oscdimg
set PATH=%PATH%;%PRGFILES32%\Windows Kits\8.0\Assessment and Deployment Kit\Deployment Tools\%PROCESSOR_ARCHITECTURE%\BCDBoot
set PATH=%PATH%;%PRGFILES32%\Windows Kits\8.0\Assessment and Deployment Kit\Deployment Tools\%PROCESSOR_ARCHITECTURE%\DISM
set PATH=%PATH%;%PRGFILES32%\Windows Kits\8.0\Assessment and Deployment Kit\Windows Preinstallation Environment
echo on
rd /s /q %PE_TEMP%\WinPE-%PE_ARCH%
call copype.cmd %PE_ARCH% %PE_TEMP%\WinPE-%PE_ARCH%
::package path
set PP=%PRGFILES32%\Windows Kits\8.0\Assessment and Deployment Kit\Windows Preinstallation Environment\%PE_ARCH%\WinPE_OCs
::image path
set IP=%PE_TEMP%\WinPE-%PE_ARCH%\mount
echo on
dism /Mount-Wim /WimFile:"%PE_TEMP%\WinPE-%PE_ARCH%\media\sources\boot.wim" /Index:1 /MountDir:"%IP%"
dism /image:"%IP%" /Add-Package /PackagePath:"%PP%\WinPE-Scripting.cab"^
 /PackagePath:"%PP%\%PL%\WinPE-Scripting_%PL%.cab" /PackagePath:"%PP%\WinPE-WMI.cab"^
 /PackagePath:"%PP%\%PL%\WinPE-WMI_%PL%.cab" /PackagePath:"%PP%\WinPE-MDAC.cab"^
 /PackagePath:"%PP%\%PL%\WinPE-MDAC_%PL%.cab" /PackagePath:"%PP%\WinPE-HTA.cab"^
 /PackagePath:"%PP%\%PL%\WinPE-HTA_%PL%.cab" /PackagePath:"%PP%\WinPE-Dot3Svc.cab"^
 /PackagePath:"%PP%\%PL%\WinPE-Dot3Svc_%PL%.cab"
dism /image:"%IP%" /Add-Driver /driver:"%SOURCE%\drivers\WinPE-%PE_ARCH%\CURRENT" /Recurse
copy "%PRGFILES32%\Windows Kits\8.0\Assessment and Deployment Kit\Deployment Tools\%PE_ARCH%\BCDBoot\bootsect.exe" "%IP%\Windows"
copy /y "%SOURCE%\scripts\WinPE\*.*" "%IP%\Windows\System32"
copy "%SOURCE%\tools\WinPE-%PE_ARCH%\*.*" "%IP%\Windows\System32"
copy /y "%PRGFILES32%\Windows Kits\8.0\Assessment and Deployment Kit\Deployment Tools\%PE_ARCH%\DISM\imagex.exe" "%IP%\Windows\System32"
dism /Unmount-Wim /MountDir:"%IP%" /commit

::Mac OS BootCamp will look for autorun.inf in order to validate this disk as a Windows Installer CD
::adding this allows us to start unattended installs using WinPE
date /T > "%PE_TEMP%\WinPE-%PE_ARCH%\media\autorun.inf"

::bootable ISO includes both BIOS & EFI boot loaders
oscdimg -m -o -u2 -udfver102 -bootdata:2#p0,e,b"%PE_TEMP%\WinPE-%PE_ARCH%\fwfiles\etfsboot.com"#pEF,e,b"%PE_TEMP%\WinPE-%PE_ARCH%\fwfiles\efisys.bin" "%PE_TEMP%\WinPE-%PE_ARCH%\media" "%PE_TEMP%\WinPE-%PE_ARCH%\WinPE-40-%PE_ARCH%.iso"
@echo off

::rename the WIM file to avoid having multiple image files on the WDS server with the same filename
ren "%PE_TEMP%\WinPE-%PE_ARCH%\media\sources\boot.wim" boot_%PE_ARCH%.wim

if "%PE_ARCH%"=="x86" set WDS_ARCH=%PE_ARCH%
if "%PE_ARCH%"=="AMD64" set WDS_ARCH=X64
for %%i in (%WDS_SERVERS%) do (
     echo.
     echo Adding/updating boot image on WDS server: %%i
     :: try to add the image first, if that fails then replace existing
     wdsutil /Verbose /Progress /Add-Image /ImageFile:"%PE_TEMP%\WinPE-%PE_ARCH%\media\sources\boot-40-%PE_ARCH%.wim"^
      /Server:%%i /ImageType:Boot /Name:"Microsoft Windows PE 4.0 (%PE_ARCH%)" || wdsutil /Verbose /Progress /Replace-Image^
      /Image:"Microsoft Windows PE 4.0 (%PE_ARCH%)" /ImageType:Boot /Architecture:%WDS_ARCH% /ReplacementImage^
      /Name:"Microsoft Windows PE 4.0 (%PE_ARCH%)" /ImageFile:"%PE_TEMP%\WinPE-%PE_ARCH%\media\sources\boot-40-%PE_ARCH%.wim"^
      /Server:%%i
     echo.
)
::rename the WIM back again so bootable USB devices can be created
ren "%PE_TEMP%\WinPE-%PE_ARCH%\media\sources\boot-40-%PE_ARCH%.wim" boot.wim
echo *******************************************************************
echo WDS boot image(s) updated
echo.
echo A bootable ISO of this image has been created at:
echo   %PE_TEMP%\WinPE-%PE_ARCH%\WinPE-40-%PE_ARCH%.iso
echo.
echo To create a bootable USB key, use diskpart.exe to create a FAT32 partition
echo and mark it active, then copy the contents of this folder to its root:
echo   %PE_TEMP%\WinPE-%PE_ARCH%\media
echo.
echo FAT32 is required for EFI support.
echo.
if "%2"=="nopause" goto :eof
pause
goto :eof

Windows software deployment and update script

For many years I have used scripts of my own design to build workstations and to roll out software updates. At the time I created these I found that most of the tools which could accomplish these tasks were unwieldy. Group Policy software deployment in particular never really seemed fit for purpose since it extended login times so dramatically. My experience gained in a previous job spent packaging applications for deployment had taught me that all installed software populates consistent information in the Windows Registry, so in my current job I tended to audit this data directly via my scripts. This was saved into an SQL database from where it could be queried, or manipulated via a data source in Excel.

I’m working my notice period at the moment ready for a new job I’ll start in October, and so I’m going over the stuff I have created in the current job in order to prepare my handover documents. Mindful of the dependency my current employer has on these custom scripts I decided to get a quote for a Dell KACE solution, thinking that since it’s a Virtual Appliance, and since there are only 150 PCs here it shouldn’t be too expensive – after all it’s only really providing what my scripts already do (workstation builds, drivers, software deployment, and auditing). But here’s the thing – they wanted something like £13,000! (I can’t recall the precise figure). To put it in context this figure is around one third of the cost of replacing all the workstations with new ones, or say half the annual salary of an IT support technician – quite out of the question.

Unsurprisingly I have decided instead to simply tidy up my scripts to make them easier to use. Sure, you could accomplish these tasks with SCCM, but that’s not free either. In an SME, why spend huge amounts of money on something that can be automated without much trouble using mechanisms that are built in? Heck, even the uninstall command line is stored in the registry for virtually all software – that’s how the Add/Remove Programs Control Panel works! And most software can be installed silently in the desired way provided you research the command line arguments to do so. It’s no accident that AppDeploy.com, which was a great crowdsourced repository of this knowledge, became KACE, which was then acquired by Dell. It still exists, though the content doesn’t seem to be as well maintained as it was.

I have used a startup script written in VBScript to keep software up to date on workstations. A startup script runs as the SYSTEM account, so permissions are not an issue. Since I also maintain an unattended installation I already have a package folder with all the scripts to install each package. All I needed to code was a way to audit the Registry for each package and add some logic around that. Up until now I had tended to write sections of the script specifically tailored for each package, and from there it’s not much of a stretch to apply packages to a workstation based on its OS version, or its Active Directory OU or group membership. For the script I have published below, I have recreated this logic as a single function which can be invoked with a one-line entry for each package (see the block of Package calls near the top of the script) – everything else is taken care of. I hope it helps someone to save £13,000 :)

 

Sample script output

Running software package check for Adobe Flash Player...
  Registry data found at branch "Adobe Flash Player ActiveX"
  Comparing detected version 11.3.300.271 against desired version 11.4.402.265
  Removing old version 11.3.300.271
    Killing iexplore.exe
    Override detected, running "u:\packages\flash\uninstall_flash_player.exe -uninstall"
    u:\packages\flash\uninstall_flash_player.exe -uninstall
  Installing Adobe Flash Player 11.4.402.265

Running software package check for Paint.NET...
  Registry data found at branch "{529125EF-E3AC-4B74-97E6-F688A7C0F1C0}"
  Comparing detected version 3.60.0 against desired version 3.60.0
  Paint.NET is already installed and up to date.

Running software package check for Adobe Reader...
  Registry data found at branch "{AC76BA86-7AD7-1033-7B44-AA0000000001}"
  Comparing detected version 10.0.0 against desired version 10.1.4
  Removing old version 10.0.0
    Using UninstallString from the Registry, plus "/qb-!"
    MsiExec.exe /I{AC76BA86-7AD7-1033-7B44-AA0000000001} /qb-!
  Installing Adobe Reader 10.1.4

Running software package check for Photo Gallery...
  Registry data found at branch "{60A1253C-2D51-4166-95C2-52E9CF4F8D64}"
  Comparing detected version 16.4.3503.0728 against desired version 16.4.3503.0728
  Photo Gallery is already installed and up to date.

Running software package check for Mendeley Desktop...
  Installing Mendeley Desktop 1.6
 

The script

'startup.vbs
'patters 2006-2012

Option Explicit
Dim objNetwork, objShell, objReg, objFSO, objWMI, strKey, colProcess, objProcess, arrSubKeys
Dim strFileServer
Const HKEY_CURRENT_USER = &H80000001
Const HKEY_LOCAL_MACHINE = &H80000002

'set up objects
Set objNetwork = CreateObject("WScript.Network")
Set objShell = CreateObject("WScript.Shell")
Set objFSO = CreateObject("Scripting.FileSystemObject")
Set objWMI = GetObject("winmgmts:{impersonationLevel=impersonate}!\\.\root\cimv2")
Set objReg = GetObject("winmgmts:{impersonationLevel=impersonate}!\\.\root\default:StdRegProv")

strFileServer = "YOURSERVERHERE"
MapNetworkDrive "U:","unattended"

Package "flash.cmd", "Adobe Flash Player", "11.4.402.265", "u:\packages\flash\uninstall_flash_player.exe -uninstall", False, True, "iexplore.exe"    
Package "paintnet.cmd", "Paint.NET", "3.60.0", "/qb-!", False, False, "" 
Package "adobe.cmd", "Adobe Reader", "10.1.4","/qb-!",False, False, array("outlook.exe","iexplore")
Package "photogal.cmd", "Photo Gallery", "16.4.3503.0728", "/qb-!", False, False, "iexplore.exe"
Package "mendeley.cmd", "Mendeley Desktop", "1.6", "/S", True, False, "winword.exe"

objNetwork.RemoveNetworkDrive "U:", True, True
WScript.Echo VbCrLf & "Finished software checks"


Function Package(strPackageName, strTargetDisplayName, strTargetVersion, strExtraUninstParams, boolExtraUninstQuotes, boolUninstForceOverride, ProcessToKill)

  '=============================================================================

  'To understand this function you need to know that installed software packages
  'will populate keys below these branches of the Registry:
  '  HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall
  '  HKLM\SOFTWARE\Wow6432Node\Microsoft\Windows\CurrentVersion\Uninstall
  '    (the latter for 32bit software on 64bit Windows)
  'This is the data that is mined when you look at Add/Remove Programs
  'in the Control Panel 

  'strPackageName is the package script on your package server (e.g. flash.cmd)

  'strTargetDisplayName can be a full or partial match of the Registry key
  'DisplayName (matches from the left)
  '  "Java(TM)" would match "Java(TM) 6 Update 5" and all other versions

  'strTargetVersion is the full version number from DisplayVersion in the Registry
  'Each decimal point of precision will be compared in turn.

  'If the Registry key DisplayVersion is not used by a package, the same number
  'of digits is parsed from the right hand side of the DisplayName string

  'strExtraUninstParams is used when you want to override the command line
  'specified by QuietUninstallString in the Registry, or for when that value is
  'missing for example, sometimes InnoSetup packages will specify the switch
  '/SILENT in QuietUninstallString, but you may need to override by appending
  '/VERYSILENT to the command line in UninstallString
  'If neither QuietUninstallString and UninstallString are present, the script
  'will use strExtraUninstParams as the full uninstall command line
  
  'Some packages define UninstallString as a long filename but forget to
  'surround it with quotes. You can correct this by setting
  'boolExtraUninstQuotes = True
  '   Package "mendeley.cmd", "Mendeley Desktop", "1.6", "/S", True, False, "winword.exe"

  'In some cases you may want to ignore the value of both QuietUninstallString
  'and UninstallString and override the command completely. To do this, set
  'boolUninstForceOverride to True
  '   Package "flash.cmd", "Adobe Flash Player", "11.4.402.265", "u:\packages\flash\uninstall_flash_player.exe -uninstall", False, True, "iexplore.exe"

  'Finally, ProcessToKill is a string or array containing the name(s) of any
  'running process(es) you need to kill, if plugins are being installed for Word
  'or Internet Explorer for instance.

  '=============================================================================

  Dim arrBranches, strBranch, boolRemoval, strActualDisplayName, strActualVersion
  Dim strQuietUninstall, strUninstall
  WScript.Echo VbCrLf & "Running software package check for " & strTargetDisplayName & "..."
  'we need to iterate through both the 32 and 64bit uninstall branches of the Registry
  arrBranches = Array("SOFTWARE\Wow6432Node\Microsoft\Windows\CurrentVersion\Uninstall\", "SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\")
  For Each strBranch In arrBranches
    'firstly, remove old version of package if it's present
    objReg.EnumKey HKEY_LOCAL_MACHINE, strBranch, arrSubKeys
    If IsArray(arrSubkeys) Then
      For Each strKey in arrSubkeys
        objReg.GetStringValue HKEY_LOCAL_MACHINE, strBranch & strKey, "DisplayName", strActualDisplayName
        If Left(strActualDisplayName, Len(strTargetDisplayName)) = strTargetDisplayName Then
          'we've found the target software package
          WScript.Echo "  Registry data found at branch """ & strKey & """"
          'is there a version string (not all software will have one)?
          objReg.GetStringValue HKEY_LOCAL_MACHINE, strBranch & strKey, "DisplayVersion", strActualVersion
          If IsNull(strActualVersion) Then
            'if there's no version string we'll try to grab the same number of chars from the right hand side of the DisplayName string
            strActualVersion = Right(strActualDisplayName, Len(strTargetVersion))
          End If
          If (IsUpgradeNeeded (strActualVersion,strTargetVersion)) = True Then
            strQuietUninstall = ""
            WScript.Echo "  Removing old version " & strActualVersion
            KillProcess ProcessToKill
            'check the package's registry settings
            objReg.GetStringValue HKEY_LOCAL_MACHINE, strBranch & strKey, "UninstallString", strUninstall
            objReg.GetStringValue HKEY_LOCAL_MACHINE, strBranch & strKey, "QuietUninstallString", strQuietUninstall
            If Not strExtraUninstParams = "" Then
              'Extra parameters were sent to the function
              If boolUninstForceOverride = True Then
                'Entire uninstall command line was forced so use strExtraUninstParams, regardless of what's in the Registry
                WScript.Echo "    Override detected, running """ & strExtraUninstParams & """"
                WScript.Echo "    " & strExtraUninstParams
                WinExec strExtraUninstParams
              ElseIf Not IsNull(strUninstall) Then
                'use the basic UninstallString plus the additional parameters
                If boolExtraUninstQuotes = True Then
                  strUninstall = """" & strUninstall & """"
                End If
                strUninstall = strUninstall & " " & strExtraUninstParams
                WScript.Echo "    Using UninstallString from the Registry, plus """ & strExtraUninstParams & """"
                WScript.Echo "    " & strUninstall
                WinExec strUninstall
              Else
                'no UninstallString was found in the Registry, so assume that strExtraUninstParams is the full removal command line
                WScript.Echo "    No UninstallString found, running """ & strExtraUninstParams & """"
                WScript.Echo "    " & strExtraUninstParams
                WinExec strExtraUninstParams
              End If
            Else
              'No extra parameters were sent to the function
              'if there's already a value for QuietUninstallString then use that command line
              If Not IsNull(strQuietUninstall) Then
                WScript.Echo "    Using QuietUninstallString directly from the Registry"
                WScript.Echo "    " & strQuietUninstall
                WinExec strQuietUninstall
              ElseIf Not IsNull(strUninstall) Then
                'no QuietUninstallString was found, fall back to UninstallString
                If boolExtraUninstQuotes = True Then
                  strUninstall = """" & strUninstall & """"
                End If
                WScript.Echo "    Using UninstallString directly from the Registry"
                WScript.Echo "    " & strUninstall
                WinExec strUninstall
              Else
                WScript.Echo "    ERROR - this package doesn't seem to have any UninstallString defined - you'll need to send one to the Package function (see script source for details)"
                Exit Function
              End If
            End If
          Else
            'IsUpgradeNeeded (strActualVersion,strTargetVersion) is False
            'package was detected, but version is >= than the one specified
            WScript.Echo "  " & strTargetDisplayName & " is already installed and up to date."
            Exit Function
          End If
        End If
      Next
    End If
  Next
  'install package
  WScript.Echo "  Installing " & strTargetDisplayName & " " & strTargetVersion
  KillProcess ProcessToKill
  WinExec "U:\packages\" & strPackageName
End Function


Function IsUpgradeNeeded(strVerActual,strVerDesired)
  Dim arrActualVersion, arrDesiredVersion, i
  'Break software version down on decimal points
  arrActualVersion = split(strVerActual,".")
  arrDesiredVersion = split(strVerDesired,".")
  WScript.Echo "  Comparing detected version " & strVerActual & " against desired version " & strVerDesired
  'iterate, comparing each sub-version number starting from the left
  'only compare as many segments as both version strings actually have
  For i = 0 To UBound(arrActualVersion)
    If i > UBound(arrDesiredVersion) Then Exit For
    'WScript.Echo "  comparing digit... is " & arrActualVersion(i) & " less than " & arrDesiredVersion(i)
    'compare numerically - as strings, "10" would sort before "9"
    If CLng(arrActualVersion(i)) < CLng(arrDesiredVersion(i)) Then
      'installed version is out of date
      IsUpgradeNeeded = True
      Exit Function
    ElseIf CLng(arrActualVersion(i)) > CLng(arrDesiredVersion(i)) Then
      'installed version is newer
      IsUpgradeNeeded = False
      Exit Function
    End If
  Next
  'thus far the version numbers are the same, but there may be additional
  'decimal points of precision in the desired version
  '  e.g. Adobe Reader 10.1.4 is newer than 10.1
  If UBound(arrDesiredVersion) > UBound(arrActualVersion) Then
    IsUpgradeNeeded = True
  Else
    IsUpgradeNeeded = False
  End If
End Function


Function MapNetworkDrive(strDriveLetter, strSharePath)
  On Error Resume Next
  'if the share name is not a UNC path, assume it's on the normal fileserver
  If Not Left(strSharePath,2) = "\\" Then
    strSharePath = "\\" & strFileServer & "\" & strSharePath
  End If
  If objFSO.DriveExists(strDriveLetter) Then
    objNetwork.RemoveNetworkDrive strDriveLetter, True, True
  End If
  objNetwork.MapNetworkDrive strDriveLetter, strSharePath
  If Err.Number <> 0 Then
    WScript.Echo "Error - " & Err.Description
    Err.Clear
  End If
  On Error Goto 0
End Function


Function WinExec(strExec)
  Dim objExec, eTime
  WinExec = True
  Set objExec = objShell.Exec(strExec)
  'allow the spawned process up to 120 seconds to finish
  eTime = DateAdd("s", 120, Now)
  Do While objExec.Status = 0 And Now < eTime
    WScript.Sleep 1000
  Loop
End Function


Function KillProcess(Process)
  Dim strProcessElement
  If IsArray(Process) Then
    For Each strProcessElement in Process
      KillIndividualProcess(strProcessElement)
    Next
  ElseIf Not Process = "" Then
    KillIndividualProcess(Process)
  End If
End Function


Function KillIndividualProcess(strProcess)
  Dim colProcess, objProcess
  Set colProcess = objWMI.ExecQuery("Select * from Win32_Process")
  For Each objProcess in colProcess
    If LCase(objProcess.Name) = LCase(strProcess) Then
      WScript.Echo "    Killing " & strProcess
      'occasionally one parent process may kill all children leading to an object error
      'so disable error handling temporarily
      On Error Resume Next
      objProcess.Terminate()
      On Error Goto 0
    End If
  Next
End Function

Deploying Windows Photo Gallery 2012

Windows-Photo-Gallery

Though it seems primarily pitched at home users, Microsoft’s Windows Photo Gallery is a useful image management tool even in a professional environment. It’s distributed as part of a suite of software known collectively as Windows Essentials 2012. I don’t understand why these tools aren’t included in Windows itself, but since they were until recently part of the Live family I’m presuming that they were designed to encourage the use of Microsoft’s online services. The apparent home user bias to the setup (a single installer for the whole suite, which downloads on demand, which asks for a Live sign-in, and which alters homepage and search provider) consequently makes Photo Gallery quite difficult to deploy and automate.

Firstly the proper offline installer package is tucked away here on Microsoft’s website.

The next issue is that the silent install switches don’t seem to be officially documented by Microsoft. I was able to piece together the working command line using a TechNet forum post, this blog post about deploying the 2011 version, and some stuff on the MSFN forum.

What held me up for a while is that you can no longer target only the Photo Gallery app – MovieMaker and Photo Gallery are bundled together with 2012. So I arrived at this one-liner which I invoke from a more complex workstation startup script, if it’s needed:

start /wait WLSetup-all.exe /q /r:n /NOToolbarCEIP /NOhomepage /nolaunch /nosearch /AppSelect:MovieMaker /log:%TEMP%\WLEsetup.log

The HKCU registry customizations are pretty much the same as for the 2011 version, so to suppress the EULA and Microsoft account sign-in prompt, and to prevent nags about file type associations you will need to set the following in your login script (this is an extract from my VBScript one, but it’s pretty human-readable):

...
'default preferences for Microsoft Photo Library (agree EULA, don't steal filetype associations, no Windows Live sign-in)
objReg.CreateKey HKEY_CURRENT_USER,"Software\Microsoft\Windows Live"
objReg.CreateKey HKEY_CURRENT_USER,"Software\Microsoft\Windows Live\Common"
objReg.SetStringValue HKEY_CURRENT_USER,"Software\Microsoft\Windows Live\Common","TOUVersion","16.0.0.0"
objReg.CreateKey HKEY_CURRENT_USER,"Software\Microsoft\Windows Live\Photo Gallery"
objReg.SetDWORDValue HKEY_CURRENT_USER,"Software\Microsoft\Windows Live\Photo Gallery","SignInRemindersLeft","0"
objReg.CreateKey HKEY_CURRENT_USER,"Software\Microsoft\Windows Live\Photo Gallery\Library"
arrStringValues = Array(".WDP",".BMP",".JFIF",".JPEG",".JPE",".JPG",".PNG",".TIF",".DIB",".TIFF",".ICO")
objReg.SetMultiStringValue HKEY_CURRENT_USER,"Software\Microsoft\Windows Live\Photo Gallery\Library", "DontShowAssociationsDialogExtensions", arrStringValues
...

To install on a Windows 8 workstation you’ll need the .NET Framework 3.5 “feature” to be installed, which isn’t there by default (Control Panel > Programs > Uninstall a program > Turn Windows Features on or off). This is problematic if you’re using WSUS – the attempt to download the update will fail with error 0x800f0906. Microsoft have an MSDN article about this, but the prescribed fix of using DISM to fetch the feature from the install media didn’t work for me on Windows 8 Enterprise. I had to remove my PC from an OU which inherits WSUS settings, run gpupdate /force and then try again, this time successfully.

In my organization, the requirement for Photo Gallery is for users to interact with a centralized image library. This is stored on a Windows Server 2008 R2 server, and I discovered that I could not add this folder to the Pictures library unless it was indexed on the server side (well, not without enabling offline folders – which I don’t want). The relevant information on this topic can be found in this TechNet post. In summary, you need to enable the Windows Search Service on the file server, which is a “Role Service” under the File Services role in Server Manager.

The missing piece of the puzzle so far is how to programmatically add this image repository location to each user’s Pictures library. I found a page about this, though the tools did not seem to actually work. Admittedly it’s a few years old, so maybe there are some more official tools now. More research to follow…

Synology NAS for SME workloads – my design decisions

Synology-RS2212RP+

Background

It all started last year with a humble single-drive DS111 which I bought for home, speculating that I could probably run Serviio on it to stream my movies to my DLNA-enabled Blu-ray player. I managed to compile FFmpeg and its dependencies, install Java and get Serviio running, and it has been golden ever since. Since then I’ve learned a lot more about Linux, packaged many other pieces of software, and reverse-engineered Synology’s repository format to host my own package repository, which at the last count over 5,000 NASes had checked into. The value of the Synology products lies in their great DSM software.

A few months later, at work, I decided to buy an RS411 to replace an appallingly slow FireWire Drobo (sub 10MB/sec!) that one of my colleagues was using to store video production rushes. Using the same four 2TB drives that had been inside the Drobo, it has behaved impeccably – and it is currently in the process of mirroring 3TB to CrashPlan courtesy of my own app package. With that test passed, I decided that Synology was a worthy candidate for a more serious purchase. However, I noticed that there wasn’t much information about their SME products online, so I’m sharing my research here.

The need for 2nd tier storage

I have used a 15K SAS EqualLogic iSCSI storage array for VMware workloads since 2009 but this is quite full. It can’t accommodate the data which I need to migrate from older arrays which are at end of life. This data (most of it flat files) is very much 2nd tier – I need lots of space, but I don’t really care too much about latency or throughput. It’s also predominantly static data. I do need at least some additional VMware storage, so I can use vMotion to decant and re-arrange VMs on other storage arrays. A larger Synology rack mount NAS therefore presents itself as a very good value way of addressing this need, while keeping the risk of failure acceptably low.

Which model?

The choice is actually pretty easy. A redundant power supply is fairly mandatory in business since this is the most likely thing to fail after a drive. The unit is under warranty for three years, but can you really survive for the day or two of downtime it would most likely take to get a replacement on site? Once this requirement is considered, there are actually very few models to choose from – just three in fact: the RS812RP+ (1U 4 bay), the RS2212RP+ (2U 10 bay) and the RS3412RPxs (2U 10 bay). The 4 bay model isn’t big enough for my needs so that narrows the field. The RS3412RPxs is quite a price hike over the RS2212RP+, but you do get 4 x 1GbE ports with the option of an add-in card with 2 x 10GbE ports. Considering that the EqualLogic PS4000XV unit I’m using for 1st tier storage manages fine with 2 x 1GbE on each controller I think this is a little overkill for my needs and besides, I don’t have any 10GbE switches (more on bandwidth later). Other physical enhancements are a faster CPU, ECC server RAM, and the ability to add two InfiniBand-connected RS1211 (2U 12 bay) expansion units rather than one.

Synology now offer VMware certified VAAI support from DSM 4.1 on the xs models (currently in beta). They actually support more VAAI primitives than EqualLogic, including freeing up space on a thin provisioned LUN when blocks are deleted. Dell EqualLogic brazenly advertised this as a feature in a vSphere 4.0 marketing PDF back in 2009 and I had to chase tech support for weeks to discover that it was “coming soon”. To date this functionality is still missing. The latest ETA is that it will ship with EqualLogic firmware 6.00 whenever that arrives. Though this is a software feature, Synology are using it to differentiate the more expensive products. More CPU is required during these VAAI operations, though blogger John Nash suggests that it isn’t much of an overhead.

If you need high performance during cloning and copying operations, or are considering using a Synology NAS as your 1st tier storage then perhaps you should consider the xs range.

Which drives?

The choices are bewildering at first. Many of the cheapest drives are the ‘green’ ones which typically spin at 5,400RPM. Their performance won’t be as good as 7,200RPM models, and they also tend to have more aggressive head parking and spindown timer settings in their firmware. Western Digital drives are notorious for this, to the extent that they suffer dramatically shortened lifespans if the behaviour is not disabled. Disabling it is tedious and requires a PC and a DOS boot disk – and making a bootable MS-DOS USB key can try the patience of even the calmest person!

UPDATE – it seems from the Synology Forum that DSM build 3.2-1922 and later will automatically disable the idle timer on these drives. You can check the status by running this one-liner while logged into SSH as root:

for d in `/usr/syno/bin/synodiskport -sata` ; do echo "*** /dev/$d ***"; /usr/syno/bin/syno_disk_ctl --wd-idle -g /dev/$d; done

You can force the disabling of the timer with that same tool:

/usr/syno/bin/syno_disk_ctl --wd-idle -d /dev/sda
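
If you want to disable the timer on every installed drive in one go, the same loop used for the status check above should work (a sketch only, reusing the -d switch from the command above):

for d in `/usr/syno/bin/synodiskport -sata` ; do echo "*** /dev/$d ***"; /usr/syno/bin/syno_disk_ctl --wd-idle -d /dev/$d; done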

The next choice is between Enterprise class and Desktop class drives. This is quite subjective, because for years we have been taught that only SCSI/SAS drives were sufficiently reliable for continuous use. Typically the Enterprise class drives carry a 5 year manufacturer warranty and the Desktop ones 3 years. It often takes a call to the manufacturer’s customer service helpline to determine the true warranty cover for a particular drive, since retailers frequently misreport this detail on their websites. The Enterprise ones are also significantly more expensive (£160 vs £90 for a 2TB drive).

There is one additional feature on Enterprise class drives – TLER (Time Limited Error Recovery). There are a few articles about how this relates to RAID arrays and it proved quite a distraction while researching this NAS purchase. The concept is this: when a drive encounters a failed block during a write operation there is a delay while the drive remaps the physical block to one of its spare blocks. On a desktop PC your OS would hang for a moment until the drive responds that the write was successful. A typical hardware RAID controller is intolerant of a delay here and will potentially fail the entire drive, even though only a single block is faulty. TLER allows the drive to inform the controller that the write is delayed, but not failed. The side effect of not having TLER support would be frequent drive rebuilds from parity, which can be very slow when you’re dealing with 2TB disks – not to mention the performance impact while they run. The good news, though, is that the Synology products use a Linux software RAID implementation which is tolerant of these delays, so TLER support becomes irrelevant.

Given what’s at stake it’s highly advisable to select drives which are on the Synology HCL. The NAS may be able to overcome particular drive firmware quirks in software (like the idle timers on some models), and a drive’s presence on the list also means that Synology have tested it thoroughly. I decided to purchase 11 drives so I would have one spare on site, ready for insertion directly after a failure. RAID parity can take a long time to rebuild, so you don’t want to be waiting for a replacement. Bear in mind that returning a drive under a manufacturer warranty could take a week or two.

Apparently one of the value-added things with enterprise grade SAN storage is that individual drives will be selected from different production batches to minimize the chances of simultaneous failures. This does remain a risk for NAS storage, and all the RAID levels in the world cannot help you in that scenario.

My Order

  • Bare RS2212RP+ 10 bay rackmount NAS (around £1,700 – all prices are excluding VAT).
  • 11 x Hitachi Deskstar HDS723020BLA642 drives including 3 year manufacturer warranty (around £1,000).
  • The unit has 1GB of DDR3 RAM soldered on the mainboard, with an empty SODIMM slot which I would advise populating with a 2GB Kingston RAM module, part number KVR1066D3S8S7/2G (a mere £10), just in case you want to install additional software packages later.
  • Synology 2U sliding rack rail kit, part number 13-082URS010 (£80). The static 1U rail kit for the RS411 was pretty poorly designed but this one is an improvement. It is still a bit time consuming to set up compared to modern snap-to-fix rails from the likes of Dell and HP.

Setup – How to serve the data

A Synology NAS offers several ways to actually store the data:

  • Using the NAS as a file server in its own right, using SMB, AFP, or NFS
  • iSCSI at block level (dedicated partitions)
  • iSCSI at file level (more flexible, but a small performance hit)

For my non-critical RS411, using it as an Active Directory integrated file server has proved to be very reliable. However, for this new NAS I needed LUNs for VMware. I could perhaps have defined a Disk Group and dedicated some storage to iSCSI and some to a normal ext4 volume. I experimented with iSCSI, but there are several problems:

  • Online research reveals that there have been some significant iSCSI reliability issues on Synology, though admittedly these may date from when DSM first introduced iSCSI functionality.
  • To use iSCSI multipathing on Synology the two NAS network interfaces must be on separate subnets. This is at odds with the same-subnet approach of Dell EqualLogic storage, which the rest of my VMware infrastructure uses. This would mean that hosts using iSCSI storage would need additional iSCSI initiators, significantly increasing complexity.
  • It is customary to isolate iSCSI traffic onto a separate storage network infrastructure, but the Synology NAS does not possess a separate management NIC. So if it is placed on a storage LAN it will not be easily managed/monitored/updated, nor even be able to send email alerts when error conditions arise. This was a show-stopper for me. I think Synology ought to consider at least allowing management traffic to use a different VLAN even if it must use the same physical NICs. However, VLANing iSCSI traffic is something most storage vendors advise against.

All of which naturally leads us on to NFS, which is very easy to configure and well supported by VMware. Multipathing isn’t possible for a single NFS share, so the best strategy is to bond the NAS network interfaces into a link aggregation group (‘Route Based on IP Hash’). This does mean, however, that no hypervisor’s connection to the NFS storage IP can use more than 1Gbps of bandwidth. This gives a theoretical peak throughput of 1024/8 = 128MB/sec. Considering that each individual SATA hard disk in the array is capable of roughly this same sustained transfer rate, that figure is somewhat disappointing. The NAS can deliver much faster speeds than this but is restricted by its 1GbE interfaces. Some NFS storage appliances mitigate this limitation to a degree by allowing you to configure multiple storage IP addresses. You could then split your VMs between several NFS shares, each with a different destination IP which could be routed down a different physical link. In this way a single hypervisor could saturate both links. Not so for the Synology NAS, unfortunately.

If raw performance is important to you, perhaps you should reconsider the xs series’ optional 2 x 10GbE add-in card. Remember, though, that the stock xs config (4 x 1GbE) will still suffer from the same 1GbE cap on any single NFS connection. It should be noted, however, that multiple hypervisors accessing this storage will each be able to achieve this transfer rate, up to the maximum performance of the RAID array (around 200MB/sec for an RS2212RP+ according to the official performance figures, rising to around 10 times that figure for the xs series – presumably with the 10GbE add-in card).

As per this blog post, VMware will preferentially route NFS traffic down the first VMkernel port that is on the same subnet as the target NFS share, if one exists; if not, it will connect using the management interface via the default gateway. So adding more VMkernel ports won’t help. My VMware hypervisor servers use 2 x GbE for management traffic, 2 x GbE for VM network traffic, and 2 x GbE for iSCSI. Though I had enough spare NICs, connecting another pair of interfaces solely for NFS seemed overkill, especially since I know that the IOPS requirement for this storage is low. I was also running out of ports on the network patch panel in that cabinet. I did test the performance using dedicated interfaces but, unsurprisingly, I found it no better. In theory it’s a bad idea to use management network pNICs for anything else, since that could slow vMotion operations or in extreme scenarios even prevent remote management. However, vMotion traffic is also constrained by the same limitation of the ‘Route Based on IP Hash’ link aggregation policy – i.e. no single connection can saturate more than one physical link (1GbE). In my environment I’m unlikely to be migrating multiple VMs by vMotion concurrently, so I have decided to use the management connections for NFS traffic too.
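
For reference, mounting the resulting NFS export on an ESXi host from its local shell looks something like this – the IP address, export path and datastore label here are purely illustrative, and the second command just lists the mounts to confirm:

esxcfg-nas -a -o 10.0.10.50 -s /volume1/nfs-vmware synology-nfs
esxcfg-nas -l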

Benchmarking and RAID level

I found the simplest way to benchmark the transfer rates was to perform vMotion operations while keeping the Resource Monitor app open in DSM, and then referring to Cacti graphs of my switch ports to sanity-check the results. The network switch is a Cisco 3750 two-unit stack, with the MTU temporarily configured to a maximum of 9000 bytes.

  • Single NFS share transfer rates reading and writing were both around 120MB/sec at the stock MTU setting of 1500 (around 30% CPU load). That’s almost full GbE line speed.
  • The same transfers occurred using only 15% CPU load with jumbo frames enabled, though the actual transfer rates worsened to around 60-70MB/sec. Consequently I think jumbo frames are pointless here.
  • The CPU use did not significantly increase between RAID5 and RAID6.

I decided therefore to keep an MTU of 1500 and to use RAID6 since this buys a lot of additional resilience. The usable capacity of this VMware ready NAS is now 14TB. It has redundant power fed from two different UPS units on different power circuits, and it has aggregated network uplinks into separate switch stack members. All in all that’s pretty darn good for £2,800 + VAT.
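
If you do experiment with jumbo frames yourself, you can confirm the MTU the NAS is actually using from an SSH session (assuming DSM presents the aggregated interfaces as bond0):

ifconfig bond0 | grep -i mtu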

Building the Intel EMGD display driver for Sony VAIO P with fully working backlight control

Background

The Windows 7 driver for the GMA 500 GPU has not been updated for nearly two years now (v2030 from September 2010). According to this document, Intel will only support and continue to maintain the EMGD driver going forward. This is a driver for Linux and Windows aimed primarily at embedded systems, but unfortunately its target audience is system manufacturers and not end users (it’s distributed as a driver build kit). You need quite a detailed technical understanding of the hardware you’re creating the driver for, in particular the LCD panel specifications. Sony are unlikely to provide new driver builds for a three year old laptop, and a new driver will most likely be needed for Windows 8 compatibility – I seem to remember reading that the Windows 8 Release Preview will not accept the GMA 500 Windows 7 driver. The EMGD driver also has one big advantage in that it includes an OpenGL ICD, which the Windows 7 GMA 500 driver has always lacked.

Thanks in part to ‘viewtiful’ on the Pocketables forum having shared the DTD details for the 1600×768 panel, several people (myself included) had built prior versions of EMGD for the VAIO P, but no one was able to get the LCD backlight working correctly. The onscreen control provided by Sony Shared Library has 8 different levels, and it would turn off the backlight completely at levels 1-3. Experimenting with building new drivers is an extremely slow and painstaking process, especially when you’re not very clear on which values may need tweaking, but I’m pleased to say that I finally got all 8 brightness levels working this evening. And rather than keeping that knowledge secret, I’m sharing it here so that other Vaio users can build their own EMGD drivers for future release versions.

Download

Here is my pre-built driver:
Intel_EMGD_1.16_VaioP_Windows7_v3228_patters.zip

Method

Download and install Intel EMGD (version 1.16 from November 2012 is the latest at the time of writing). Launch the emgd-ced shortcut it has created on your desktop. This will start the java builder application.

Firstly create a new DTD called 1600×768@60Hz with settings as shown. Don’t worry about the greyed out values at the bottom of the screenshot – they’re not used.

New-dtd-for-vaio-p

Next create a new configuration called 16x7Sony like so:

EMGD-vaio-p-config1

Define the LVDS port name as MID (the name the regular GMA 500 driver uses), select the options as shown, taking care to select the custom DTD you just created:

EMGD-vaio-p-config2

In that same screen, click the Attributes button and set the Inverter Frequency to 300. Many thanks to Kirk over at the Intel Embedded Communities forum for helping me to home in on this being the crucial setting. There are several mentions in the EMGD documentation of a reference value of 20300, which turns out to be incorrect for the VAIO P’s screen. I spent hours searching high and low in vain for a datasheet for this LCD panel (a Toshiba LT080EE04000). Eventually I discovered on the Notebook Review forum that a user called jeonghun had created an EMGD 1.10 build, crucially with all eight backlight levels working, for the VAIO X laptop which also has a GMA 500, though with a different 1366×768 panel. Since at this point I knew what to look for, I opened his driver inf and discovered the magic value of 300. I took a guess that the motherboard-to-LCD circuitry would probably be similar for both models of VAIO.

EMGD-vaio-p-config3-attributes

Click Finish to close that window, and now click on Flat Panel Settings. The VAIO P has an 18-bit panel. I can’t remember exactly, but I think all these values are the defaults:

EMGD-vaio-p-config4-panel-settings

Now that we’re finished with the LVDS settings (the built-in screen), click Next to move on to configuring the sDVO external monitor connection. Name it Monitor to keep it consistent with the Windows 7 GMA 500 driver. We don’t need any customization other than what is shown in this screenshot:

EMGD-vaio-p-config5-sDVO

The final screen after this relates to building a video BIOS, which we aren’t interested in, so leave these settings at their defaults. Once finished, create a new package called SonyVaioP:

EMGD-vaio-p-package

Finally while selecting the package, click on Generate Installation in the toolbar. This will create your zipped driver which can be found in:
C:\IEMGD\IEMGD_1_16\workspace\installation\SonyVaioP.pkg_installation\IEMGD_HEAD_Windows7

CrashPlan PROe Server package for Synology NAS

CrashPlan is a popular online backup solution, with most people using it to protect their data in the Cloud. However, by licensing CrashPlan PROe server you can be that Cloud and act as the storage provider for other client machines running CrashPlan PROe.

I was recently contracted to implement this on Synology hardware for North Bay Technologies, an IT services company in San Francisco. Instead of undertaking a careful manual install that would be difficult to maintain in future, I decided to go one better – to build a package which integrates properly with Synology DSM. I then back-merged most of the changes into the existing CrashPlan client package scripts so everything is as consistent as possible. It was agreed that I would also publish this to the community, so here it is!

CrashPlan-PROe-dashboard

I should stress at this point that although this package could technically install on Synology products with ARM or QorIQ processors, you should only use this on a model with an Intel CPU. Ideally you should equip it with more than 1GB of RAM too, because the application requires 1GB all for itself. The package repo will not advertise it to ARM systems, because they have far too little available RAM.

As with the CrashPlan client packages I have made, I have been careful to comply with Code 42 Software’s stipulation that no one redistributes their work.

CrashPlan-PROe-server-info

 

Installation

  • This package is for Intel CPUs only. It will work on an unmodified NAS, no hacking or bootstrapping required.
  • More than 1GB of RAM is recommended.
  • You will need to install my Java SE for Embedded package first. Read the instructions on that page carefully too.
  • In the User Control Panel in DSM, enable the User Homes service.
  • Purchase your Master License Key and licences (or obtain a trial key) from crashplan.com, download the PROe Server installer for Linux, and save it in the public shared folder on your NAS – you should have created this folder when you installed the Java package (see the example after this list).
  • In DSM’s Package Center go to Settings -> Package Sources and add my package repository URL, which is http://packages.pcloadletter.co.uk, then install the CrashPlan PROe Server package directly from Package Center.
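
For example, from an SSH session you could fetch the installer straight into the share like this (3.5.5 is shown – substitute whichever PROe Server version you are licensed for, and adjust the path if your ‘public’ share lives on a different volume):

cd /volume1/public
wget http://download.crashplan.com/installs/proserver/3.5.5/CrashPlanPROServer_3.5.5_Linux.tgz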
 

Notes

  • The package expects the end user to have separately downloaded the CrashPlan PROe Server installer for Linux, and it presents them with the official EULA during installation.
  • For details of TCP ports used, to help you set up firewalling and/or port forwarding please consult the requirements document.
  • Once running, CrashPlan PROe Server is configured via a web dashboard at https://yourNasIP:4285/console/. This link can be found in the package info screen in Package Center in Synology DSM when the package is running.
  • Full support documentation is available here.
  • The DSM Package Center upgrade functionality allows you to move between my package versions without losing settings or data, but if you’re moving to a new CrashPlan PROe Server version you will need to do that manually via the admin web app, using the Linux downloads (with file extension .upgrade) available from Code 42 Software. Depending on how up to date your version is, you may need to update incrementally through several versions. Before you apply a Code 42 update package, you should manually install the latest Synology package for the specific PROe Server version you’re currently running – this ensures the update scripts are handled correctly. So, for example, if you’re upgrading from 3.5.4 to 3.5.5 you should manually install cpproeserver3.5.4-merged-0020.spk over the top first. You can find up to date package versions for each PROe Server build here:
    cpproeserver3.2.1.2-merged-0020.spk
    cpproeserver3.3.0.2-merged-0020.spk
    cpproeserver3.3.0.3-merged-0020.spk
    cpproeserver3.3.0.4-merged-0020.spk
    cpproeserver3.4.1-merged-0020.spk
    cpproeserver3.4.1.5-merged-0020.spk
    cpproeserver3.5.1.1-merged-0020.spk
    cpproeserver3.5.3.2-merged-0020.spk
    cpproeserver3.5.4-merged-0020.spk
    cpproeserver3.5.5-merged-0020.spk
  • The engine daemon script checks the amount of system RAM and scales the Java heap size appropriately (up to the default maximum of 1024MB). This can be overridden in a persistent way – useful if you are hosting very large backup sets – by editing /volume1/@appstore/cpproeserver/syno_package.vars (see the example after these notes).
  • As with my other Synology packages, the daemon user account is created with a randomized password (generated here using openssl rand), following the example of the Transmission package. DSM Package Center runs as the root user, so my script starts the package using an su command. This means that you can change the password yourself and CrashPlan will still work.
  • The default location for saving backup data is set to /volume1/cpproeserver (where /volume1 is your primary storage volume) to eliminate the chance of the backup archives being destroyed accidentally by uninstalling the package.
  • The package supports upgrading to future versions while preserving the machine identity, logs, login details, and cache.
  • The log which is displayed in the package’s Log tab is actually the activity history. If you’re trying to troubleshoot an issue you may need to use an SSH session to inspect the more detailed log files, which are stored in /volume1/cpproeserver/log.
  • I’m not sure if it works for the PROe products, but I would really appreciate it if you could use this affiliate link when purchasing your licences (you may need to browse to the PROe section of the website using the links in the footer of that page). If this package saves you the several days worth of work I put into making it, please also consider donating using the PayPal button on the right hand side of the page. Thanks!
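
As a minimal example of the heap override mentioned in the notes above, uncomment the USR_MAX_HEAP line in syno_package.vars and set it to the value you want (2048M here is purely illustrative – only go beyond the default if the NAS actually has the RAM to spare), then restart the package from Package Center:

USR_MAX_HEAP=2048M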
 

Package scripts

For information, here are the package scripts so you can see what it’s going to do. You can get more information about how packages work by reading the Synology Package wiki.

installer.sh

#!/bin/sh

#--------CRASHPLAN PROe server installer script
#--------package maintained at pcloadletter.co.uk

DOWNLOAD_PATH="http://download.crashplan.com/installs/proserver/CP_VER"
DOWNLOAD_FILE="CrashPlanPROServer_CP_VER_Linux.tgz"
DOWNLOAD_URL="${DOWNLOAD_PATH}/${DOWNLOAD_FILE}"
TGZ_FILE="CrashPlanPROServer.tgz"
#remove file extension
DOWNLOAD_FILE="`echo ${DOWNLOAD_FILE} | sed -e 's/.tgz$//'`"
EXTRACTED_FOLDER="${DOWNLOAD_FILE}"
DAEMON_USER="`echo ${SYNOPKG_PKGNAME} | awk {'print tolower($_)'}`"
DAEMON_PASS="`openssl rand -base64 12 2>/dev/null`"
DAEMON_ID="${SYNOPKG_PKGNAME} daemon user"
DAEMON_HOME="/var/services/homes/${DAEMON_USER}"
OPTDIR="${SYNOPKG_PKGDEST}/proserver/server"
VARS_FILE="${OPTDIR}/.install.vars"
ENGINE_SCRIPT="proserver"
SYNO_CPU_ARCH="`uname -m`"
[ "${SYNO_CPU_ARCH}" == "x86_64" ] && SYNO_CPU_ARCH="i686"
NATIVE_BINS_URL="http://packages.pcloadletter.co.uk/downloads/crashplan-native-${SYNO_CPU_ARCH}.tgz"   
NATIVE_BINS_FILE="`echo ${NATIVE_BINS_URL} | sed -r "s%^.*/(.*)%\1%"`"
INSTALL_FILES="${NATIVE_BINS_URL}"
TEMP_FOLDER="`find / -maxdepth 2 -name '@tmp' | head -n 1`"
#this is where the user data will go, so it persists after a package uninstall
MANIFEST_FOLDER="/`echo $TEMP_FOLDER | cut -f2 -d'/'`/${DAEMON_USER}"
VARLOGDIR=${MANIFEST_FOLDER}/log
LOG_FILE="${VARLOGDIR}/com_backup42_app.log.0"
UPGRADE_FILES=""
UPGRADE_FOLDERS="activemq-data conf db keys"

source /etc/profile
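#locate the 'public' shared folder by parsing the Samba configuration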
PUBLIC_FOLDER="`cat /usr/syno/etc/smb.conf | sed -r '/\/public$/!d;s/^.*path=(\/volume[0-9]{1,4}\/public).*$/\1/'`"

  
preinst ()
{
  if [ -z ${PUBLIC_FOLDER} ]; then
    echo "A shared folder called 'public' could not be found - note this name is case-sensitive. "
    echo "Please create this using the Shared Folder DSM Control Panel and try again."
    exit 1
  fi
  
  if [ -z ${JAVA_HOME} ]; then
    echo "Java is not installed or not properly configured. JAVA_HOME is not defined. "
    echo "Download and install the Java Synology package from http://wp.me/pVshC-z5"
    exit 1
  fi
  
  if [ ! -f ${JAVA_HOME}/bin/java ]; then
    echo "Java is not installed or not properly configured. The Java binary could not be located. "
    echo "Download and install the Java Synology package from http://wp.me/pVshC-z5"
    exit 1
  fi
  
  #is the User Home service enabled?
  UH_SERVICE=maybe
  synouser --add userhometest Testing123 "User Home test user" 0 "" ""
  UHT_HOMEDIR=`cat /etc/passwd | sed -r '/User Home test user/!d;s/^.*:User Home test user:(.*):.*$/\1/'`
  if echo $UHT_HOMEDIR | grep '/var/services/homes/' > /dev/null; then
    if [ ! -d $UHT_HOMEDIR ]; then
      UH_SERVICE=false
    fi
  fi
  synouser --del userhometest
  #remove home directory (needed since DSM 4.1)
  [ -e /var/services/homes/userhometest ] && rm -r /var/services/homes/userhometest
  if [ "${UH_SERVICE}" == "false" ]; then
    echo "The User Home service is not enabled. Please enable this feature in the User control panel in DSM."
    exit 1
  fi
  
  CP_BINARY_FOUND=
  [ -f ${PUBLIC_FOLDER}/${DOWNLOAD_FILE}.tgz ] && CP_BINARY_FOUND=true
  [ -f ${PUBLIC_FOLDER}/${DOWNLOAD_FILE}.tar ] && CP_BINARY_FOUND=true
  [ -f ${PUBLIC_FOLDER}/${DOWNLOAD_FILE}.tar.tar ] && CP_BINARY_FOUND=true
  [ -f ${PUBLIC_FOLDER}/${DOWNLOAD_FILE}.gz ] && CP_BINARY_FOUND=true
  
  if [ -z ${CP_BINARY_FOUND} ]; then
    echo "CrashPlan PROe server Linux installer not found. "
    echo "I was expecting the file ${PUBLIC_FOLDER}/${DOWNLOAD_FILE}.tgz "
    echo "Please visit crashplan.com, download the installer from ${DOWNLOAD_URL} "
    echo "and place it in the 'public' shared folder on your NAS."
    exit 1
  fi

  cd ${TEMP_FOLDER}
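  #fetch the architecture-specific native binaries, falling back to a copy placed in the public share if the download fails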
  for WGET_URL in ${INSTALL_FILES}
  do
    WGET_FILENAME="`echo ${WGET_URL} | sed -r "s%^.*/(.*)%\1%"`"
    [ -f ${TEMP_FOLDER}/${WGET_FILENAME} ] && rm ${TEMP_FOLDER}/${WGET_FILENAME}
    wget ${WGET_URL}
    if [[ $? != 0 ]]; then
      if [ -d ${PUBLIC_FOLDER} ] && [ -f ${PUBLIC_FOLDER}/${WGET_FILENAME} ]; then
        cp ${PUBLIC_FOLDER}/${WGET_FILENAME} ${TEMP_FOLDER}
      else     
        echo "There was a problem downloading ${WGET_FILENAME} from the official download link, "
        echo "which was \"${WGET_URL}\" "
        echo "Alternatively, you may download this file manually and place it in the 'public' shared folder. "
        exit 1
      fi
    fi
  done
  
  exit 0
}


postinst ()
{
  VAROPTDIR=${MANIFEST_FOLDER}/data
  VARLOGDIR=${MANIFEST_FOLDER}/log
  ETCDIR=${OPTDIR}/bin
  INITDIR=${OPTDIR}/bin
  RUNLVLDIR=${OPTDIR}/bin
  
  #create daemon user
  synouser --add ${DAEMON_USER} ${DAEMON_PASS} "${DAEMON_ID}" 0 "" ""
  
  #save the daemon user's homedir as variable in that user's profile
  #this is needed because new users seem to inherit a HOME value of /root which they have no permissions for.
  su - ${DAEMON_USER} -s /bin/sh -c "echo export HOME=\'${DAEMON_HOME}\' >> .profile"

  #extract CPU-specific additional binaries
  mkdir ${SYNOPKG_PKGDEST}/bin
  cd ${SYNOPKG_PKGDEST}/bin
  tar xzf ${TEMP_FOLDER}/${NATIVE_BINS_FILE} && rm ${TEMP_FOLDER}/${NATIVE_BINS_FILE}

  mkdir -p ${OPTDIR}
  mkdir -p ${INITDIR}
  mkdir -p ${RUNLVLDIR}
  mkdir -p ${VAROPTDIR}
  mkdir -p ${VARLOGDIR}
  cd ${PUBLIC_FOLDER}
  
  #extract CrashPlan Linux installer (Web browsers love to interfere with .tar.gz files)
  if [ -f ${DOWNLOAD_FILE}.tgz ]; then
    #Firefox seems to be the only browser that leaves it alone
    tar xzf ${DOWNLOAD_FILE}.tgz
  elif [ -f ${DOWNLOAD_FILE}.gz ]; then
    #Chrome
    tar xzf ${DOWNLOAD_FILE}.gz
  elif [ -f ${DOWNLOAD_FILE}.tar ]; then
    #Safari
    tar xf ${DOWNLOAD_FILE}.tar
  elif [ -f ${DOWNLOAD_FILE}.tar.tar ]; then
    #Internet Explorer
    tar xzf ${DOWNLOAD_FILE}.tar.tar
  fi
  
  mkdir -p ${OPTDIR}/content-custom
  mkdir -p ${OPTDIR}/installs
  mkdir ${VAROPTDIR}/backupArchives
  mkdir ${VAROPTDIR}/backupCache
  mkdir ${VAROPTDIR}/dumps
  chown -R ${DAEMON_USER} ${MANIFEST_FOLDER}
  
  #extract nested tgz archive
  cd ${OPTDIR}
  tar xozf "${PUBLIC_FOLDER}/${EXTRACTED_FOLDER}/${TGZ_FILE}"
  
  echo "#uncomment to expand Java max heap size beyond prescribed value of 1024M (will survive upgrades)" > ${SYNOPKG_PKGDEST}/syno_package.vars
  echo "#USR_MAX_HEAP=1024M" >> ${SYNOPKG_PKGDEST}/syno_package.vars
  echo >> ${SYNOPKG_PKGDEST}/syno_package.vars
  
  #create a valid identity file if there is no existing GUID
  GUID=
  if [ -f ${VAROPTDIR}/.identity ] ; then
    . ${VAROPTDIR}/.identity
  fi
  if [ "x${GUID}" == "x" ]; then
    echo -n "GUID=" > ${VAROPTDIR}/.identity
    java -cp "${OPTDIR}/lib/com.backup42.app.jar" com.code42.utils.UniqueId >> ${VAROPTDIR}/.identity
    . ${VAROPTDIR}/.identity
    if [ "x${GUID}" == "x" ] ; then
      echo "Failed to create valid server identity. Identity Path: ${VAROPTDIR}/.identity"
      exit 1
    fi
  fi
 
  #amend entries in default server config file
  sed -i "s%<OPT>%${OPTDIR}%" ${OPTDIR}/conf/conf_proe.properties
  sed -i "s%<VAROPT>%${VAROPTDIR}%" ${OPTDIR}/conf/conf_proe.properties
  sed -i "s%<VARLOGDIR>%${VARLOGDIR}%" ${OPTDIR}/conf/conf_proe.properties
  
  #save install variables which Crashplan expects its own installer script to create
  echo "" > ${VARS_FILE}
  echo "OPTDIR=${OPTDIR}" >> ${VARS_FILE}
  echo "VAROPTDIR=${VAROPTDIR}" >> ${VARS_FILE}
  echo "VARLOGDIR=${VARLOGDIR}" >> ${VARS_FILE}
  echo "ETCDIR=${ETCDIR}" >> ${VARS_FILE}
  echo "INITDIR=${INITDIR}" >> ${VARS_FILE}
  echo "RUNLVLD=${RUNLVLDIR}" >> ${VARS_FILE}
  echo "INSTALLDATE=`date +%Y%m%d`" >> ${VARS_FILE}
  echo "JAVACOMMON=\${JAVA_HOME}/bin/java" >> ${VARS_FILE}

  #remove temp files
  rm -r ${PUBLIC_FOLDER}/${EXTRACTED_FOLDER}
  
  #change owner of CrashPlan folder tree
  chown -R ${DAEMON_USER} ${SYNOPKG_PKGDEST}

  echo "CrashPlan PROe Server has been installed. When you start the package a few moments of first-time initialization "
  echo "are needed before the management application will be available in your web browser. You can check the Log tab "
  echo "to discover when it has fully started. "
  echo "http://localhost:4280/console "
  echo "https://localhost:4285/console "
  echo "Please note that your clients will communicate with the server on TCP port 4282."
  
  exit 0
}


preuninst ()
{
  #make sure engine is stopped
  su - ${DAEMON_USER} -s /bin/sh -c "${OPTDIR}/bin/${ENGINE_SCRIPT} stop"
  sleep 2
  
  exit 0
}


postuninst ()
{
  if [ -f ${SYNOPKG_PKGDEST}/syno_package.vars ]; then
    source ${SYNOPKG_PKGDEST}/syno_package.vars
  fi
  
  if [ "${LIBFFI_SYMLINK}" == "YES" ]; then
    rm /lib/libffi.so.5
  fi

  #if it doesn't exist, but is still a link then it's a broken link and should also be deleted
  if [ ! -e /lib/libffi.so.5 ]; then
    [ -L /lib/libffi.so.5 ] && rm /lib/libffi.so.5
  fi

  #remove daemon user
  synouser --del ${DAEMON_USER}
  
  #remove daemon user's home directory (needed since DSM 4.1)
  [ -e /var/services/homes/${DAEMON_USER} ] && rm -r /var/services/homes/${DAEMON_USER}
  
  exit 0
}

preupgrade ()
{
  #make sure engine is stopped
  su - ${DAEMON_USER} -s /bin/sh -c "${OPTDIR}/bin/${ENGINE_SCRIPT} stop"
  sleep 2
  
  #if config data exists back it up
  if [ -d ${OPTDIR}/keys ]; then
    mkdir -p ${SYNOPKG_PKGDEST}/../${DAEMON_USER}_data_mig
    for FOLDER_TO_MIGRATE in ${UPGRADE_FOLDERS}; do
      if [ -d ${OPTDIR}/${FOLDER_TO_MIGRATE} ]; then
        mv ${OPTDIR}/${FOLDER_TO_MIGRATE} ${SYNOPKG_PKGDEST}/../${DAEMON_USER}_data_mig
      fi
    done
  fi

  exit 0
}


postupgrade ()
{
  #use the migrated config data from the previous version
  if [ -d ${SYNOPKG_PKGDEST}/../${DAEMON_USER}_data_mig/keys ]; then
    for FOLDER_TO_MIGRATE in ${UPGRADE_FOLDERS}; do
    if [ -d ${SYNOPKG_PKGDEST}/../${DAEMON_USER}_data_mig/${FOLDER_TO_MIGRATE} ]; then
      cp -R ${SYNOPKG_PKGDEST}/../${DAEMON_USER}_data_mig/${FOLDER_TO_MIGRATE} ${OPTDIR}
    fi
    done
    rmdir ${SYNOPKG_PKGDEST}/../${DAEMON_USER}_data_mig

    #make log entry
    TIMESTAMP="`date +%x` `date +%I:%M%p`"
    echo "I ${TIMESTAMP} Synology Package Center updated ${SYNOPKG_PKGNAME} to version ${SYNOPKG_PKGVER}" >> ${LOG_FILE}
    
    #daemon user has been deleted and recreated so we need to reset ownership (new UID)
    chown -R ${DAEMON_USER} ${SYNOPKG_PKGDEST}
  fi
  
  exit 0
}
 

start-stop-status.sh

#!/bin/sh

#--------CRASHPLAN PROe server start-stop-status script
#--------package maintained at pcloadletter.co.uk

DAEMON_USER="`echo ${SYNOPKG_PKGNAME} | awk {'print tolower($_)'}`"
DAEMON_HOME="/var/services/homes/${DAEMON_USER}"
OPTDIR="${SYNOPKG_PKGDEST}/proserver/server"
TEMP_FOLDER="`find / -maxdepth 2 -name '@tmp' | head -n 1`"
MANIFEST_FOLDER="/`echo $TEMP_FOLDER | cut -f2 -d'/'`/${DAEMON_USER}"
VARLOGDIR=${MANIFEST_FOLDER}/log
LOG_FILE="${VARLOGDIR}/history.log.0"
ENGINE_SCRIPT="proserver"
APP_NAME="CPServer"
SCRIPTS_TO_EDIT="${ENGINE_SCRIPT} proservermonitor"
ENGINE_CFG="${ENGINE_SCRIPT}"
LIBFFI_SO_NAMES="5 6" #armada370 build of libjnidispatch.so is newer, and uses libffi.so.6
CFG_PARAM="JAVA_MEM_ARGS"
#note that the vars in the next two string values are escaped for evaluation later on
JAVA_MEM_ARGS="-Xss128k -Xms\${JAVA_MIN_HEAP}m -Xmx\${JAVA_MAX_HEAP}m"
JAVA_GC_ARGS="-XX:+DisableExplicitGC -XX:+UseAdaptiveGCBoundary -XX:PermSize=\${JAVA_MIN_HEAP}m -XX:MaxPermSize=\${JAVA_MIN_HEAP}m"
source ${OPTDIR}/.install.vars

JAVA_MIN_HEAP=`grep "^${CFG_PARAM}=" "${OPTDIR}/bin/${ENGINE_CFG}" | sed -r "s/^.*-Xms([0-9]+)[Mm] .*$/\1/"` 
SYNO_CPU_ARCH="`uname -m`"

case $1 in
  start)
    #set the current timezone for Java so that log timestamps are accurate
    #we need to use the modern timezone names so that Java can figure out DST 
    SYNO_TZ=`cat /etc/synoinfo.conf | grep timezone | cut -f2 -d'"'`
    SYNO_TZ=`grep "^${SYNO_TZ}" /usr/share/zoneinfo/Timezone/tzname | sed -e "s/^.*= //"`
    grep "^export TZ" ${DAEMON_HOME}/.profile > /dev/null \
     && sed -i "s%^export TZ=.*$%export TZ='${SYNO_TZ}'%" ${DAEMON_HOME}/.profile \
     || echo export TZ=\'${SYNO_TZ}\' >> ${DAEMON_HOME}/.profile
    #check persistent variables from syno_package.vars
    USR_MAX_HEAP=0
    if [ -f ${SYNOPKG_PKGDEST}/syno_package.vars ]; then
      source ${SYNOPKG_PKGDEST}/syno_package.vars
    fi
    USR_MAX_HEAP=`echo $USR_MAX_HEAP | sed -e "s/[mM]//"`

    #create or repair libffi symlink if a DSM upgrade has removed it
    for FFI_VER in ${LIBFFI_SO_NAMES}; do 
      if [ -e ${OPTDIR}/lib/libffi.so.${FFI_VER} ]; then
        if [ ! -e /lib/libffi.so.${FFI_VER} ]; then
          #if it doesn't exist, but is still a link then it's a broken link and should be deleted
          [ -L /lib/libffi.so.${FFI_VER} ] && rm /lib/libffi.so.${FFI_VER}
          ln -s ${OPTDIR}/lib/libffi.so.${FFI_VER} /lib/libffi.so.${FFI_VER}
        fi
      fi
    done

    #fix up some of the binary paths and fix some command syntax for busybox 
    #moved this to start-stop-status from installer.sh because Code42 push updates and these
    #new scripts will need this treatment too
    FIND_TARGETS=
    for TARGET in ${SCRIPTS_TO_EDIT}; do
      FIND_TARGETS="${FIND_TARGETS} -o -name ${TARGET}"
    done
    find ${OPTDIR} \( -name \*.sh ${FIND_TARGETS} \) | while IFS="" read -r FILE_TO_EDIT; do
      if [ -e ${FILE_TO_EDIT} ]; then
        #this list of substitutions will probably need expanding as new CrashPlan updates are released
        sed -i "s%^#!/bin/bash%#!${SYNOPKG_PKGDEST}/bin/bash%" "${FILE_TO_EDIT}"
        sed -i -r "s%(^\s*)nice -n%\1${SYNOPKG_PKGDEST}/bin/nice -n%" "${FILE_TO_EDIT}"
        sed -i -r "s%(^\s*)(/bin/ps|ps) [^\|]*\|%\1/bin/ps w \|%" "${FILE_TO_EDIT}"
        sed -i -r "s%\`ps [^\|]*\|%\`ps w \|%" "${FILE_TO_EDIT}"
        sed -i "s/rm -fv/rm -f/" "${FILE_TO_EDIT}"
        sed -i "s/mv -fv/mv -f/" "${FILE_TO_EDIT}"
      fi
    done

    #an upgrade script that has been launched via the web app will usually have failed until the above
    #changes are made so we need to find it and start it, if it exists
    UPGRADE_SCRIPT=`find ${OPTDIR}/upgrade -name "upgrade.sh"`
    if [ -n "${UPGRADE_SCRIPT}" ]; then
      rm ${OPTDIR}/${ENGINE_SCRIPT}.pid
      SCRIPT_HOME=`dirname $UPGRADE_SCRIPT`

      #make CrashPlan log entry
      TIMESTAMP="`date +%x` `date +%I:%M%p`"
      echo "I ${TIMESTAMP} Synology repairing upgrade in ${SCRIPT_HOME}" >> ${LOG_FILE}

      mv ${SCRIPT_HOME}/upgrade.log ${SCRIPT_HOME}/upgrade.log.old
      chown -R ${DAEMON_USER} ${SYNOPKG_PKGDEST}
      su - ${DAEMON_USER} -s /bin/sh -c "cd ${SCRIPT_HOME} ; . upgrade.sh"
      mv ${SCRIPT_HOME}/upgrade.sh ${SCRIPT_HOME}/upgrade.sh.old
      exit 0
    fi

    #updates may also overwrite our native binaries
    if [ "${SYNO_CPU_ARCH}" != "x86_64" ]; then
      cp -f ${SYNOPKG_PKGDEST}/bin/libjtux.so ${OPTDIR}
      cp -f ${SYNOPKG_PKGDEST}/bin/jna-3.2.5.jar ${OPTDIR}/lib
      cp -f ${SYNOPKG_PKGDEST}/bin/libffi.so.* ${OPTDIR}/lib
    fi

    #set appropriate Java max heap size
    RAM=$((`free | grep Mem: | sed -e "s/^ *Mem: *\([0-9]*\).*$/\1/"`/1024))
    if [ $RAM -le 128 ]; then
      JAVA_MAX_HEAP=80
    elif [ $RAM -le 256 ]; then
      JAVA_MAX_HEAP=192
    elif [ $RAM -le 512 ]; then
      JAVA_MAX_HEAP=384
    elif [ $RAM -le 1024 ]; then
      JAVA_MAX_HEAP=896
    #CrashPlan PROe server's default max heap is 1GB
    elif [ $RAM -gt 1024 ]; then
      JAVA_MAX_HEAP=1024
    fi
    if [ $USR_MAX_HEAP -gt $JAVA_MAX_HEAP ]; then
      JAVA_MAX_HEAP=${USR_MAX_HEAP}
    fi
    if [ $JAVA_MAX_HEAP -lt $JAVA_MIN_HEAP ]; then
      #can't have a max heap lower than min heap (ARM low RAM systems)
      JAVA_MIN_HEAP=${JAVA_MAX_HEAP}
    fi

    #reset ownership of all files to daemon user, so that manual edits to config files won't cause problems
    chown -R ${DAEMON_USER} ${SYNOPKG_PKGDEST}
    chown -R ${DAEMON_USER} ${DAEMON_HOME}    
    
    #CrashPlan PROe server will read customized vars from a separate file
    eval echo "JAVA_GC_ARGS='\"'${JAVA_GC_ARGS}'\"'" > ${OPTDIR}/.proserverrc
    eval echo "JAVA_MEM_ARGS='\"'${JAVA_MEM_ARGS}'\"'" >> ${OPTDIR}/.proserverrc
    su - ${DAEMON_USER} -s /bin/sh -c "${OPTDIR}/bin/${ENGINE_SCRIPT} start"
    exit 0
  ;;

  stop)
    su - ${DAEMON_USER} -s /bin/sh -c "${OPTDIR}/bin/${ENGINE_SCRIPT} stop"
    exit 0
  ;;

  status)
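    #identify the running engine by the app=CPServer argument on its Java process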
    PID=`/bin/ps w| grep "app=${APP_NAME}" | grep -v grep | awk '{ print $1 }'`
    if [[ -n "$PID" ]]; then
      exit 0
    else
      exit 1
    fi
  ;;

  log)
    echo "${LOG_FILE}"
    exit 0
  ;;
esac
 

Changelog:

  • 020 added support for Intel Atom Evansport CPU in some new DSx14 products
  • 019 update to CrashPlan PROe Server 3.5.5, improved update handling
  • 018 update to CrashPlan PROe Server 3.4.1.5, improved update handling, fixes for DSM 4.2
  • 017 update to CrashPlan PROe Server 3.4.1, improved update handling
  • 016 update to CrashPlan PROe Server 3.3.0.4
  • 015 further fixes for the update mechanism
  • 014 created a wrapper for the ps command, and a symlink for /bin/bash which should hopefully allow server upgrade scripts from Code 42 to run
  • 013 fixed a timezone detection bug
  • 012 fixed a bug with the script editing logic which amends Code 42’s scripts to work with busybox shell tools
  • 011 updated to CrashPlan PROe Server 3.3.0.3
  • 010 initial public release
 
 

EqualLogic, iSCSI and the Windows Server 2008 R2 firewall

I recently migrated a backup server from Windows Server 2003 to Windows Server 2008 R2, in order to install Backup Exec 2012 at the same time. Once I had configured everything I noticed in the iSCSI Control Panel that only one path would ever connect to the array, and I was getting regular iSCSI timeouts and failures in the System Event Log which I hadn’t seen while running Windows 2003:

iSCSI_errors

The errors were event 129:

The description for Event ID 129 from source iScsiPrt cannot be found. Either the component that raises this event is not installed on your local computer or the installation is corrupted. You can install or repair the component on the local computer.

event 39:

Initiator sent a task management command to reset the target. The target name is given in the dump data.

and event 9:

Target did not respond in time for a SCSI request. The CDB is given in the dump data.

Crucially these were spaced (all three together) at intervals of four minutes.

Solution

I spoke to the EqualLogic support team and, after a little while spent focusing on NIC drivers, one of the senior technicians fortunately realised that this four-minute interval coincides with the approximate frequency at which the array pings the initiators on the host and may send reconnect requests for additional path setup and load balancing. He recommended that I disable the Windows Firewall, and sure enough the problem vanished. So it’s quite easy to inadvertently break iSCSI MPIO by changing firewall settings on your system later on, and it’s easy to forget that these two things are related.

The problem for me was that this backup server has a NIC in the DMZ for faster backups (bypassing the hardware firewall). The pre- and post-backup job scripts enable and disable this NIC as required, but it does nonetheless need to be firewalled restrictively. In Windows 2003 the Windows Firewall can be enabled on a per-NIC basis, but not in Windows 2008. Instead the firewall is configured in Network and Sharing Center on a per security zone basis (Domain Networks, Private Networks, Public Networks). The problem here is that the iSCSI NICs automatically end up in the Public zone, which is the most likely to be restricted. In my case I had selected the option Block all connections including programs on the list of allowed programs, so even though the EqualLogic HIT Kit had specified an exemption rule, it was being denied.

Excluding iSCSI adapters

Relaxing the firewall in my scenario was not desirable, so I spent a while searching for a way to force the iSCSI NICs into the Private Networks zone. I couldn’t find one, though I did spot a method to exclude the NICs from the Network And Sharing Center altogether. In fact this same issue had been bothering people running VMware Workstation (because the VMware virtual NICs would get firewalled as Public Network connections), and fortunately someone had found a fix:
http://www.petri.co.il/exclude-vmware-virtual-adapters-vista-2008-network-awareness-windows-firewall.htm

The solution posted there uses a PowerShell script which automatically targets VMware adapters, but we can use the same registry modification. So, on your server use Regedit to navigate to HKLM\SYSTEM\CurrentControlSet\Control\Class\{4D36E972-E325-11CE-BFC1-08002BE10318}. There is a child branch here for each NIC. Find your dedicated iSCSI NICs and for each one, create a new DWORD value called *NdisDeviceType (including the asterisk) and give it a value of 1. Now disable and re-enable each modified NIC. You will see that they disappear from Network and Sharing Center, and are now unaffected by the Windows Firewall.

By setting *NdisDeviceType to a value of 1 the NIC is designated as an endpoint device and is not considered to be connecting to an external network, which is probably quite appropriate for a dedicated iSCSI storage connection. In fact I wonder whether this is the sort of thing that ought to be automated by the HIT Kit in future.

Preference Order

Another thing that’s easily overlooked on servers with iSCSI storage (because it’s so well hidden) is that if you have been changing NIC configs (changing drivers, adding hardware, P2V converting, etc.) then it’s quite likely that you have affected the preference order in which network services use physical adapters. You don’t generally want the iSCSI ones to become the highest priority, and I have experienced strange issues with Exchange Server in the past owing to this, as well as licence issues with copy-protected software that relies on generating a unique hardware-dependent machine ID. To set the order, open Network and Sharing Center, then click on Change Adapter Settings on the left hand side. Now hold Alt, then choose Advanced -> Advanced Settings. Now you can configure the LAN NICs with higher priority:

NIC-preference-order

Corrupt Windows 7 NTFS junction points

I encountered an unusual problem recently – all Windows 7 workstations which had been built from a Microsoft Select Agreement Volume License copy of Windows 7 Professional RTM using an unattended install (not via sysprep) had some sort of damage to their legacy filesystem junction points. This had prevented the installer for Kaspersky EndPoint Protection 8 and its Network Agent version 9 from running, though earlier versions had been fine. The error took Kaspersky support a very long time to pin down (several months in fact, despite them having detailed MSI installer logs), and it eventually transpired that many of the links which maintain legacy OS compatibility, like C:\Documents and Settings -> C:\Users or C:\Users\All Users -> C:\ProgramData, were resolving on these affected systems to some kind of temporarily mounted WIM image path within the folder C:\Users\ADMINI~1\AppData\Local\Temp\mnt\wim.

This folder no longer existed, nor was there any phantom mounted WIM image, so any attempt to access the damaged links would fail (in Kaspersky’s case the issue was C:\ProgramData\Application Data). I still have no idea what may have caused this. More recently the unattended install I designed uses Windows 7 Enterprise SP1, with no changes to the core build scripting, and systems built from it do not exhibit the issue. This might suggest it was a problem with Windows itself, and if so then my script to fix the damage could be useful for others.

The repair script requires SetACL.exe, which is an extremely versatile tool but syntactically very difficult to use! I compared the ACLs on a clean system, noted the link type (they’re not all junctions – there is one symlink), and noted whether or not there were deny permissions, which prevent recursion on links that resolve to their parent folder, e.g. C:\ProgramData\Application Data -> C:\ProgramData. The links are deleted and recreated, but only on systems that are detected to need the fix (see the dir /aL check near the top of the script for that logic). If you set line 6 to “set DEBUG=echo” you can test the output before actually invoking the repair commands.

@echo off

:: Windows 7 junction point/symlink fix script
:: patters 13/03/2012

set DEBUG=
setlocal

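::only attempt the repair if C:\ProgramData still contains links resolving to the phantom mounted WIM path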
dir /aL C:\ProgramData | find /I "C:\Users\ADMINI~1\AppData\Local\Temp\mnt\wim\" && (
  call :junction /J "C:\Documents and Settings" "C:\Users" deny
  
  call :junction /J "C:\ProgramData\Application Data" "C:\ProgramData" deny
  call :junction /J "C:\ProgramData\Desktop" "C:\Users\Public\Desktop" deny
  call :junction /J "C:\ProgramData\Documents" "C:\Users\Public\Documents" deny
  call :junction /J "C:\ProgramData\Favorites" "C:\Users\Public\Favorites" deny
  call :junction /J "C:\ProgramData\Start Menu" "C:\ProgramData\Microsoft\Windows\Start Menu" nodeny
  call :junction /J "C:\ProgramData\Templates" "C:\ProgramData\Microsoft\Windows\Templates" deny
  
  call :junction /D "C:\Users\All Users" "C:\ProgramData" deny
  call :junction /J "C:\Users\All Users\Application Data" "C:\ProgramData" deny
  call :junction /J "C:\Users\All Users\Desktop" "C:\Users\Public\Desktop" deny
  call :junction /J "C:\Users\All Users\Documents" "C:\Users\Public\Documents" deny
  call :junction /J "C:\Users\All Users\Favorites" "C:\Users\Public\Favorites" deny
  call :junction /J "C:\Users\All Users\Start Menu" "C:\ProgramData\Microsoft\Windows\Start Menu" nodeny
  call :junction /J "C:\Users\All Users\Templates" "C:\ProgramData\Microsoft\Windows\Templates" deny
  call :junction /J "C:\Users\Public\Documents\My Music" "C:\Users\Public\Music" deny
  call :junction /J "C:\Users\Public\Documents\My Pictures" "C:\Users\Public\Pictures" deny
  call :junction /J "C:\Users\Public\Documents\My Videos" "C:\Users\Public\Videos" deny
  
  call :junction /J "C:\Users\Default User" "C:\Users\Default" deny
  call :junction /J "C:\Users\Default\Application Data" "C:\Users\Default\AppData\Roaming" deny
  call :junction /J "C:\Users\Default\Cookies" "C:\Users\Default\AppData\Roaming\Microsoft\Windows\Cookies" deny
  call :junction /J "C:\Users\Default\Local Settings" "C:\Users\Default\AppData\Local" deny
  call :junction /J "C:\Users\Default\My Documents" "C:\Users\Default\Documents" deny
  call :junction /J "C:\Users\Default\NetHood" "C:\Users\Default\AppData\Roaming\Microsoft\Windows\Network Shortcuts" deny
  call :junction /J "C:\Users\Default\PrintHood" "C:\Users\Default\AppData\Roaming\Microsoft\Windows\Printer Shortcuts" deny
  call :junction /J "C:\Users\Default\Recent" "C:\Users\Default\AppData\Roaming\Microsoft\Windows\Recent" deny
  call :junction /J "C:\Users\Default\SendTo" "C:\Users\Default\AppData\Roaming\Microsoft\Windows\SendTo" deny
  call :junction /J "C:\Users\Default\Start Menu" "C:\Users\Default\AppData\Roaming\Microsoft\Windows\Start Menu" deny
  call :junction /J "C:\Users\Default\Templates" "C:\Users\Default\AppData\Roaming\Microsoft\Windows\Templates" deny
  call :junction /J "C:\Users\Default\Documents\My Music" "C:\Users\Default\Music" deny
  call :junction /J "C:\Users\Default\Documents\My Pictures" "C:\Users\Default\Pictures" deny
  call :junction /J "C:\Users\Default\Documents\My Videos" "C:\Users\Default\Videos" deny
  call :junction /J "C:\Users\Default\AppData\Local\Application Data" "C:\Users\Default\AppData\Local" deny
    
  call :junction /J "C:\Users\Default\AppData\Local\Temporary Internet Files" "C:\Users\Default\AppData\Local\Microsoft\Windows\Temporary Internet Files" deny
) || echo Legacy filesystem junction points/symlinks are fine.

::odd permissions for this one, so I'm leaving it out
::call :junction /J "C:\Users\Default\AppData\Local\History" "C:\Users\Default\AppData\Local\Microsoft\Windows\History" deny

goto :eof

:junction
:: %1 = type (junction or directory symlink)
:: %2 = junction/symlink path
:: %3 = target path
:: %4 = set the deny permission or not

::delete old junction point
%DEBUG% rmdir "%~2"

::create new junction point
%DEBUG% mklink %1 "%~2" "%~3"

::set owner to SYSTEM
%DEBUG% setacl -on "%~2" -ot file -actn setowner -ownr "n:SYSTEM"

:: we need to stop inheritance of permissions before we make changes. This must be done with
:: a separate commandline entry owing to the order in which SetACL.exe processes its arguments.
%DEBUG% setacl -on "%~2" -ot file -actn setprot -op "dacl:p_c;sacl:p_c"

::clear ACL and set permissions
%DEBUG% setacl -on "%~2" -ot file -actn clear -clr "dacl,sacl" -actn ace -ace "n:Everyone;i:np;p:read_ex" -actn ace -ace "n:SYSTEM;i:np;p:full" -actn ace -ace "n:Administrators;i:np;p:full"

::add directory listing deny permission for recursive paths if needed
if "%4"=="deny" %DEBUG% setacl -on "%~2" -ot file -actn ace -ace "n:Everyone;s:n;m:deny;i:np;p:list_dir"

Shortcut to testing VMware Auto-Deploy

It’s very useful to have a VMware lab environment, particularly for training needs, and outside of larger enterprises this is generally difficult to achieve owing to the high time cost of setting one up. VMware Auto Deploy remedies this problem since it allows very rapid provisioning of ESXi hosts via PXE network boot.

I wrote these notes for myself since I’m getting some hands-on experience of it to upgrade my VCP4 qualification to VCP5. EDIT – I passed the exam :). I got the information about the Auto Deploy commands from this concise guide. My post here covers how to deploy this if you already use Windows DHCP and WDS, and in such a way that it uses, but does not impact, your production VMware infrastructure. The key to this is using the vCenter Server Appliance to create (with just a few clicks) a separate vCenter on a different VLAN, hosted on your existing ESXi hosts. I think some of the required file downloads may need you to have a proper, paid-for vSphere licence.

  • Create a new VLAN and add it to your ESXi hosts’ VM Network trunk ports. Define a gateway for the new subnet on your router. In a Cisco environment you would also need an ip helper-address on the new VLAN interface pointing at your DHCP server. That’s beyond the scope of this document, and I assume that you probably already run a DHCP server that manages multiple subnets.
  • Obtain and install the VMware vCenter Server Appliance. Log in to the vmware.com site, go as if to download vCenter server, then after the EULA you’ll see multiple files offered (the .iso you would normally download, then further down the .ovf and .vmdk files for the appliance).
  • Install this appliance on your existing ESXi environment. Follow the instructions on screen and select Embedded Database. The default credentials are root and vmware. I reduced the VM’s RAM from 8GB down to 4GB. Ideally put this VM in the new VLAN you created.
  • This appliance already has the Auto Deploy option installed, but it’s disabled (in the Appliance WebUI go to Services > Auto Deploy). Impressively, the appliance also hosts the web services for the vSphere Webclient by default, which you can access from a browser on https://YourApplianceIP:9443/vsphere-client. Given the hassle-free setup for all this, I think it’s highly likely I will transition the production environment over to this instead of running vCenter on Windows.
  • If you’re already using WDS and Microsoft DHCP, make sure you’re defining options 66 and 67 (boot server & boot filename) on a per-scope basis (not in Server Options), that way we can configure different behaviour for the new subnet.
  • Launch vSphere Client and target it at your vCenter Server Appliance. From Home, select Auto Deploy. Select Download TFTP Boot Zip. Extract these files to the root level of your REMINST share on your WDS Server. These bootstrap files are designed to live at the TFTPRoot, not in the boot folder with the other WDS boot files (file paths that are loaded after the initial boot are hard-coded, not relative).
  • By default, the WDS TFTP server only allows read access to \tmp and \boot. You need to use Regedit to edit HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\WDSServer\Providers\WDSTFTP\ReadFilter. Append “\*” to the existing value. Where I work the WDS server only serves boot images since I favour unattended install rather than image-based deployment, so I’m ok with this relaxing of the default security.
  • Create a DHCP scope for your new subnet (including a reservation for your vCenter Server Appliance). Define option 67 as undionly.kpxe.vmw-hardwired
  • Now you can connect some test physical machines with Intel VT support to the new VLAN you created and set them to boot to PXE first in the boot order. Test this – you should get a warning that there are no associated ESXi images for this machine. Note down the displayed model name (in my case OptiPlex 745).
  • Download an ESXi offline bundle (go as if to download ESXi from vmware.com, accept the EULA, and it’s further down on the download page).
  • Use a Windows 7 machine (which already has PowerShell 2.0) to install vSphere PowerCLI.
  • Connect the PowerCLI to the vCenter Server Appliance (help on the cmdlets is available here):
    Connect-VIServer 172.16.10.100
  • Add the offline bundle as a “depot”:
    Add-EsxSoftwareDepot C:\users\me\Downloads\ESXi500-201111001.zip
  • Within this depot there are several images, and we need their names:
    Get-EsxImageProfile | fl
  • Create a deployment rule:
    New-DeployRule -Name "FirstTimeBoot" -Item "ESXi-5.0.0-20111104001-standard" -Pattern "model=OptiPlex 745"
  • Activate it:
    Add-DeployRule -DeployRule FirstTimeBoot

Now your ESXi test machines can be woken with WOL, will boot ESXi and will automatically bind to vCenter Server Appliance, where they can be managed. Perfect for home study using a VPN connection. Here is the official VMware Auto Deploy Administrator’s Guide.