An Open Access Peon

09 August 2016

Using Oracle Thin Driver with Client/Server Authentication SSL

Oracle Database server supports SSL on-the-wire encryption plus client and server authentication. This can be a bit tricky to set up, and despite much exhaustive searching I've never found a complete description of the steps to set up the client-side configuration (or at least, not using a Tomcat Resource).

The following instructions describe how I set up SSL authentication (/encryption) from a Tomcat WebApp to Oracle Database server.

You must use a recent ojdbc6.jar. Older versions (I can't work out exactly which) have a bug relating to parsing passwords from connection properties. Download the latest ojdbc6.jar or ojdbc7.jar from Oracle and place it in tomcat/lib.

You will need the "keytool" from the Java JDK or JRE.

Create a new keystore with self-signed certificate:

keytool -genkey -alias %computername% -keystore keystore.jks -storepass changeme -validity 3650

When prompted, you probably want to use your machine name for "What is your first and last name" (this becomes the CN= part of the distinguished name).

Export the self-signed certificate:

keytool -export -keystore keystore.jks -storepass changeme -alias %computername% -file %computername%.cer

Provide this to your Oracle DBA, who will import the certificate into the database trust store (wallet). The DBA should provide you with the certificate chain for the server. Import these into your Java keystore:

keytool -importcert -noprompt -keystore keystore.jks -storepass changeme -file SERVER.CRT

In your Tomcat server.xml create a new Resource entry under GlobalNamingResources:
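The entry looks something like the following sketch. All names, hosts, paths and passwords here are placeholders for illustration; the javax.net.ssl.* connection properties are the ones the Oracle thin driver reads.

```xml
<GlobalNamingResources>
  <!-- Hypothetical resource; adjust name, host, service and credentials -->
  <Resource name="jdbc/mydb" auth="Container" type="javax.sql.DataSource"
            driverClassName="oracle.jdbc.OracleDriver"
            url="jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=dbhost)(PORT=2484))(CONNECT_DATA=(SERVICE_NAME=orcl)))"
            username="scott" password="tiger"
            connectionProperties="javax.net.ssl.trustStore=/path/to/keystore.jks;javax.net.ssl.trustStorePassword=changeme;javax.net.ssl.keyStore=/path/to/keystore.jks;javax.net.ssl.keyStorePassword=changeme"/>
</GlobalNamingResources>
```

Note that connectionProperties is a single semicolon-separated string and must not contain spaces or newlines (see the troubleshooting list below).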

In your WebApp's context.xml create an appropriate mapping:
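Assuming the global resource above was named jdbc/mydb, the mapping is a ResourceLink, something like:

```xml
<Context>
  <ResourceLink name="jdbc/mydb" global="jdbc/mydb" type="javax.sql.DataSource"/>
</Context>
```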

You can then look up and use the new database connection from your WebApp code:
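As a sketch (the resource name jdbc/mydb is an assumption carried over from the configuration above), the standard JNDI lookup from inside the webapp looks like this:

```java
import java.sql.Connection;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.sql.DataSource;

public class DbExample {
    // Runs inside the Tomcat container, where "java:comp/env" is bound
    public static Connection getConnection() throws Exception {
        Context ctx = new InitialContext();
        DataSource ds = (DataSource) ctx.lookup("java:comp/env/jdbc/mydb");
        return ds.getConnection();
    }
}
```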


It is generally easier to debug SSL configuration problems using the sqlplus client tool. You will need an Oracle wallet (created with the orapki tool) to do this, which I won't cover in this blog post. The following may help diagnose problems with your client configuration though.
  • Format error - check connectionProperties doesn't contain spaces/newlines
  • IO Error: The Network Adapter could not establish the connection - 1) check you have the correct passwords for trustStorePassword and keyStorePassword 2) try a newer version of ojdbc6.jar and confirm you are using the version you expect 3) it may be a genuine network/hostname problem 4) ensure the same SSL properties aren't set elsewhere (settings elsewhere seem to override connectionProperties)
  • IO Error: NL Exception was generated - check server.xml's resource url attribute is formatted correctly

24 April 2016

Using Tvheadend as a SAT>IP Server

The goal here is to set up Tvheadend (TVH) as a SAT>IP server. This allows you to stream satellite channels to tablets and other devices - at least up to the limit of the number of LNB connections you have.

I followed the Ubuntu instructions to install TVH and then got the basic configuration set up.

Unfortunately the documentation for configuring SAT>IP is quite sparse and leaves out one important detail if you expect to use it with general SAT>IP clients: SAT>IP must be served on RTSP port 554, but TVH out of the box will fall back to 9983.

Enabling Networks for SAT>IP

Go to Configuration - DVB Inputs - Networks. For each network you want to use change the SAT>IP Source Number to 1 (other values are documented).

Enabling SAT>IP Server

Go to Configuration - General. In the SAT>IP Server section enter 554 as the port number and Save Configuration.

Port-Forwarding RTSP

If, after enabling SAT>IP Server, you see the following error in the log (click the double-down arrow bottom right):

2016-04-24 20:19:25.568 satips: RTSP port 554 specified but no root perms, using 9983

Then you will need to allow clients to connect to TVH on port 554. While you can run TVH as root, it is probably easier to create a port forward (RTSP is carried over TCP):

sudo iptables -t nat -A PREROUTING -i eth0 -p tcp --dport rtsp -j REDIRECT --to-port 9983

Following these steps allowed me to stream channels from TVH using the Elgato App from both Android and iOS.

01 September 2015

WebDav Client Certificate Challenge

If you have Office/Word 2010 and open a document via Internet Explorer, a WebDav interrogation of the Web site will be performed. This determines whether Office can write the document back to the site (e.g. as you would use on Sharepoint).

We encountered a hard-to-trace issue with this interrogation. Opening a document from IE would result in a challenge to pick a client certificate. Cancelling past these dialogues would still allow the document to open. Tracing this to WebDav was easy enough, as we could see the WebDav requests coming into the server - indicated by an OPTIONS request at the folder level with subsequent PROPFINDs. These are issued by the user agents:

Microsoft Office Protocol Discovery
Microsoft Office Existence Discovery

What was confounding us was that the issue did not occur on our test system. After a comprehensive inspection of the IIS (7.5) server no differences between the production and test systems could be identified.

The issue was eventually uncovered by inspecting the SSL connection to both sites. On production, "openssl" indicated a different hostname in the certificate from the site being requested. This is because our production system has multiple host aliases, with appropriate redirects in place. The test system has a single host.
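To see the difference yourself, compare what certificate the server presents with and without SNI (the hostname here is a placeholder; -noservername needs OpenSSL 1.1.1+, while older versions omit SNI by default):

```shell
# Certificate served when no SNI is sent (what Microsoft's WebDav stack saw)
openssl s_client -connect www.example.org:443 -noservername </dev/null 2>/dev/null |
  openssl x509 -noout -subject

# Certificate served when SNI is sent (what the browser saw)
openssl s_client -connect www.example.org:443 -servername www.example.org </dev/null 2>/dev/null |
  openssl x509 -noout -subject
```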

The production aliases are served using Server Name Indication (SNI). This works fine in all modern browsers, including the version of IE being used. Where it doesn't work is Microsoft's WebDav implementation. When the WebDav client interrogated the server it got a certificate name mismatch. Rather than indicating a server certificate error, it requested a client certificate!

The solution for this site was to change the default certificate to the official Web site name. I don't know if there's a solution for multiple sites sharing the same https connection with SNI.

24 August 2015

Routing Wireless LAN over a VPN

The goal of this set up is to create a Wireless LAN (WLAN) that gets routed over a VPN. In this way we can create multiple WLANs that route to different places, enabling clients to pick a network to connect to. My specific use-case was to allow house users to connect to a virtual German network or to the local network.

My set up has Cisco WAP121 Small Business Access Points and a Ubuntu 14.04 server (acting as DHCP and router). My local network just consists of unmanaged switches, which will forward all VLAN traffic to every port.

Make sure Ubuntu is configured to forward IP traffic.
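On Ubuntu this is the standard sysctl setting:

```shell
# enable IPv4 forwarding immediately
sudo sysctl -w net.ipv4.ip_forward=1

# persist across reboots by adding this line to /etc/sysctl.conf:
# net.ipv4.ip_forward=1
```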

The new VLAN will use ID 20 and its own IP range.

In the Cisco control panel under Wireless, Networks I created a new Virtual Access Point with a VLAN ID of 20 (default is 1).

The rest of the configuration is in the Ubuntu server.

The VPN in use is OpenVPN, but I expect these instructions would work with any device-based VPN. In my system the device is "tun0". Once OpenVPN is configured and confirmed to connect correctly, some additional steps are needed. Because I'm not routing all traffic through the VPN I needed to add route-nopull to the OpenVPN configuration. Additional up and down scripts are required to configure firewall and routing, so the following needs adding to the OpenVPN configuration:
script-security 2
up /etc/openvpn/scripts/
down /etc/openvpn/scripts/
The up script configures the firewall and starts routing VLAN traffic over the tunnel:

# block all incoming connections i.e. block access to this box
iptables -A INPUT -m state --state ESTABLISHED,RELATED -i $dev -j ACCEPT
iptables -A INPUT -p icmp -i $dev -j ACCEPT
iptables -A INPUT -i $dev -j DROP
# NAT connections over the tunnel
iptables -t nat -A POSTROUTING -s -o $dev -j MASQUERADE
# Route between eth0.20 and the tunnel
iptables -A FORWARD -i eth0.20 -o $dev -j ACCEPT
iptables -A FORWARD -i $dev -o eth0.20 -j ACCEPT
# Start routing traffic over the tun
ip route add default table 20 dev tun0
ip rule add from table 20
The down script reverses these changes:

# stop routing traffic to the tun
ip rule del from table 20
ip route del default table 20 dev tun0
# block all incoming connections i.e. block access to this box
iptables -D INPUT -m state --state ESTABLISHED,RELATED -i $dev -j ACCEPT
iptables -D INPUT -p icmp -i $dev -j ACCEPT
iptables -D INPUT -i $dev -j DROP
# NAT connections over the tunnel
iptables -t nat -D POSTROUTING -s -o $dev -j MASQUERADE
# Route between eth0.20 and the tunnel
iptables -D FORWARD -i eth0.20 -o $dev -j ACCEPT
iptables -D FORWARD -i $dev -o eth0.20 -j ACCEPT
We need to create an interface that will listen on VLAN 20. Install VLAN interface support:
apt-get install vlan
Create a new file /etc/network/interfaces.d/eth0:
auto eth0.20
iface eth0.20 inet static
        vlan-raw-device eth0
We need a DHCP server to provide addresses to the VLAN:
apt-get install isc-dhcp-server
We only want DHCP offered on the new VLAN so we tell dhcpd to bind only to eth0.20 in /etc/default/isc-dhcp-server:
# On what interfaces should the DHCP server (dhcpd) serve DHCP requests?
#       Separate multiple interfaces with spaces, e.g. "eth0 eth1".
INTERFACES="eth0.20"
Create a new subnet entry to offer IPs for in /etc/dhcp/dhcpd.conf:
subnet netmask {
        option routers        ;
        option subnet-mask    ;
        option broadcast-address;
        option domain-name              "localdomain";
        option domain-name-servers;
}
Note: the DNS server given is Google's public DNS; you may instead wish to allow DNS lookups from the Ubuntu server.

Reboot to bring the new interfaces and services up. You can manually create a VLAN interface with ip as follows:
ip link add link eth0 name eth0.20 type vlan id 20
ip addr add brd dev eth0.20
ip link set dev eth0.20 up

25 October 2013

Phusion Passenger-Nginx Node.js App Server for Ubuntu

Download and install Phusion Passenger, then install nginx and passenger:

sudo apt-get install nginx-full passenger

Download and install Node.js from the chris-lea PPA:

sudo add-apt-repository ppa:chris-lea/node.js
sudo apt-get update
sudo apt-get install nodejs

In /etc/nginx/nginx.conf uncomment passenger_root and passenger_ruby:

passenger_root /usr/lib/ruby/vendor_ruby/phusion_passenger/locations.ini;
passenger_ruby /usr/bin/ruby;

In /etc/nginx/sites-available/default add at the end of the server {} block:

server {
       passenger_enabled on;
       location /app1/ {
               root /home/user/webapps/app1/public;
               passenger_enabled on;
       }
}

Follow the Passenger documentation to create the app structure under /home/user/webapps/app1:

./app.js
./public/
./tmp/restart.txt [touch this to re-deploy the application]

app.js is a normal Node.js Web application (something that calls 'http.createServer(...).listen(...)').

Restart nginx:

sudo /etc/init.d/nginx restart

Then test your app:

curl http://localhost/app1/

21 May 2013

Generating Certificate (CSR) requests

Here's a short script to generate a server key and PKCS #10 Certificate Request for use with https:



#!/bin/sh

if [ "$#" -ne 1 ]; then
	echo "Usage: $0 <hostname>" >&2
	exit 1
fi

HOSTNAME=$1

if [ ! -f ${HOSTNAME}.key ]; then
	openssl genrsa -out ${HOSTNAME}.key 2048
fi

cp cert.cfg ${HOSTNAME}.cfg
echo >> ${HOSTNAME}.cfg
echo "cn = ${HOSTNAME}" >> ${HOSTNAME}.cfg

certtool --generate-request \
	--load-privkey ${HOSTNAME}.key \
	--outfile ${HOSTNAME}.csr \
	--template ${HOSTNAME}.cfg

if [ -f ${HOSTNAME}.csr ]; then
	echo ${HOSTNAME}.csr
fi
This requires a cert.cfg that provides the basic information for your organisation:

# X.509 Certificate options
# DN options

organization = "University of Weevils"

unit = "Department of Creepy Crawlies"

locality = "Winchester"

state = "Hampshire"

country = GB

# Whether this certificate will be used to sign data (needed
# in TLS DHE ciphersuites).
signing_key

# Whether this certificate will be used for a TLS client
tls_www_client

# Whether this certificate will be used for a TLS server
tls_www_server

# Whether this certificate will be used to encrypt data (needed
# in TLS RSA ciphersuites). Note that it is preferred to use different
# keys for encryption and signing.
encryption_key

The resulting .csr should be sent to your certificate authority for signing into a certificate (.crt).

12 September 2012

Migrating svn to git with sub-directories

On EPrints we have a slightly odd layout in our svn repository:
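Roughly like this (reconstructed from the modules and branches described below):

```
trunk/docs
trunk/extensions
trunk/system
branches/3.3/docs
branches/3.3/extensions
branches/3.3/system
tags/...
```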


This works well enough with svn, where you can check out a sub-directory, but git only allows you to check out the entire repository - branches are real branches rather than just named directories. The correct git usage is to have separate repositories for "docs", "extensions" and "system", each with its contents at the repository root.

The goal of the migration to git is therefore to move trunk/system up to trunk/, branches/3.3/system up to branches/3.3 and so on for branches still in use. (For my own sanity I'm going to ignore tags.)

One approach might be to "git svn clone" the entire svn tree and then move the contents of system/* up a directory. The downside to this approach is that every file in the repository would be touched by that move, losing the ability to (trivially) see when a file was last modified.

I also tried using svn's svn-dump-reloc tool to move directory contents up into their parent. That drove me down a path of despair: broken historical file movements and duplicated directory creations (because moving system/trunk/ to trunk/ duplicates the initial creation of trunk/).

The eventual approach taken was to use git-svn's ability to clone a sub-directory and magically place it in the root of the new git repository. I started with trunk/:

git svn clone -A users.txt eprints

And the same with the 3.3 branch:

git svn clone -A users.txt 3.3

I can then create an empty branch for 3.3 in my git trunk clone:

cd eprints
git checkout --orphan 3.3
git rm -rf .

git can pull in the content of a remote repository like so:

git remote add -f 3.3 ../3.3/
git merge -s ours 3.3/master

Tidying up:

git remote rm 3.3
git checkout master

This repository can then be pushed up to github using their standard instructions:

git remote add origin
git push -u origin master

And to push the branch(es):

git push origin 3.3