An Open Access Peon

17 May 2021

Azure B2C and ServiceNow Integration

Azure Active Directory B2C (Business to Consumer) provides a self-service user management tool which applications can authenticate against using OpenID Connect or (with custom policy files) SAML. Users can in turn authenticate using third-party social providers like Google, Facebook or any other service that supports OpenID Connect. Authenticating users against a corporate Azure Active Directory (Azure AD) requires OpenID Connect.

As ServiceNow supports OpenID Connect for authenticating users, it can use Azure B2C to support user self-sign-up and third-party social providers.

This tutorial provides the steps to set up a new Azure B2C tenant and connect it to ServiceNow using the Multi-Provider SSO plugin.

Azure

In Azure Portal create a new Azure Active Directory B2C resource:


Create a new B2C tenant (if you do not already have one):


Complete name and location details as needed and add to an appropriate Resource Group:


Review and create the new B2C.

From the new tenant open Azure AD B2C:


Under User Flows:


Create a New user flow.

Select Sign up and sign in:


And Recommended:


Give the flow a memorable name, as it will be used in URLs later:


Modify the attributes and token claims as needed. We’ll use the email address to register users, so you need to at least collect the email address and pass Email Addresses back as an application claim:


You can modify the attributes at any time in the User Flow.

Go back to Azure AD B2C and open App registrations:


Create New registration and give the app a reasonable name.

Change Supported account types to Accounts in this organizational directory only:


Redirect URI is your instance URL with /navpage.do:


Click Register.

In the app Overview copy the Application (client) ID:


Click Endpoints:


Note the OpenID Connect well-known endpoint:


Replace <policy-name> with your User flow name e.g. B2C_1_sign_up_sign_in. Access the URL in a Web browser to confirm you have the right URL:
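As a sketch, the well-known URL has this shape; the tenant name contoso below is a placeholder for your own B2C tenant:

```shell
# Hypothetical values - substitute your own B2C tenant and user flow names
TENANT="contoso"
POLICY="B2C_1_sign_up_sign_in"
URL="https://${TENANT}.b2clogin.com/${TENANT}.onmicrosoft.com/${POLICY}/v2.0/.well-known/openid-configuration"
echo "$URL"
# With network access, confirm it returns JSON metadata:
# curl -s "$URL"
```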


Under Certificates & secrets:


Create a new Client secret:


And note the secret Value.

 

You should now have three pieces of information to use in the ServiceNow configuration:

  • Application ID
  • OpenID Connect Well-known endpoint
  • Client Secret value

Offering Azure AD as a Login Option

If you want to offer Azure AD as a login option via B2C follow these instructions:
https://docs.microsoft.com/en-gb/azure/active-directory-b2c/identity-provider-azure-ad-single-tenant?pivots=b2c-user-flow

ServiceNow

If not already installed, install the Integration - Multiple Provider Single Sign-On Installer plugin (com.snc.integration.sso.multi.installer).

Under SSO properties:


Enable SSO:


Under Identity Providers:


Create a new IdP:


Of type OpenID Connect:


In the Import form, populate the fields using the information you captured from the Azure B2C set up:


In your new Identity Provider make it active and a login option:


There are a few different ways to handle Single Sign-On; refer to the ServiceNow documentation for what the options do.

Under the OIDC Entity section open the entity:


Under OAuth Entity Scopes change the OAuth scope:


Important! Modify the scope from openid to:

5c6e2fbc-1a5b-41c0-a63f-b899c567fbf9 openid offline_access profile email

Replace 5c6e2fbc… with your Application ID.

This scope is required to generate an access token, without which ServiceNow will reject the OpenID Connect response. The error you would see in the logs is "missing parameter access_token".

Click Update to save the scope changes.

 

Go back to the Identity Provider. Open the OIDC Provider Configuration:


Modify the User Claim to emails:


Click Update.

Go back to the Identity Provider.

Enable User Provisioning. In this example I’m using Google ID Token Example but you may wish to create a new Data Source:


Warning! This will give any B2C user the itil role, which is useful for testing; in production you probably want to use “public” or similar.

Save and open the Google ID Token Example data source.

Open the Google ID Token Example transform:


In the Field Maps modify email to u_emails_0:


Open your ServiceNow instance and you should now get a B2C login option:


Click Log in with B2C, create an account and log in.


09 August 2016

Using Oracle Thin Driver with Client/Server Authentication SSL

Oracle Database server supports SSL on-the-wire encryption plus client and server authentication. This can be a bit tricky to set up, and after much exhaustive searching I've never found a complete description of the client-side configuration steps (at least, not using a Tomcat Resource).

The following instructions describe how I set up SSL authentication (/encryption) from a Tomcat WebApp to Oracle Database server.

You must use a recent ojdbc6.jar. Older versions (I can't work out exactly which) have a bug parsing passwords from connection properties. Download the latest ojdbc6 or ojdbc7 from Oracle and place ojdbc6.jar into tomcat/lib.

You will need the "keytool" from the Java JDK or JRE.

Create a new keystore with self-signed certificate:

keytool -genkey -keyalg RSA -alias %computername% -keystore keystore.jks -storepass changeme -validity 3650

When prompted you probably want to use your machine name for "What is your first and last name" (this becomes the CN= part of the certificate subject).

Export the self-signed certificate:

keytool -export -keystore keystore.jks -storepass changeme -alias %computername% -file %computername%.cer

Provide this to your Oracle DBA, who will import the certificate into the database trust store (wallet). The DBA should provide you with the server's certificate chain. Import these into your Java keystore:

keytool -importcert -noprompt -keystore keystore.jks -storepass changeme -file SERVER.CRT

In your Tomcat server.xml create a new Resource entry under GlobalNamingResources:



In your WebApp's context.xml create an appropriate mapping:
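Again the original is missing; a minimal mapping, assuming the global Resource in server.xml is named jdbc/OracleSSL (a placeholder), would be:

```xml
<!-- Sketch only: name is what the WebApp looks up, global matches server.xml -->
<ResourceLink name="jdbc/OracleSSL" global="jdbc/OracleSSL" type="javax.sql.DataSource"/>
```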


You can then connect to and use your new database connection using:


Errors

It is generally easier to debug SSL configuration problems using the sqlplus client tool. You will need an Oracle wallet (orapki tool) to do this, which I won't cover in this blog post. The following may help diagnose problems with your client configuration though.
  • Format error - check connectionProperties doesn't contain spaces/newlines
  • IO Error: The Network Adapter could not establish the connection - 1) check you have the correct passwords for trustStorePassword and keyStorePassword 2) try a newer version of ojdbc6.jar / confirm you are using the version you expect 3) it is a genuine network/hostname problem 4) ensure oracle.net.wallet_location isn't specified in catalina.properties or elsewhere (it seems to override connectionProperties)
  • IO Error: NL Exception was generated - check the resource url attribute in server.xml is formatted correctly

24 April 2016

Using Tvheadend as a SAT>IP Server

The goal here is to set up Tvheadend (TVH) as a SAT>IP server. This allows you to stream satellite channels to tablets and other devices - at least up to the number of LNB connections you have.

I followed the Ubuntu instructions at https://tvheadend.org/projects/tvheadend/wiki/AptRepository to install TVH and then http://docs.tvheadend.org/configure_tvheadend/ to get the basic configuration set up.

Unfortunately the documentation for configuring SAT>IP is quite sparse and leaves out one important detail if you expect to use it with general SAT>IP clients: SAT>IP must use RTSP port 554, but TVH out of the box will fall back to 9983.

Enabling Networks for SAT>IP

Go to Configuration - DVB Inputs - Networks. For each network you want to use change the SAT>IP Source Number to 1 (other values are documented).

Enabling SAT>IP Server

Go to Configuration - General. In the SAT>IP Server section enter 554 as the port number and Save Configuration.

Port-Forwarding RTSP

If, after enabling SAT>IP Server, you see the following error in the log (click the double-down arrow bottom right):

2016-04-24 20:19:25.568 satips: RTSP port 554 specified but no root perms, using 9983

Then you will need to allow clients to connect to TVH on port 554. While you can run TVH as root, it is probably easier to create a port forward (note RTSP is carried over TCP):

sudo iptables -t nat -A PREROUTING -i eth0 -p tcp --dport rtsp -j REDIRECT --to-port 9983

Following these steps allowed me to stream channels from TVH using the Elgato App from both Android and iOS.

01 September 2015

WebDav Client Certificate Challenge

If you have Office/Word 2010 and open a document via Internet Explorer, a WebDav interrogation of the Web site is performed. This is to determine whether Office can write the document back to the site (e.g. as you would on Sharepoint).

We encountered a hard-to-trace issue with this interrogation. Opening a document from IE would result in a challenge to pick a client certificate. Clicking Cancel through these dialogues would still allow the document to open. Tracing this to WebDav was easy enough, as we could see the WebDav requests coming into the server - indicated by an OPTIONS request at the folder level with subsequent PROPFINDs. These are issued by the user agents:

Microsoft Office Protocol Discovery
Microsoft Office Existence Discovery
Microsoft-WebDAV-MiniRedir/
DavClnt

What was confounding us was that the issue did not occur on our test system. After a comprehensive inspection of the IIS (7.5) server no differences between the production and test systems could be identified.

The issue was eventually uncovered by inspecting the SSL connection coming from both sites. On production "openssl" indicated a different hostname in the certificate to the site being requested. This is because our production system has multiple host aliases, with appropriate redirects in place. The test system has a single host.
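This kind of check can be reproduced with openssl. The sketch below generates a throwaway self-signed certificate and prints its subject; against the live server the equivalent inspection is openssl s_client -connect host:443 piped into openssl x509 -noout -subject (add -servername host to exercise the SNI path). www.example.org is a placeholder:

```shell
# Create a throwaway self-signed certificate (placeholder hostname)...
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=www.example.org" \
    -keyout /tmp/sni-test.key -out /tmp/sni-test.crt 2>/dev/null
# ...and print its subject, which should match the site being requested
openssl x509 -in /tmp/sni-test.crt -noout -subject
```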

The production aliases are served using Server Name Indication (SNI). This works fine in all modern browsers, including the version of IE in use. Where it doesn't work is Microsoft's WebDav implementation. When the WebDav client interrogated the server it got a certificate name mismatch, and rather than indicating a server certificate error it requested a client certificate!

The solution for this site was to change the default certificate to the official Web site name. I don't know if there's a solution for multiple sites sharing the same https connection with SNI.

24 August 2015

Routing Wireless LAN over a VPN

The goal of this set up is to create a Wireless LAN (WLAN) that gets routed over a VPN. In this way we can create multiple WLANs that route to different places, enabling clients to pick a network to connect to. My specific use-case was to allow house users to connect to a virtual German network or to the local network.

My set up has Cisco WAP121 Small Business Access Points and a Ubuntu 14.04 server (acting as DHCP and router). My local network just consists of unmanaged switches, which will forward all VLAN traffic to every port.

Make sure Ubuntu is configured to forward IP traffic.
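For example (the sysctl line needs root; persist it by setting net.ipv4.ip_forward=1 in /etc/sysctl.conf):

```shell
# 1 means the kernel will forward IPv4 packets between interfaces
cat /proc/sys/net/ipv4/ip_forward
# Enable immediately (run as root):
# sysctl -w net.ipv4.ip_forward=1
```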

The new VLAN will use ID 20 and the IP range 192.168.20.0/24.

In the Cisco control panel under Wireless, Networks I created a new Virtual Access Point with a VLAN ID of 20 (default is 1).

The rest of the configuration is in the Ubuntu server.

The VPN in use is OpenVPN, but I expect these instructions would work with any device-based VPN; in my system the device is "tun0". Having configured OpenVPN and confirmed it connects and works correctly, it needs some additional steps. Because I'm not routing all traffic through the VPN I needed to add route-nopull to the OpenVPN configuration. Additional up and down scripts are required to configure the firewall and routing, so the following needs adding to the OpenVPN configuration:

route-nopull
script-security 2
up /etc/openvpn/scripts/vpn-up.sh
down /etc/openvpn/scripts/vpn-down.sh

vpn-up.sh:
#!/bin/sh
# block all incoming connections i.e. block access to this box
iptables -A INPUT -m state --state ESTABLISHED,RELATED -i $dev -j ACCEPT
iptables -A INPUT -p icmp -i $dev -j ACCEPT
iptables -A INPUT -i $dev -j DROP
# NAT connections over the tunnel
iptables -t nat -A POSTROUTING -s 192.168.20.0/24 -o $dev -j MASQUERADE
# Route between eth0.20 and the tunnel
iptables -A FORWARD -i eth0.20 -o $dev -j ACCEPT
iptables -A FORWARD -i $dev -o eth0.20 -j ACCEPT
# Start routing traffic over the tunnel
ip route add default table 20 dev $dev
ip rule add from 192.168.20.0/255.255.255.0 table 20

vpn-down.sh:
#!/bin/sh
# stop routing traffic over the tunnel
ip rule del from 192.168.20.0/255.255.255.0 table 20
ip route del default table 20 dev $dev
# block all incoming connections i.e. block access to this box
iptables -D INPUT -m state --state ESTABLISHED,RELATED -i $dev -j ACCEPT
iptables -D INPUT -p icmp -i $dev -j ACCEPT
iptables -D INPUT -i $dev -j DROP
# NAT connections over the tunnel
iptables -t nat -D POSTROUTING -s 192.168.20.0/24 -o $dev -j MASQUERADE
# Route between eth0.20 and the tunnel
iptables -D FORWARD -i eth0.20 -o $dev -j ACCEPT
iptables -D FORWARD -i $dev -o eth0.20 -j ACCEPT
We need to create an interface that will listen to VLAN 20. Install VLAN interface support:

apt-get install vlan

Create a new file /etc/network/interfaces.d/eth0:

auto eth0.20
iface eth0.20 inet static
        address 192.168.20.1
        netmask 255.255.255.0
        vlan-raw-device eth0

We need a DHCP server to provide addresses to the VLAN:

apt-get install isc-dhcp-server

We only want DHCP offered on the new VLAN, so we tell dhcpd to bind only to eth0.20 in /etc/default/isc-dhcp-server:

# On what interfaces should the DHCP server (dhcpd) serve DHCP requests?
#       Separate multiple interfaces with spaces, e.g. "eth0 eth1".
INTERFACES="eth0.20"

Create a new subnet entry to offer IPs from in /etc/dhcp/dhcpd.conf:
subnet 192.168.20.0 netmask 255.255.255.0 {
        range 192.168.20.100 192.168.20.200;
        option routers                  192.168.20.1;
        option subnet-mask              255.255.255.0;
        option broadcast-address        192.168.20.255;
        option domain-name              "localdomain";
        option domain-name-servers      8.8.8.8;
}
Note: 8.8.8.8 is Google's public DNS; you may instead wish to serve DNS lookups from the Ubuntu server.

Reboot to bring the new interfaces and services up. You can manually create a VLAN interface with ip as follows:
ip link add link eth0 name eth0.20 type vlan id 20
ip addr add 192.168.20.1/24 brd 192.168.20.255 dev eth0.20
ip link set dev eth0.20 up

25 October 2013

Phusion Passenger-Nginx Node.js App Server for Ubuntu

Download and install Phusion Passenger from https://www.phusionpassenger.com/download#open_source. Install nginx and passenger:

sudo apt-get install nginx-full passenger

Download and install node.js from https://launchpad.net/~chris-lea/+archive/node.js/:

sudo add-apt-repository ppa:chris-lea/node.js
sudo apt-get update
sudo apt-get install nodejs


In /etc/nginx/nginx.conf uncomment passenger_root and passenger_ruby:

passenger_root /usr/lib/ruby/vendor_ruby/phusion_passenger/locations.ini;
passenger_ruby /usr/bin/ruby;


In /etc/nginx/sites-available/default add at the end of the server {} block:

server {
...
       passenger_enabled on;
       location /app1/ {
               root /home/user/webapps/app1/public;
               passenger_enabled on;
       }
}


Follow the instructions at https://github.com/phusion/passenger/wiki/Node.js to create the app structure under /home/user/webapps/app1:

./public
./tmp
./tmp/restart.txt [touch this to re-deploy application]
./app.js


app.js is a normal Node.js Web application (something that calls http.createServer(...).listen(...)).

Restart nginx:

sudo /etc/init.d/nginx restart

Then test your app:

curl http://localhost/app1/

21 May 2013

Generating Certificate (CSR) requests

Here's a short script to generate a server key and PKCS #10 Certificate Request for use with https:

#!/bin/sh

if [ "$#" -ne 1 ]; then
        echo "Usage: $0 <hostname>" >&2
        exit 1
fi

HOSTNAME=$1

# Generate a private key if we don't already have one
if [ ! -f "${HOSTNAME}.key" ]; then
        openssl genrsa -out "${HOSTNAME}.key" 2048
fi

# Build a per-host config from the organisation template
cp cert.cfg "${HOSTNAME}.cfg"
echo >> "${HOSTNAME}.cfg"
echo "cn = ${HOSTNAME}" >> "${HOSTNAME}.cfg"

certtool --generate-request \
        --load-privkey "${HOSTNAME}.key" \
        --outfile "${HOSTNAME}.csr" \
        --template "${HOSTNAME}.cfg"

if [ -f "${HOSTNAME}.csr" ]; then
        echo "${HOSTNAME}.csr"
fi


This requires a cert.cfg that provides the basic information for your organisation:

# X.509 Certificate options
#
# DN options

organization = "University of Weevils"
unit = "Department of Creepy Crawlies"
locality = "Winchester"
state = "Hampshire"
country = GB

# Whether this certificate will be used to sign data (needed
# in TLS DHE ciphersuites).
signing_key

# Whether this certificate will be used for a TLS client
tls_www_client

# Whether this certificate will be used for a TLS server
tls_www_server

# Whether this certificate will be used to encrypt data (needed
# in TLS RSA ciphersuites). Note that it is preferred to use different
# keys for encryption and signing.
encryption_key


The resulting .csr should be sent to your certificate authority for signing into a certificate (.crt).