LTE Advanced may finally mean the wireless future is here

For many of us who work remotely, Starbucks has always been a safe haven for connectivity when out and about!  Great coffee, a great smell and, most of all, good free connectivity.  Whilst I loved the unlimited data that the Three network gave me, I found that in many city centre locations connectivity was patchy.  As part of my fleet upgrade, and the move to the new LTE-enabled Apple Watch Series 3, I had to move over to EE.  Whilst I still miss Three's unlimited data, the upgrade in speed and range has been incredible.  Even though I am still using an iPhone 6S at this current time, I'm seeing 200-300meg connections nearly everywhere I go.

Today I did my typical Sunday morning thing of jumping in the car and heading to a Starbucks for a bit of research, study and surfing, and found that the WiFi in the Starbucks in question was very patchy.  I've observed this behaviour before in other shopping centres and airports, where a free WiFi network seems to interfere with the in-store WiFi.

Here's a look at the typical ping test page that I use here in the UK: packet loss, and pings all over the place:

Typical Coffee Shop Wifi

Typical coffee shop wifi

That same test running on 4G, after all my cloud services had updated, settles into a standard 60ms ping, stable as a rock!

4G LTE Advanced

4G LTE Advanced on tether to iPhone
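What "patchy" versus "stable as a rock" means can be put into numbers with packet loss and jitter. Here's a minimal sketch that summarises a ping run; the sample RTTs are illustrative values I've made up to mirror the two tests above, not real logs:

```python
from statistics import mean, stdev

def ping_stats(rtts_ms, sent):
    """Summarise a ping run: rtts_ms holds the RTTs of the replies
    that came back; sent is the number of echo requests issued."""
    loss_pct = 100.0 * (sent - len(rtts_ms)) / sent
    jitter = stdev(rtts_ms) if len(rtts_ms) > 1 else 0.0
    return {"loss_pct": loss_pct, "avg_ms": mean(rtts_ms), "jitter_ms": jitter}

# Illustrative numbers only: a flaky coffee-shop run vs a steady 4G tether.
wifi = ping_stats([48, 210, 95, 900, 60, 400], sent=10)   # 4 of 10 lost
lte  = ping_stats([59, 61, 60, 60, 62, 58, 60, 61, 60, 59], sent=10)
print(wifi["loss_pct"])      # → 40.0
print(round(lte["avg_ms"]))  # → 60
```

The 4G run's jitter comes out at a couple of milliseconds; the WiFi run's is in the hundreds, which is exactly what "ping all over the place" looks like on a graph.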

Which got me thinking: as 4G, 4.5G and 5G services roll out, are we genuinely going to enter a time when WiFi becomes the backup?  If the providers can get their fingers out and create entire device packages (as EE have for the Apple Watch) – maybe £30 per month for the phone, and 5 quid for each subsequent device you want to add (iPad/Watch/laptop) – I'd certainly be first in the queue!

After 5 years it’s time to replace my Drobo with a Drobo (FS ->5N2)

I have been using a Drobo FS file store for over 5 years.  I purchased it back in 2011/2012 and started out with 2 x 2TB hard disks (which at the time seemed a huge amount).  Over the years I've added and replaced drives, both because of failures and for upgrades.  I've gone from 2TB to 4TB, to 10TB, to its current state of 13.54TB.

Screen Shot 2017 08 26 at 13 44 37

In all this time I've lost no data and have survived 2 drive failures. The reasons for buying a Drobo vs the likes of Synology or QNAP are well documented.  Whilst I would have had to buy a QNAP system and 4 matched 2TB disks all in one go, with the Drobo I've been able to buy disks as I needed them.  I checked the receipt recently: when I first bought the system my 2TB drives were just over £250 each.  Fast forward 5 years, and a decent 4TB drive is now just over £100.  This flexibility is just too convenient and leads to a near complacency.
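Buying drives as prices fall really does add up. Using the two receipt figures above, the cost per terabyte has dropped five-fold over those years:

```python
def cost_per_tb(price_gbp, capacity_tb):
    """Simple price-per-terabyte calculation."""
    return price_gbp / capacity_tb

# Figures from the post: ~£250 for 2TB in 2011/12, ~£100 for 4TB in 2017.
then = cost_per_tb(250, 2)   # £125 per TB
now  = cost_per_tb(100, 4)   # £25 per TB
print(then / now)            # → 5.0, i.e. a 5x drop in £/TB
```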

So why am I changing?

Firstly, I'm moving to another Drobo, and secondly I'm not replacing but augmenting.  My plan is to add one of the new 5N2s with 3 x Seagate 10TB IronWolf Pros and a 240GB mSATA accelerator SSD.  In due course I'll likely migrate most of the FS data onto the new 5N2, but I will have a period where I run them side by side.  I've still not fully decided if I'll sell on the FS and move over wholesale.  Certainly moving 15TB (the contents of my FS and some USB-attached drives on my server) will take a good few days, so I'll maybe put that decision off until later in the year.
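"A good few days" is easy to sanity-check. A rough sketch, where the efficiency factor is my own assumption folding in protocol overhead, small files and the NAS's actual disk throughput:

```python
def transfer_days(data_tb, link_mbit_s, efficiency=0.7):
    """Rough wall-clock estimate for a bulk copy. The efficiency
    figure is an assumption; real NAS-to-NAS copies are often slower."""
    bits = data_tb * 1e12 * 8
    seconds = bits / (link_mbit_s * 1e6 * efficiency)
    return seconds / 86400

# 15 TB over a gigabit LAN at 70% efficiency:
print(round(transfer_days(15, 1000), 1))  # → 2.0 days of continuous copying
```

In practice the FS's own read speed will be the bottleneck, not the network, so multiple days is the realistic answer.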

What’s new in 5N2

I'd looked at both the B810n (the 8-bay SMB system) and the 5N2 which, when I began my decision making, was very new.  I decided to go for the 5N2 after considering the facts and realising that it would give me 40TB of usable storage for under £2,200 at today's prices (and hopefully less as drives drop in cost).  Secondly, SSD acceleration and tiered storage are now available in the 5-bay format, which should make for much greater disk performance.
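Where does that 40TB come from? In single-redundancy mode, BeyondRAID's usable capacity is roughly the sum of the drives minus the largest one. This is a simplification (Drobo's own capacity calculator is the authoritative source), but it matches the figures in this post:

```python
def beyondraid_usable_tb(drives_tb, dual_redundancy=False):
    """Rough BeyondRAID capacity estimate: total minus the largest
    drive (minus the two largest in dual-redundancy mode)."""
    spare = sorted(drives_tb)[-2:] if dual_redundancy else sorted(drives_tb)[-1:]
    return sum(drives_tb) - sum(spare)

print(beyondraid_usable_tb([10, 10, 10]))          # → 20 (the initial 3-drive fit)
print(beyondraid_usable_tb([10, 10, 10, 10, 10]))  # → 40 (all 5 bays filled)
```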

Drobo 5N2 m.SATA Accelerator.jpg

Couple that with the new dual gigabit network ports and the latest generation Seagate IronWolf drives with their 210MB/s read speeds, and the whole system should be far more flexible and give me the capacity I need for the next 4 or 5 years.

Drobo 5N2 dual gigabit

Glasgow – Bars, Cafes and Restaurants – WiFi for the Workers I – League Table

I'm happy to see that wireless internet is becoming prevalent in a lot of businesses in Glasgow.  I'm not talking about crappy 2 or 3 meg services, but useful speeds of 20+ meg.  I'm starting a league table for these.

Redmond’s of Dennistoun (currently #1)

304 Duke St, Glasgow G31 1RZ

Down: 93.2 Mbit/s

Up: 13.7 Mbit/s

Redmonds of Dennistoun - broadband

Redmonds of Dennistoun

St Lukes (currently #2)

Calton, 17 Bain St, Glasgow G40 2JZ

Down: 27.9 Mbit/s

Up: 13.6 Mbit/s

St Lukes, Gallowgate

St Lukes, Gallowgate
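As the league table grows, ranking venues becomes a one-line sort. A sketch using the figures above (the field names are my own, not from any real dataset):

```python
# Hypothetical structure for the league table; speeds in Mbit/s from the post.
venues = [
    {"name": "St Lukes, Calton", "down": 27.9, "up": 13.6},
    {"name": "Redmond's of Dennistoun", "down": 93.2, "up": 13.7},
]

# Rank by download speed, fastest first.
table = sorted(venues, key=lambda v: v["down"], reverse=True)
for rank, v in enumerate(table, start=1):
    print(f"#{rank} {v['name']}: {v['down']}/{v['up']}")
# → #1 Redmond's of Dennistoun: 93.2/13.7
# → #2 St Lukes, Calton: 27.9/13.6
```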


Making the web come to you rather than you going to it – RSS with Feedly

For some time now I've found myself feeling overwhelmed by the sheer amount of information from sites, Twitter and social networks.  There's too much noise, too many articles, and too many chances to miss something.  I realised, after taking an audit of my daily browsing history, that I visit much the same sites in sequence.

I’d used RSS some time ago but not had it stick.  I decided the best course of action was to take a look at the best RSS readers on the market, figure out what I needed and design, test and utilise a new workflow.

My hard requirements were that, because I work across 3 Macs, an iPad Pro and an iPhone, the selected solution must fulfil the following:

  • Synchronise starred, read and new articles across all devices
  • Poll regularly and use rich notifications
  • Allow automations (via IFTTT or AppleScript)

I quickly learned that, as well as the application of choice, there was a further decision to make: the syndication provider.  In order to get simple syncing across devices I decided to go with Feedly, due to its good reviews and the wide set of sites covered.  Furthermore, my logic was that if I built my RSS configuration in Feedly, I was free to move between different reader tools, and utilise my content in other ways (perhaps on an Apple TV or such in future).
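That portability argument works because feed lists travel as OPML, a simple XML format that most readers (Feedly included) can import and export. A minimal sketch of building one with the standard library; the example blog and URL are hypothetical:

```python
import xml.etree.ElementTree as ET

def feeds_to_opml(feeds):
    """Serialise a list of (title, xml_url) pairs to OPML, the portable
    subscription-list format shared by most RSS readers."""
    opml = ET.Element("opml", version="2.0")
    ET.SubElement(opml, "head")
    body = ET.SubElement(opml, "body")
    for title, url in feeds:
        ET.SubElement(body, "outline", type="rss", text=title, xmlUrl=url)
    return ET.tostring(opml, encoding="unicode")

doc = feeds_to_opml([("Example Blog", "https://example.com/feed.xml")])
print("xmlUrl" in doc)  # → True
```

Because the subscription list is just this file, swapping Reeder for another client (or Feedly for another provider) costs nothing.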



I started by signing up for Feedly (with a Feedly account) – I've moved away from SSO via Google and Facebook due to some really bad security breaches on my old Facebook account this year.  Once set up, I installed the Safari plug-in (for easier configuration) and added a few basic sites and categories to my feed.  When I was happy with this, it was time to move onto the reader setup. I ended up settling on Reeder 3 for OSX and Reeder for iOS, both from the same developer; the Feedly account is simply entered on both, and bang, within a few minutes all was synced.  I set my feeds to refresh every 5 minutes on the Macs, and every 15 on the iOS devices (for better battery life).

With everything set up I tried it out.  I found it fastest to get through the big morning glut of articles on my Mac, simply using the arrow keys to fly through the list: down for next, up for previous, right arrow to see the article itself and left to cycle back.  My workflow is pretty simple: I star things I might want to share, review or blog on later, and the starred view keeps the articles I want to reread. The strangest feeling is that I can check all my sources in 5 minutes rather than the original 60.  At first it feels like I am somehow being cheated, but in actual fact what I'm seeing is the time saving.  I now quickly scan my menu bar for the Reeder 3 icon, which shows the number of unread articles, and skip through these between calls or work.

Feedly Running

Feedly Running

I'm pretty happy with the solution, and it has certainly freed me up greatly.  I really like how pages with YouTube videos from video channels are also included, and how I can access those videos from within the tool – it's very neat!  So give it a go: stop being a slave to your web browser and make the web deliver to you.

How to utilise a gig – Project 2: Rip and replace the router


The second thing you do, after you get used to the sheer speed of a gig symmetric service, is start to wonder why you can only hit 850/900mbit regularly on most speed tests.  Shortly after that, you wonder why you struggle to max out a torrent beyond 30meg/sec.  This was the strange set of events that led me to do a bit of a deep dive and find an unexpected gem.

So yeah, I went from 15 -> 820meg speed tests, and yet the geek in me wondered: where had those other 180 meg gone?  Now, I grant you that at these speeds it's a purely theoretical, intellectual exercise, but nonetheless, where had that missing 20 percent gone?  Research quickly finds that around 900 meg is realistic with overhead on any gig service, so we're hunting 80-100 meg.  A bit of fishing around reveals that the slow torrent speeds are likely the result of a router that can't handle the number of connections, plus the fact that most SOHO routers (ZTE in the case of Hyperoptic) just can't give you full line speed, port to port, all the time.  One of the nice things about the Hyperoptic service is that it's basically an RJ-45 jack with DHCP: plug in any router and it will come up. So which router?  After several reads of the forums, I went for a Ubiquiti Networks EdgeRouter Lite (ERLite-3).  I've loved Vyatta since long before it was purchased by Brocade, and the reviews professed the hardware's ultra-high speed with low overhead; indeed you can offload most of the packet processing to its hardware.
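Most of that "missing" bandwidth is accounted for before the router is even involved: every TCP segment carries IP and TCP headers, and every Ethernet frame carries framing bytes that never show up in a speed test. A best-case sketch (ignoring retransmits, ACK traffic and TCP options):

```python
def tcp_goodput_mbit(line_mbit=1000, mtu=1500):
    """Best-case TCP payload rate on Ethernet: per 'mtu' bytes of IP
    packet we lose 40 bytes to IP+TCP headers, plus 38 bytes of
    Ethernet framing (preamble, header, FCS, inter-frame gap)."""
    payload = mtu - 40          # bytes of actual data per packet
    on_wire = mtu + 38          # bytes occupying the wire per packet
    return line_mbit * payload / on_wire

print(round(tcp_goodput_mbit()))          # → 949: ~950 is the ceiling on gig
print(round(tcp_goodput_mbit(mtu=9000)))  # → 991 with jumbo frames
```

So roughly 50 meg of the shortfall is pure protocol overhead; the rest is down to the router and the far-end server.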

I picked mine up at Amazon for under £70.  Out of the box it's a pretty sturdy wee beast.  Once I'd ordered, I started googling and found two fantastic videos on setup.

The device is pretty simple to set up if you understand networking in any way: plug a cable into the eth0 port, you'll get a dynamic IP (if not, just set your IP in the range), then browse to the router's IP.  Once in, run the setup assistant (shown in the video); there are various different ways to configure the 3 network ports on the device.  I went for WAN+2LAN, which makes eth0 the LAN1, eth1 the WAN and eth2 the LAN2.  I have a subnet for my lab, so will eventually use this segregation to route VPN traffic in.

Performance once configured is pretty breathtaking.  I only tweaked the MTU on the LAN port (eth0), as my home switch supports jumbo frames:

Here's a wee graphic of it doing a quick 861, though I can regularly burst 920-970 depending on the time of day and the location of the end server.  In short: buy it!


How to utilise a gig – Project 1: Home Media Server [Part 1] – Hardware and Software

Before reading this article, I should admit that I've had the Plex server for a number of years.  The additional features afforded by the gigabit WAN link are primarily for sharing media outwith the local network, with family and on remote devices (Apple TVs in the case of this article).

So as I wrote in my last article, the next few weeks will chronicle some of the services that you can enable using the higher speeds afforded by gigabit connectivity.  This first article is a how-to on setting up a Media server.

So let's take a look at some background: what you might want a media server for, what facilities it can offer, what we need to buy and the cost of the software.  At its most basic, a media server is a computing box, Windows, Linux or OSX based, which runs dedicated media server software and allows connected devices such as set-top boxes, tablets, phones or smart TVs to connect to it.  The media is loaded onto the server's storage, and when browsing to it via an app/plugin the service behaves much like your own personal Netflix.  The key difference is that you have the actual media files, ripped/downloaded etc., and have full control. The more feature-rich implementations also offer facilities for playlists, programme art and so on.

So how do we do this?

Software/Media Centre Platform

The first question is which media server software we are going to use. There's plenty on the market.  Two of the most popular are XBMC and Plex, and whilst the applications come from a similar root, they achieve their aim in very different ways.

XBMC is very much a media player, whereas Plex is best thought of as a client-server model: you run a server with your media (this could even be your laptop) and then use a client on end devices (Apple TV / Roku / Xbox One / smart TVs).  Whilst XBMC is more configurable and more feature-rich, it is definitely not as stable, or as well supported, as Plex.
The company behind Plex also offer paid subscriptions, which make sharing your server remotely over the web, and various other tasks, more easily achieved.  It's for this reason that I moved to Plex some years ago, and I don't miss XBMC or its stability/complexity issues.

Because of this approach, we are going to dedicate a box to the task of running the media server.  Our Plex server will house all our media and run the software to power the endpoints, which in our case will be the Plex client on Apple TV 4s.


So with our software chosen, the next consideration is the "box" on which we'll run our Plex Media Server.  Your choices here are far more varied.

Mac Mini in hands

2012+ Mac Mini- it’s really this small

My box of choice is, unsurprisingly, a Mac, and whilst I am frequently accused of bias when it comes to hardware, in this case the selection is justified.  The Mac Mini (2012 onwards) is one of the smallest, neatest and most feature-rich boxes that you can use as a media server.  It can be specified up to quad-core i7 processors and 16GB of RAM, and can have multiple gigabit networking ports.  On top of that it comes with built-in WiFi, and runs near silent.

Front and back of boxes, small but plenty of expansion options!

Front and back of boxes, small but plenty of expansion options!

As you can see from the pictures on the left, it's really small, with a one-piece design.  Even nicer is the fact that there's no bulky power supply – just a simple figure-eight connector.  On the back you'll find a multitude of ports to connect network cables, USB 3.0 and Thunderbolt devices, meaning you are spoilt for choice on how to go about adding storage as your media server grows, be that network or direct-attached disks.

New, the boxes range in price from £400-1000 depending on the specification.  The i5 processor versions are dual-core, whereas the i7 versions offer quad-core processing.  Memory-wise, all the boxes will take 16GB of RAM, and given the price of RAM these days, a bump to 16GB should be the first thing you do.   The box can take internal drives, and there are various options to install 2 disks – more on that later.

I'd recommend shopping around on eBay or Gumtree; just as an example, I found these two:

Gumtree – 2012 i5 2.5GHz with 4GB RAM for only £295

eBay – 2013 i5 2.3GHz with 8GB RAM for only £349

Either of these boxes will be ideal, and the memory upgrade is now under £60 for 16GB.

So that's our hardware for around £350.  One wee note: if you are planning to run the server headless (i.e. you want to pop it into a cupboard and hide it away), then due to a quirk of a lot of Intel-based boxes, you'll want to buy one of these HDMI dongles.  What does this dongle do?  Without getting too in depth: if your Mac (or PC) detects that it doesn't have a monitor plugged in, it will not enable hardware acceleration of graphics, which results in a f_cked up image display when you remotely access the box. Just know that if you want it headless, order one of these dongles.

Installing PLEX

First, give your Mac Mini a fixed IP address; you can do this easily in System Preferences.  This means you'll easily be able to find and troubleshoot the box if you have any errors further down the line.

If you've used a Mac before, installing Plex is exactly the same as any other Mac application. Head on over to Plex's site and download the latest Media Server software.  You'll get a DMG, which you want to copy onto your server box and install.  You'll know that it's installed as you'll get an icon that looks like a right-pointing arrow in the menu bar of your Mac (next to the clock).

There are a few good housekeeping tasks, such as setting power management to keep the disks spinning for an hour and disabling any auto-sleep functions.  I also tend to set the Plex application to open on startup (just in case you should lose power).  If you only have a single user login for the server, then you can also set the box to log in automatically (not the most secure, but manageable).

A decent Mac Mini (post-2012) with 16GB RAM and a Core i5 or above will happily serve a large household – I have managed to run 16 1080p streams to end devices, and transcode (to be explained later) to multiple iPads, and the server still wasn't running at anything close to 100 percent CPU.  Better yet, because we're using OSX, we can add server functions and have this box perform other tasks for us.
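The network side of those 16 streams is easy to reason about too. A sketch of the headroom calculation; the per-stream bitrate is an assumption (a typical 1080p file runs roughly 10-20 Mbit/s), and the 20% reserve is my own margin for bursts and other traffic:

```python
def streams_fit(nic_mbit, stream_mbit, n_streams, headroom=0.8):
    """Can a NIC carry n simultaneous streams? 'headroom' reserves a
    fraction of the link for bursts and non-Plex traffic (assumption)."""
    return n_streams * stream_mbit <= nic_mbit * headroom

# Worst case 20 Mbit/s per 1080p stream on a single gigabit port:
print(streams_fit(1000, 20, 16))  # → True: 320 Mbit/s fits comfortably
print(streams_fit(1000, 20, 45))  # → False: 900 Mbit/s blows the budget
```

In other words, on direct-play streams the gigabit NIC is never the bottleneck for a household; it's transcoding that eats the CPU.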

In Part 2 of this walkthrough we'll configure Plex, import media and view the library from a client.


Well it's here – Hyperoptic, and it's immense

If you've read the blog for any amount of time, or even if you have stumbled onto it recently, then you'll know that a big project in my life has been trying to improve the poor state of broadband provision in the building in which I live.  As I've already touched on, I don't live in the boonies, in a new town (with aluminium cables), or really any place that has an excuse for poor broadband provision.  Even so, and despite living in central Glasgow (a very large city), I was unable to get anything beyond 17Megabit from BT.  Taking matters into my own hands, I petitioned our residents' association and have successfully had gigabit broadband (ultraband?) fitted by Hyperoptic.  First fit was in December, and fibre was brought in in late January.  On Wednesday (30th March) we went live, and oh boy!

Here's a speed test from 9.00am on Thursday.  I was interested to see if there were any contention issues at peak, and hitting 750mbit down and 880 up is very close to perfect.  In fact, even at these peak times I'm able to run these tests time after time (10, in fact) and hit the same results.

Screen Shot 2016-03-31 at 09.23.52

Better than that, at off-peak times (3am) I have managed to hit 940mbit down / 970mbit up – incredible.

So that's the headline figures out of the way – now what?  And that's a good question.  Things to cross off:

  • iPlayer, Netflix and Amazon are all instant, as is YouTube 4K.
  • Downloads are as fast as the server can give you (50mbit – 100mbit on the better ones).
  • Content Distribution Network (CDN) backed services are ridiculous (hard to measure) – for fun downloading an album or playlist from Apple Music to have offline is nearly instant!
  • There’s no slow down EVER!
  • Zero packet loss (eat that, Sky!)

I'll be doing some other experiments with this amazing new service in due course: VPN, site-to-site, Plex serving.  All that to come – I'm off to download more of the web! 🙂



Journey to Gig/Gig broadband with Hyperoptic

I live in the centre of a major Scottish city which, according to Wikipedia, has an urban population of 1.75 million people.  That's no small number of people.  When I bought my place, in a large new development, I just assumed that when Infinity began rolling out it'd be available pretty much after launch.  Well, thanks to the mysteries of BT planning, I've sat for the last 5 years and watched as promised date after promised date has slipped.

Late in 2014 it became abundantly clear that BT had no interest in enabling us; all the vague excuses piled up and I realised it was going to have to be a DIY approach for our building.  First the facts: our building is just over 7 years old, and is a steel-frame building with inbuilt ducting, services and false ceilings.  All these things make it super easy to install services.  Secondly, the location and pricing of the building mean that there are a lot of young professionals and home workers (i.e. a captive market).  To me it made no logical sense that the building hadn't been enabled (I've actually come by information that makes me believe it's not a technical reason but a planning/political one holding us back).

So I began my search, which in all honesty I was expecting to be fruitless.  I mean, if BT, the largest telecoms group in the country, can't get us cabled, who can?  My first port of call was Virgin, where I reached out and got positive noises from their Cable My Street team.  I registered my interest, had a few neighbours do the same, and had good conversations with one of their outreach managers on Twitter.  After 6 months, though, we had no commitment, no in-person engagement, and weren't going anywhere.

Frustrated, I looked at the market again and came across Hyperoptic – a company for which, on paper, our building was purpose-made.  I have to confess that due to travel I didn't make the first approach; a neighbour picked up the baton and made the initial contact.  I got re-involved in the summer of 2015, after which time I assumed the role of Hyperoptic Champion for the building and pushed the project.  I started by joining our residents' association, primarily for the single task of getting the fibre fitted, and began to work with the excellent John McCabe at Hyperoptic.  Our residents' association really lacked social media skills; a domain name and a Facebook forum later, I had the ability to reach out to folk and begin campaigning.

I started by speaking to the neighbours I knew in person; however, it's a big development, so I probably only knew 20 percent of the folks there. Hyperoptic require residents to register interest on their page, and have a very transparent tracker.  To assist, I was sent marketing flyers and materials, which allowed me to do a mail drop.  I took the basic materials and made them a bit more personal, with branding and an accompanying letter explaining a few things.  I mail-dropped them in early September, and by the end of that same month we were showing the required number of registrations to move forward.

Hyperoptic were true to their word and surveyed the building, reporting back that they'd be able to fit the service with no issues.  The only fly in the ointment was the lack of service hatches in the ceilings outside the units. Hyperoptic offered a solution: installing these hatches and picking up the cost (the truth being that we should have had these anyway).  As is typical, we had the doubters who thought that these small hatches would "spoil the look", but to be frank I never accepted that, given that we have smoke detectors, lights and so forth there already.  Anyway, a vote sent out by the factor saw no significant objection, and we were given the green light to get the wayleave signed and get Hyperoptic in and fitting.

The internal cabling is high-quality Cat6e; I'm not 100 percent sure of the switch infrastructure, but effectively the fitted network should be able to support 10gig and beyond (technology permitting).  Cabinets were installed at the basement levels to house the switches, and Cat6e cabling was run first below in the car parks, and then up through the 3 blocks of the building (10 storeys).  The install is first class, to the point where, in a straw poll of folks visiting my house, I asked "do you notice anything?", to which the answer was "what?"

The fibre install has been a bit of a bear, with the contractors (BT) missing dates and causing delays; however, we have the fibre into the basement, and jointing is to go ahead.   I'm already confident this service will have massive benefits for our residents.  To go from a 14meg internet connection, which is beginning to struggle to support multiple over-the-top media services, to a symmetric 1gig service is going to be a huge change, and I'm going to blog about how it affects the day-to-day of what media we use and how.   Stay tuned for more articles.

VMUG Advantage – a fantastic new resource for VMware study

I've been in a bit of a refresh cycle since my recent promotion at my company.  Over the years I've attained a ton of professional certifications, and it was about time to make the difficult decisions on what to maintain, what to improve, what to add and what to cut.  Having some time to allocate to technical development, I've been keen to bring my VMware skills up to the new VCP6.0 level (I have aspirations of taking this even higher) and to bring in desktop virtualisation, virtualised networking and orchestration.   Part of my challenge is that I've already done the classroom training, and have a ton of hands-on for VCP510, but need to sit the technical exam.  So the questions, in order, are: 1. how to attain VCP550; 2. how to upgrade 550 -> 610; and then what I can build on in terms of NSX.  A secondary concern is taking these core fields to higher levels.

I have an ESXi box in the house; an investment some time ago means I have a Dell T620 with a healthy 12 x 2.5GHz cores, 5TB of storage and 64GB of RAM.  Nesting labs is relatively easy, but as I want to build a lab that will last, I don't want to be affected by the constant 60-day nag messages that come with using the evaluation version.  I've been googling and came across VMUG Advantage, which on paper seems to solve these issues by:

  • Offering 365-day evaluation licensing for all VMware tools, including vSphere, Horizon and Orchestration
  • Offering discounts on specialised training.
  • Forum access to a global pool of VMware professionals
  • VMware's blessing.

All of the above for US$200, and to be clear, that includes full legal licensing for the full VMware portfolio.  If you think it seems a bit too good to be true, you're not alone, but I can assure you, having signed up, that this is completely legit.  I wish more vendors took this approach to training software.  The signup process is painless, though be warned that the time between signing up for the program and receiving the logins and links is about 48 hours, as it's a manually verified process.  Rest assured the support teams that administer the service are fast and really friendly – and yes, to repeat, you get FULL LEGAL LICENSES!  In hindsight I wish I'd not signed up on a Saturday, as I waited the whole weekend to be activated, but that's a minor grizzle at something on my side.

I've now got a VCP6.0 box (vSphere 6.0 and ESXi 6.0) and a nested lab running an evaluation version of vSphere 5.5 for my first training course.  Nicely, it means that I can practice the upgrade from 5.5 -> 6.0 when the time comes, and at the same time evolve my box to my needs.  I plan to also go through the VCP-NSX 610 course – I qualify for this due to my CCNP and having attained my VCP – again though, that's secondary.  I will be posting more specifics on my lab setup and experiences, but thought the sheer brilliance of this offer just had to be shared.  Take it up – you will not be disappointed!  I've also loaded the box up with a copy of Cisco's VIRL tool with a 30-node license – an OpenStack-underpinned service giving you full access to Cisco images, allowing you to outdo GNS3.

I once heard it said that the best money Cisco ever spent on development was creating their training curriculum and ensuring that IOS images leaked onto the web.   It's all very well being able to buy a few cheap Catalysts and a couple of routers and sit your CCNA, or even your CCNP, but what happens when you need to self-study DC gear, or advanced firewalls with 6-figure price tags?  Often this lack of hands-on results in paper-passed students, who have GNS3'd the sh_t out of it but lack real-world skills.  The only other option is piracy, which I don't condone but can understand.

By allowing students legal, affordable and full access to your technologies, you put in place the foundations for a successful support structure, both in companies and in the wider market as a whole.  VMware (for supporting VMUG Advantage) and Cisco (with VIRL) should both be celebrated for taking these important steps towards enabling driven students and individuals to attain knowledge of their products.

So, over to you: Citrix, Juniper, ALE and others.

New tracks and the first bite of the elephant

I've been working in the networking and WANX space now for about 8 years, and after a while new technology gets to the point where you think: I've seen this before.  I've never been very good at doing the same thing again and again; life is way too short and I get bored way too easily.  With that in mind, I'm making more of a push into something I started on a few years back: virtualisation, of both compute and networking.  I did my VCP training course at Glasgow Caledonian University a few years back, and I loved it, but didn't give it the time I needed.  With my new role in my company, I'm back on the books and determined to get into something new.

My plan is to sit my VCP550 in late March this year, and then spend April and May learning the transition from vSphere 5.x -> 6.0 and take my VCP610 exam.  After that I plan to complete my CCNP R&S (I've already done my SWITCH exam, so it's a case of sitting ROUTE and TSHOOT) to get that box complete; it's not that I don't have the knowledge, it's just about getting round to the exams.  As well as all this, I have Riverbed to recertify, am dropping my Ipanema and Aruba, and somewhere along the line I need to pick up my ITIL Expert (I already became ITIL Foundation certified in February).

The long-term aim is definitely to move into the DC and SDN space – I have an idea to undertake the VMware NSX certification line, and eventually look to be:

ITIL Expert

It's going to be a busy year – something I look forward to!  But you don't eat an elephant in one bite, so first things first: VCP550, 24th March  🙂