5G - A New Level of Flexibility in Network Architectures

Guest Blogger - Joe Neil, Network Architect - Microchip, Inc

5G will enable a combination of advanced network and radio services that together will introduce a new level of flexibility in network architectures. Current models of static DRAN / CRAN networks, complemented by DAS and Small Cells, will be synthesized into very high-speed sliced, virtualized networks served by the Ethernet Elastic RAN (eERAN), enabling true 5GNR wideband, low-latency services to be delivered from any size of radio cluster. Simultaneously, frequency-aligned FDD radio networks will be replaced or complemented by phase-based TDD access, enabling management of the more precise Time Alignment Error at the radio interface that is required by 5GNR advanced coordinated radio services.

Moreover, 5G will require agility and flexibility: the evolution from a monolithic BBU to the split CU / DU, from proprietary fiber to Ethernet on the fronthaul, and to extremely tight phase alignment between RUs, will require a high-performance, high-reliability timing architecture. The ability to rapidly add bandwidth on demand by upgrading the switches and routers in the RAN, to dynamically change the RF TAE requirements according to spectrum and frequency band availability, to deploy eERAN on demand where and as needed, and to seamlessly re-engineer or change upper-layer (OTN, L2, L3) services – all without impacting the critical end-to-end timing of the network – will also be a fundamental attribute of a well-designed, robustly engineered 5G timing infrastructure.

In this discussion we will examine the different options for engineering advanced timing services in this evolving paradigm. 

5G Impact on Synchronisation

As 5G introduces new concepts & new network architectures that will have a deep impact on core, transport and radio network design, how will synchronisation be affected? More than half of the Communication Services Providers surveyed at Mobile World Congress this year said they were in the early stages of rolling out 5G, so timing & synchronisation options should be at the centre of architecture discussions now! In the same survey respondents said the most important requirement in switch/router hardware was "quality 5G timing & synchronisation".

5G's RAN functional decomposition brings some of the philosophies of NFV & SDN to 5G, and its use of CPRI/eCPRI to fronthaul data brings its own synchronisation challenges, with IEEE & ITU liaisons looking at timing aspects of this part of the network. Time Alignment Errors proposed in the new 3GPP specifications for 5G are in some cases dramatically smaller than current requirements, with 65ns being the tightest requirement. (It is worth noting, however, that some of the most stringent tolerances are intra-node only, not transport/network-wide requirements.) Many operators are still in the process of moving from frequency-only sync to providing time & phase sync distribution for existing 4G/LTE-A services.

Who said Timing didn't just arrive from thin air?

We're living in a fast-moving world, and network design and delivery fits that description admirably. Gone are the days when the timing of the network was an integral part of network design and rollout – the technologies that required that level of timing to work effectively are now legacy, or beyond.

Today's network technologies, such as Carrier Ethernet, do not in themselves need extensive timing deployment to operate, but they often carry services at the edge of the network that need timing performance to work. We're moving now from the transport of frequency (generally to ensure mobile network Node Bs have sufficient frequency stability to hold their channels and bandwidth) to LTE-A and beyond, where nodes need to be in phase with each other to deliver ever more demanding services.

As an "old school" timing geek I've often criticised (under my breath, of course!) network designers for believing that "sync is just there", as if it appears out of the ether. Well, as we trial and deploy the IGM-1100i more and more, we are actually seeing this happening.

The picture is typical of what we can deliver. This is really "spot the IGM": the unit is sitting on a cable tray in a breeze-block equipment room with no windows and metal doors. Without any assistance (the IGM supports A-GPS), this IGM locked to GNSS as a Primary Reference Time Clock (PRTC) seeing 8 satellites, allowing it to operate as a PTP GrandMaster within 100ns of UTC!

Typical cost of deployment? One person for at most one day (maybe half a day depending on the extent of the deployment). Roof rental cost for GNSS installation – ZERO. Crane hire to get access to the roof? ZERO. Cost of making good fire breaks? ZERO. Landlord and management costs for planning and delivery of GNSS Antenna installation on the roof and through the building? ZERO.

Time from thin air – I can see it now!!

[Image: Time from Thin Air – IGM-1100i sitting on a cable tray in a breeze block equipment room with metal doors and no windows]


The GPS Week Number Rollover

So, what's the problem? It's sometimes referred to as the "millennium bug of GPS", but what exactly is it?

The GPS system provides navigation & timing by using synchronised messages sent from a constellation of satellites orbiting the earth to a user's receiver – whether that be a dedicated navigation receiver (i.e. "Sat Nav") or a specialised timing receiver. The synchronised messages contain a time code, using a timescale known as "GPSTIME" that is encoded in its own unique way. So here comes the problem: within that encoding system the week number can only be represented by 1024 unique values… so every 1024 weeks the code repeats. The opportunity for a problem occurs when the week number changes from 1023 back to zero, or "rolls over". This has happened once before, on 21st August 1999, since GPS "week zero" originally started on 6th January 1980. So the next one's coming pretty soon… 6th April 2019 to be precise.
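The arithmetic above is simple enough to sketch. The following is a minimal illustration (not production receiver code): the 10-bit week field wraps every 1024 weeks, and the printed dates are the Sundays on which each new cycle begins (the rollover itself happens at GPS midnight at the end of the preceding Saturday, matching the 21st August 1999 and 6th April 2019 dates in the text). The `resolve_week` helper is a hypothetical example of the common "pivot date" mitigation receivers use to pick the correct 1024-week cycle.

```python
from datetime import date, timedelta

GPS_EPOCH = date(1980, 1, 6)   # start of GPS "week zero"
WEEK_BITS = 10                 # width of the legacy nav-message week field
MODULUS = 2 ** WEEK_BITS       # 1024 weeks per rollover cycle

def rollover_date(n):
    """Date (Sunday) on which the n-th rollover cycle of GPSTIME begins."""
    return GPS_EPOCH + timedelta(weeks=n * MODULUS)

def resolve_week(week_mod_1024, min_full_week):
    """Recover a full week number from the 10-bit value, assuming the true
    week is no earlier than min_full_week (a 'pivot' the firmware supplies,
    e.g. derived from its build date). Hypothetical illustration only."""
    full = min_full_week - (min_full_week % MODULUS) + week_mod_1024
    if full < min_full_week:
        full += MODULUS
    return full

print(rollover_date(1))  # 1999-08-22: first rollover cycle begins
print(rollover_date(2))  # 2019-04-07: second rollover cycle begins
```

A receiver whose pivot date was never updated is exactly the "~10 years without a firmware update" case discussed below: its recovered week lands a whole 1024-week cycle in the past.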

So what might happen? Well, older receivers might get confused and think it's 1999 again... or worse 1980! As this has happened once already the GPS industry as a whole should be prepared for it, and many receivers will carry on working correctly without issue. You should only be concerned if you have a receiver that has been in continuous operation for more than ~10 years without any firmware update or if your receiver forms part of a critical system - can you afford to just wait and see?
The first port of call should be the GPS receiver's manufacturer (or their service/support representatives) to see what they have to say – for example, whether the receiver and its software have been tested to handle the rollover without issue. If that's not possible then you may have to resort to provocative testing with GPS simulators that can reproduce the exact conditions of a week rollover scenario.

How any failure will manifest itself is difficult to say; as it's a software bug it depends on so many other things unique to each manufacturer's implementation... some receivers may carry on apparently unaffected apart from the wrong date, some may reboot/restart and sort themselves out whilst others may just fail and not recover.

For more information: GNSS Vulnerabilities LinkedIn group

Introduction to Timing and Synchronisation

Timing-dependent applications rely on their clocks being correct within the tolerances required by that technology. To achieve this, timing is transferred between clocks using various methods that make frequency, phase or time of the required quality available to the application. The activity of transferring time between clocks is known as synchronisation.

Timing and synchronisation quality can be described by two factors: accuracy and stability. Accuracy is the measure of how closely the clock compares to a reference or target value; this can be a global standard for time such as UTC, or a desired frequency such as 10MHz. Stability is the measure of the variance of the clock when observed over a period of time.
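As a toy illustration of the two factors (the sample values are invented): given a series of time-error measurements of a clock against a reference, the mean offset speaks to accuracy, while the spread of the samples over the observation period speaks to stability.

```python
import statistics

# Hypothetical time-error samples (ns) of a clock measured against a reference.
samples_ns = [102, 98, 101, 99, 103, 97, 100, 100]

# Accuracy: how far, on average, the clock sits from the reference.
accuracy_ns = statistics.mean(samples_ns)

# Stability: how much the clock wanders around its own mean over time.
stability_ns = statistics.stdev(samples_ns)

print(f"mean offset: {accuracy_ns:.1f} ns, spread: {stability_ns:.2f} ns")
```

Note that the two are independent: a clock can be very stable yet inaccurate (a constant 100ns offset, as above), or accurate on average yet unstable.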

In order to quantify the quality of timing and synchronisation, many different timing metrics exist. These are specifically designed for the timing technologies they are measuring and the characteristics of the signal that are of interest. Measurements must be made against another clock of known and dependable quality, during relevant network or environmental conditions, and over a period of time long enough to fully characterise the measured clock.
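One widely used metric of this kind is MTIE (Maximum Time Interval Error, defined in ITU-T G.810): the largest peak-to-peak time error observed in any window of a given length sliding across the measurement. A naive, unoptimised sketch, using invented sample values:

```python
def mtie(time_error, window):
    """Naive MTIE: largest peak-to-peak time error seen across all
    sliding windows of `window` consecutive samples. O(n*w); real
    analysers use more efficient algorithms over many window sizes."""
    return max(
        max(time_error[i:i + window]) - min(time_error[i:i + window])
        for i in range(len(time_error) - window + 1)
    )

# Hypothetical time-error samples (ns) against a reference clock.
te_ns = [0, 5, 3, -2, 4, 1, -1, 6]
print(mtie(te_ns, 4))  # worst peak-to-peak error over any 4-sample window
```

In practice MTIE is evaluated over a whole range of observation intervals and compared against the masks given in the relevant ITU-T recommendations.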
