
The Secrets Hidden in CPRI

If you want to start a riot in a room full of radio engineers, start talking about CPRI and watch those conference muffins fly.   




The Common Public Radio Interface (CPRI) was defined by a cooperation of major radio equipment vendors to connect two functions in the cellular radio network: it links the Remote Radio Units (RRU) with the Base Band Units (BBU) over fibre.

CPRI’s first challenge is its extremely strict jitter and latency budgets – it’s a serial interface, and round-trip latency must be kept in the range of 100 µs.

The second issue is that it has become a largely proprietary interface, making mixing and matching BBU and RRU vendors a nearly impossible task – leading to complaints about vendor lock-in.

But the data carried over the CPRI interface isn’t only a burden: it can be used to detect and predict network issues, as long as the right high-performance unsupervised learning algorithms are applied.

The final problem is 5G. LTE already requires high-bandwidth links: for example, 10 MHz of spectrum feeding a 2×2 MIMO antenna requires about 1.2 Gbps of CPRI capacity. In 5G – and especially in the mmWave range – the CPRI data streams explode in size, and the network will soon run out of fibre.
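The 1.2 Gbps figure can be reproduced with a back-of-envelope calculation. The overheads used here (one control word per 15 data words, 8b/10b line coding, 15-bit I and Q samples) are assumptions based on the published CPRI line rates, not details from the article:

```python
# Back-of-envelope CPRI line-rate sketch for 10 MHz LTE with 2x2 MIMO.
sample_rate = 15.36e6        # samples/s per antenna for 10 MHz LTE
antennas = 2                 # 2x2 MIMO: one IQ stream per antenna port
bits_per_sample = 30         # 15-bit I + 15-bit Q per sample
control_overhead = 16 / 15   # one control word per 15 data words
line_coding = 10 / 8         # 8b/10b line coding

rate_bps = (sample_rate * antennas * bits_per_sample
            * control_overhead * line_coding)
print(f"{rate_bps / 1e9:.4f} Gbps")  # -> 1.2288 Gbps
```

That lands exactly on the 1.2288 Gbps CPRI line-rate option, which is where the "1.2 Gbps" in the text comes from.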

Luckily, eCPRI (enhanced) was introduced to address these problems – and it turns out it can also be used to detect and predict network issues, as we discovered when we applied high-performance, unsupervised learning to huge volumes of power measurements extracted from the interface.

All this data came from the ENCQOR 5G testbed network: a state-of-the-art 5G platform deployed in five cities in Canada using mid-band (n78 / 3.5 GHz) and experimental mmWave (n261 / 28 GHz) spectrum.


The CPRI data was flying at over 1 Gbps, and to be able to predict anomalies in the network we needed to extract, process, and analyze it in under 10 ms. As a basis for comparison, that’s over 20 times faster than the average person’s reaction time. (Want proof? Check out this little test: https://humanbenchmark.com/tests/reactiontime.)

The useful power measurements we needed were in the IQ portion of the interface (In-phase and Quadrature signal samples). With some bit-shifting we were able to filter out and discard over 850 Mbps of data and distribute what was left into 1000 payloads of over 98 000 datapoints each – at 1 payload per millisecond (ms).
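The bit-shifting step might look something like this. The word layout (I in the high 15 bits, Q in the low 15 bits of a 30-bit container) is an illustrative assumption, not the article’s exact mapping:

```python
import numpy as np

def extract_iq(words, bits=15):
    """Unpack sign-extended I/Q samples from 30-bit container words.
    Assumed layout: I in the high 15 bits, Q in the low 15 bits."""
    mask = (1 << bits) - 1
    sign = 1 << (bits - 1)
    i = (words >> bits) & mask
    q = words & mask
    # Sign-extend the 15-bit two's-complement values
    i = (i ^ sign) - sign
    q = (q ^ sign) - sign
    return i, q

# Example: a word packing I = -3 and Q = 5
word = np.array([((-3 & 0x7FFF) << 15) | 5], dtype=np.int64)
i, q = extract_iq(word)
print(i[0], q[0])  # -> -3 5
```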

The fun part came once the raw data was in the payloads. It involved converting the datapoints into complex numbers, reshaping the data, and performing a Fourier transform on it to convert it from the time domain to the frequency domain.
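A minimal sketch of that preprocessing chain, with numpy; the FFT length is an illustrative assumption:

```python
import numpy as np

def to_power_spectrum(i, q, n_fft=1024):
    """Turn raw I/Q datapoints into normalized per-frequency power:
    complex conversion -> reshape -> FFT (time to frequency domain)."""
    x = i.astype(np.complex128) + 1j * q              # complex baseband samples
    x = x[: (len(x) // n_fft) * n_fft].reshape(-1, n_fft)  # one FFT per row
    spectrum = np.fft.fft(x, axis=1)                  # time -> frequency domain
    power = np.abs(spectrum) ** 2
    return power / power.max()                        # normalize to [0, 1]
```

A pure tone fed through this pipeline shows up as a single bright frequency bin – which is exactly what makes per-frequency anomaly hunting possible.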



Fig. 1, Preprocessed IQ data, where the x-axis represents frequency and the y-axis represents the normalized power measurement. This graph was obtained after using the autoencoder to select the anomaly candidates. Blue dots represent normal data and red dots represent anomaly candidates.


Detecting which points were truly anomalies was a challenge because we did not have any labels or classifications. We therefore needed an unsupervised learning approach that allowed for high performance and parallel processing. We settled on a deep neural network model, using autoencoders on data that had been reduced to a lower dimension. By itself it captured too many points, so we then applied binning and thresholding to improve the output, as shown in the figures below.
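The idea can be sketched with a stand-in model. The article doesn’t detail the autoencoder’s architecture, so here a linear autoencoder (equivalent to PCA, built from the top singular vectors) plays the role of the deep model; the latent dimension and quantile are illustrative assumptions. The principle is the same: points that reconstruct poorly from the low-dimensional code become anomaly candidates.

```python
import numpy as np

def anomaly_candidates(power, latent_dim=8, quantile=0.99):
    """Flag anomaly candidates by autoencoder reconstruction error.
    A linear autoencoder (equivalent to PCA) stands in for the deep
    model used in the article; hyperparameters are illustrative."""
    X = power - power.mean(axis=0)
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    W = vt[:latent_dim]                # decoder rows = top singular vectors
    recon = (X @ W.T) @ W              # encode to latent_dim, then decode
    err = ((X - recon) ** 2).sum(axis=1)   # per-sample reconstruction error
    return err > np.quantile(err, quantile)   # boolean candidate mask
```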



Fig. 2, On the left is the histogram of anomalies, where every 32 sub-frequencies are grouped together and a threshold is indicated with a dashed red line. Bins with a value greater than the threshold are then considered anomalous sub-frequencies. On the right-hand side is the original data with the anomalous bins highlighted.
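The binning-and-thresholding step can be sketched as follows. The bin size of 32 sub-frequencies comes from the caption; the total frequency count and the threshold value are illustrative assumptions:

```python
import numpy as np

def anomalous_bins(candidate_freqs, n_freqs=1024, bin_size=32, threshold=5):
    """Group candidate sub-frequencies into bins of 32 and keep only the
    bins whose candidate count exceeds the threshold (the dashed line)."""
    edges = np.arange(0, n_freqs + bin_size, bin_size)
    hist, _ = np.histogram(candidate_freqs, bins=edges)
    return np.flatnonzero(hist > threshold)   # indices of anomalous bins
```

A handful of scattered false positives each land in different bins and stay below the threshold, while a genuine fault concentrates many candidates in one bin and pushes it over.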


This model allowed us to detect anomalous power measurements and associate them with the specific antenna frequencies causing the problems. Because we can process the data so quickly (under 10 ms), problems that would normally go undetected can be identified, and alerts or corrective action can be taken before any customers or industrial applications are affected.


Fig. 3, Favouring reactiveness vs. accuracy in ML-based anomaly detection


Our general approach has been to favour reactiveness over accuracy: warnings are generated immediately as issues occur, keeping 5G applications protected. Warnings (or alerts) can be used to instruct the 5G application to revert to a “Safe Mode”. This is especially important in transport (e.g., robotaxis), manufacturing (teleoperation of remote equipment), and other industrial use cases. Furthermore, a network orchestrator could easily subscribe to these warnings and, after receiving a high volume of them, send self-healing commands to the affected 5G RAN sites.
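A toy sketch of the orchestrator side of that flow; the site IDs, burst size, window, and subscription mechanism are all hypothetical, since the article doesn’t specify them:

```python
import time
from collections import deque

class WarningSubscriber:
    """Counts anomaly warnings per RAN site in a sliding time window and
    signals when a burst warrants a self-healing command (toy sketch)."""
    def __init__(self, burst_size=10, window_s=1.0):
        self.burst_size = burst_size
        self.window_s = window_s
        self.events = {}  # site_id -> deque of warning timestamps

    def on_warning(self, site_id, now=None):
        now = time.monotonic() if now is None else now
        q = self.events.setdefault(site_id, deque())
        q.append(now)
        # Drop warnings that have aged out of the window
        while q and now - q[0] > self.window_s:
            q.popleft()
        # True -> enough warnings in the window to trigger self-healing
        return len(q) >= self.burst_size
```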


It turns out that both the CPRI and eCPRI interfaces carry a wealth of information – it’s just a matter of having the high-performance AI-based algorithms to extract and process their hidden secrets.

