So far in this series I have attempted to steer you away from trying to locate sources of interference and provided a framework for troubleshooting to test for and isolate problems stemming from noise or interference. In the next few articles I want to demonstrate examples of how this procedure can be put to use. So for the first example, allow me to spin a terrifying yarn… a true story of a WISP that experiences an unexplained sector outage at the start of a holiday weekend.
To review, the term ‘noise floor’ is defined in Wikipedia as:
“The measure of the signal created from the sum of all the noise sources and unwanted signals within a measurement system, where noise is defined as any signal other than the one being monitored.”
To paraphrase, the noise floor is the murky, nasty stuff you want your operating frequencies to stay well above. Ideally, you want the noise floor to hover around -100 dBm or lower. Always. The higher the noise floor, the stronger your legitimate signals must be to guarantee optimal performance of your links. So what happens when the noise floor rises to your network’s detriment?
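The relationship above is simple dB arithmetic: signal-to-noise ratio is the received level minus the noise floor. A minimal sketch (my own illustration, not from the article) showing why a rising noise floor eats directly into link margin:

```python
# Illustrative sketch: the margin between a received signal and the
# noise floor is a simple subtraction when both are expressed in dBm.

def snr_db(signal_dbm: float, noise_floor_dbm: float) -> float:
    """Signal-to-noise ratio in dB for levels given in dBm."""
    return signal_dbm - noise_floor_dbm

# A healthy link: RSS of -70 dBm over a -100 dBm noise floor.
print(snr_db(-70.0, -100.0))  # 30 dB of margin

# The same RSS after the noise floor rises 20 dB to -80 dBm.
print(snr_db(-70.0, -80.0))   # only 10 dB of margin
```

The numbers are hypothetical, but they mirror the scenario that follows: the radios did not change, yet the usable margin collapsed because the floor came up underneath them.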
Observe the spectrum analysis below. It is a snapshot from a dynamic baseline of an access point operating at 924.8 MHz:
As you can see, this sector was already experiencing less-than-ideal noise levels, mainly due to antenna height (70 meters) and its directionality. However, almost all customers were close enough to the tower that the receive signal strength (RSS) on both sides cleared the manufacturer’s recommended margin for noise tolerance (20 dB). Even with fluctuating interference (the red line) up to -65 dBm, average throughput was consistently tested and verified at the rates promised by the WISP’s service agreement.
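That 20 dB margin check can be expressed as a one-line helper. The function name and values below are my own illustration (not the vendor’s tooling), assuming the worst observed interference peak is the level the RSS must clear:

```python
# Hypothetical helper: does a CPE's RSS keep the recommended margin
# over the worst interference peak seen in the baseline analysis?

RECOMMENDED_MARGIN_DB = 20.0  # margin cited in the article

def link_has_margin(rss_dbm: float, worst_interference_dbm: float,
                    margin_db: float = RECOMMENDED_MARGIN_DB) -> bool:
    """True if RSS clears the interference peak by at least margin_db."""
    return (rss_dbm - worst_interference_dbm) >= margin_db

# Interference peaking at -65 dBm: an RSS of -45 dBm or better holds.
print(link_has_margin(-44.0, -65.0))  # True  (21 dB of clearance)
print(link_has_margin(-50.0, -65.0))  # False (only 15 dB of clearance)
```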
Now for the scary stuff.
Scale: Single 90º sector at 924.8 MHz affecting ~20 customers within 2 km of 70m 4-sector tower.
Internal factors: No known changes or additions to the WISP customer’s wireless network.
External factors: No known tower (or other) construction projects in customer’s coverage area.
Timing: Interesting but ultimately inconclusive, the abrupt noise level increase was recorded shortly after midnight on Canada Day (July 1).
| Step | Procedure | Result |
| --- | --- | --- |
| 1 | Baseline analysis | 20 dB increase in noise floor in upper 900 MHz band |
| 2 | Configuration changes | No changes to hardware or software within the 24-hour period in which the problem arose |
| 3 | Customer feedback | Unnecessary; impact on affected customers already known (unable to connect) |
| 4 | Radio configuration | Configuration correct |
| 5 | Radio operation and monitoring | AP radio operating nominally; monitoring shows RSS flatlined for affected CPEs, with noise levels through the roof |
| 6 | Hardware inspection and component swapping | All hardware visually inspected and components swapped with spares; no effect |
| 7 | Parallel system testing | Spare radio with panel antenna attached detected similar noise levels at the original antenna height (70 m). Noise floor levels decreased slightly at each 10 m step down to 30 m. |
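Step 1 above, comparing the live noise floor against a dynamic baseline, is straightforward to automate. The sketch below is my own illustration (sample values and the alert threshold are assumptions, not from the article), showing how a 20 dB step change would be flagged:

```python
# Minimal sketch of step 1 (baseline analysis): flag a sector when its
# current noise-floor readings jump well above the rolling baseline.

from statistics import mean

def noise_floor_jump_db(baseline_dbm: list[float],
                        current_dbm: list[float]) -> float:
    """Average rise of the current noise floor over the baseline, in dB."""
    return mean(current_dbm) - mean(baseline_dbm)

baseline = [-98.0, -97.5, -99.0, -98.5]  # typical quiet readings
current = [-78.0, -79.5, -77.0, -78.5]   # after the outage began

jump = noise_floor_jump_db(baseline, current)
print(f"Noise floor rose {jump:.1f} dB")
if jump >= 10.0:  # alert threshold (my choice, not the article's)
    print("ALERT: investigate this sector")
```

In practice the baseline would come from the same dynamic spectrum-analysis history referenced earlier in the article, sampled per sector.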
CONCLUSION AND SOLUTION
This real-life example describes every wireless operator’s worst nightmare in all-too-vivid detail. And as nightmares usually go, deploying the solution was no easier than admitting the cause of the problem. Given the affected sector’s height and the fact that it was pointing into a densely populated area, it was concluded that the raised noise floor was caused by a newly installed, unknown system transmitting from somewhere in an approximate 250 km² area south of the tower. As such, the sector’s channel was retired and the remaining three sectors were rotated to compensate. Service to 98% of customers was restored with similar or better performance.
In part four I will use the same template to illustrate how to detect, identify and solve interference issues commonly found local to customer installations.
Author Profile: Tim Preston is a Senior Network and Systems Analyst with experience dating back to 1998. He started on the front lines of technical support for a large northern Ontario Internet service provider while earning his diploma in Computer Programming and Network Analysis. After being hired by a major wireless broadband radio manufacturer, Tim moved to Toronto in 2001. In 2009 Tim started Haven IT Consulting. Examples of work he has done for his clients include providing management and troubleshooting services to wireless ISPs, interconnecting retail outlets for an equipment supplier, and providing technical auditing, network design, operations advice, and technical support for various local businesses and network solutions providers.