The combination of digital media, the internet and connected devices has created an irreversible one-to-one relationship between the consumer and content. It has changed the way people watch TV, perhaps more than any other technical development since the dawn of the medium. As an extension of this sea change, the amount of media produced and consumed today is far greater than at any time in television history. Unlike traditional TV platforms, OTT offers a medium for all of this content, both short-form and long-form. While Netflix and Hulu hold closer to the traditional long-form model of TV content, the aforementioned one-to-one relationship really centers on shorter-form content viewed on mobile devices. With OTT, there’s an entire new universe for broadcasters to explore – and an inviting outlet for content producers that are new to the game.

However, with the production and delivery of all this content comes an important responsibility, and even an expectation, for professionals to deliver high-quality content. The very high-density, geographically dispersed OTT landscape means staying on top of a significantly larger number of signals and locations than terrestrial, cable and satellite systems typically deliver. How does the OTT service provider effectively monitor, analyze and troubleshoot all of these streams from headend to delivery point – with the understanding of the greater challenge that comes with quality assurance out to the last mile?
The legacy approach to quality of service (QoS, covering the service provider’s infrastructure) and quality of experience (QoE, covering what the consumer receives) is beginning to fade. In the world of digital TV, the multitude of streams – and the amount of data that comes with them – makes point-based monitoring with purpose-built components a costly, time-consuming and highly complex endeavor. This is amplified in the OTT space, where such an endeavor would be near impossible given the high-density, geographically dispersed, one-to-one nature of the beast. Cloud-based monitoring aims to solve these problems, and it has evolved quickly over the past 24 months as more suppliers bring viable solutions to market. Setting up an architecture, and later scaling it, is as simple as understanding the components, where and how to deploy them, and how to tie this into the existing IT backbone. A Qligent Vision system, for example, generally requires the following:
• A robust network with a minimum of 64 kb/s of bandwidth
• The cloud, commercial off-the-shelf servers, VMs or similar platforms for content aggregation
• Networked and/or virtualized probes (or IoT-enabled end devices) that are remotely software-definable and globally deployable – all communicating performance data to the central server(s)
• Browser-based devices for operators and engineers to view and analyze all performance data populated through a software program
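The component list above can be sketched as a minimal probe-to-aggregator loop. This is an illustrative sketch only – the class names, report fields and alert threshold are assumptions for the example, not part of the Qligent Vision API:

```python
import json
import time

class StreamProbe:
    """A remotely deployed probe that samples stream health at one location."""

    def __init__(self, probe_id, location):
        self.probe_id = probe_id
        self.location = location

    def sample(self, bitrate_kbps, packet_loss_pct):
        # In a real deployment these values would come from measuring the
        # received stream; here they are passed in for illustration.
        return {
            "probe_id": self.probe_id,
            "location": self.location,
            "timestamp": time.time(),
            "bitrate_kbps": bitrate_kbps,
            "packet_loss_pct": packet_loss_pct,
        }

class CentralAggregator:
    """Central server that collects probe reports and raises simple alerts."""

    def __init__(self, min_bitrate_kbps=64):
        self.min_bitrate_kbps = min_bitrate_kbps
        self.reports = []

    def ingest(self, report_json):
        report = json.loads(report_json)
        self.reports.append(report)
        # Flag availability issues as they occur.
        if report["bitrate_kbps"] < self.min_bitrate_kbps:
            return f"ALERT: {report['probe_id']} below minimum bandwidth"
        return "OK"

# One probe per monitored location, all feeding the same aggregator.
aggregator = CentralAggregator()
probe = StreamProbe("probe-eu-01", "Frankfurt edge")
status = aggregator.ingest(json.dumps(probe.sample(48, 2.5)))
print(status)  # ALERT: probe-eu-01 below minimum bandwidth
```

In practice the probes would push these JSON reports over the network to the central server(s), while the browser-based clients query the aggregated data.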
Most OTT service providers have, to date, integrated monitoring capabilities with a partnering CDN. While effective to a degree, this approach provides neither the broader scope of delivery nor the feedback loop around QoS and, more importantly with OTT, QoE. Given the intimacy of this burgeoning one-to-one relationship between consumers and their mobile devices, service providers get one shot at first impressions – and they have plenty of competition. Without monitoring and analytics in the delivery and viewer domains, these service providers have little to no real insight into the consumer’s QoE – and therefore no way to ascertain whether they are losing viewers, or how to reverse such trends if they are. If the service provider fails to deliver on quality, viewers will turn to a competing service.

Monitoring and analysis in the cloud, whether handled on-premises or outsourced to a managed service focused strictly on those tasks, is the most effective way to minimize churn – and earn repeat viewers.
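One simple way to quantify viewer-domain QoE is a rebuffering ratio computed from player session events. The event format and churn threshold below are illustrative assumptions for the sketch, not a vendor-defined metric:

```python
# Illustrative sketch: estimating viewer-domain QoE from player events.
# Each event is a (state, duration_seconds) pair reported by the player.

def rebuffer_ratio(events):
    """Fraction of a session spent stalled, from (state, seconds) pairs."""
    playing = sum(d for state, d in events if state == "playing")
    stalled = sum(d for state, d in events if state == "stalled")
    total = playing + stalled
    return stalled / total if total else 0.0

def churn_risk(events, threshold=0.02):
    """Crude flag: sessions stalled more than 2% of the time (an assumed
    cutoff for this example) are treated as at risk of churning."""
    return rebuffer_ratio(events) > threshold

session = [("playing", 580.0), ("stalled", 20.0)]  # 20 s of rebuffering
print(round(rebuffer_ratio(session), 4))  # 0.0333
print(churn_risk(session))  # True
```

Aggregating such per-session metrics across regions and devices is what gives the provider the missing insight into whether, and where, viewers are being lost.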
By moving to a cloud monitoring and analysis system, OTT service providers can locate and connect probes that monitor all elements of OTT streaming performance. These remotely deployed devices measure how well the monitored streams are being received across the distribution and delivery chain, and automatically return that data to a centralized server for cross-correlation and analysis. This means staying on top of service availability issues as they occur via system alerts, and recognizing concerning performance trends through detailed data analysis.

An intelligent – and popular – entry point for many OTT service providers, when it comes to cloud-based monitoring, falls in the compliance category. In a cloud architecture, a software-defined compliance solution is ideal for recording content across as many locations as possible. This content can be reviewed and analyzed back at the studio, at home or from any location with network connectivity.

The number of monitor points quickly escalates on approaching the last mile. Just as with any media delivery platform, the last mile is the most challenging to intelligently monitor, analyze and understand. At this stage, deploying smaller “micro-probes” based on internet of things (IoT) devices – also networked to the central aggregation servers – will provide insight into the usual last-mile concerns. Some key technical benefits of using a reliable cloud monitoring system in any media distribution operation include:
• Full HLS and DASH IP-layer analysis for QoS
• Raw packet capture for deep packet inspection
• Full transport stream analysis
• Video analysis, including high-resolution formats such as 4K – now proliferating in many OTT consumer services – for QoE analysis
• Analysis of audio layer performance including compliance, quality and language tracks
• Root cause analysis for rapid response to delivery issues
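As a concrete instance of the HLS-layer QoS analysis listed above, a probe can parse a media playlist and flag segments whose duration exceeds the advertised EXT-X-TARGETDURATION (RFC 8216 requires each segment’s rounded duration to stay at or below the target). The playlist here is a made-up sample, and the function is a minimal sketch rather than a full analyzer:

```python
def analyze_hls_playlist(m3u8_text):
    """Minimal HLS media-playlist check: count segments and flag any whose
    rounded duration exceeds EXT-X-TARGETDURATION."""
    target = None
    durations = []
    for line in m3u8_text.splitlines():
        line = line.strip()
        if line.startswith("#EXT-X-TARGETDURATION:"):
            target = int(line.split(":", 1)[1])
        elif line.startswith("#EXTINF:"):
            # "#EXTINF:<duration>,<title>" - keep only the duration.
            durations.append(float(line.split(":", 1)[1].split(",")[0]))
    violations = [d for d in durations if target is not None and round(d) > target]
    return {"segments": len(durations), "target": target, "violations": violations}

playlist = """#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:6
#EXTINF:5.96,
seg0.ts
#EXTINF:6.00,
seg1.ts
#EXTINF:7.50,
seg2.ts
"""
report = analyze_hls_playlist(playlist)
print(report)  # {'segments': 3, 'target': 6, 'violations': [7.5]}
```

A production analyzer would additionally fetch the segments themselves, measure download latency and inspect the transport stream inside each one – the bullet points above hint at how much deeper such tools go.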
While the rollout of a cloud-based monitoring system is fairly simple for an engineer or systems integrator with IT knowledge, these responsibilities can be minimized further as more purely virtualized cloud solutions surface. For example, Qligent’s Vision-OTT platform is 100% virtualized, eliminating everything from hardware procurement and software installation to support and future upgrades.
By using services such as AWS, Azure or Rackspace, OTT service providers can not only deploy, host and manage the monitoring architecture on that service – they also benefit from its highly reliable uptime and redundancy. These services are robust and less prone to failure than a typical IT network in a TV station or video headend, for example.
Another attractive option for many OTT service providers is to offload the actual monitoring, analysis and troubleshooting responsibilities to a managed service provider. These specialized services can provide continuous offsite monitoring, event-based troubleshooting, incident-based and/or periodic analysis, comprehensive reporting, and recommendations to improve and scale services as warranted. Removing this burden from OTT service providers – particularly those with modest internal resources – allows them to more effectively optimize widely dispersed OTT distribution and delivery systems that cross borders, continents and oceans. As more OTT services proliferate, the adoption of these cloud-based philosophies and workflows will surely benefit the entire chain, from production and processing to delivery and consumption, with the scalable and versatile toolset needed to penetrate the enormous signal density of the OTT architecture.