RTL Technology: Migrating and further developing the existing video-on-demand platform TVNOW
RTL Technology (formerly CBC), the joint production and technology/IT arm of the RTL Germany media group, manages several of the media group’s products, including the channels RTL, VOX, NITRO, n-tv, NOW! and GEO Television. The current joint project with inovex focuses on TVNOW and aims to create a secure and innovative platform to handle upcoming challenges in the TV and entertainment markets. The project’s goals are to move the platform to the cloud, to restructure the topics and teams, and to switch to agile working methods.
Initial situation and target vision
Originally, TVNOW was operated by RTL in its own data centre. The company, however, wanted to make the platform more innovative, more resilient, and capable of deploying new features more quickly.
The data centre location did not allow elastic scaling, and adding new services invariably involved lengthy coordination and approval processes. Only a few staff members had the requisite knowledge, and the strict separation between the development and operations departments made communication difficult.
The new strategy included replatforming using an external cloud partner, AWS, a factor which required additional changes to working processes. The vision was a setup with cross-functional, autonomous teams, each of which could independently advance the development of their own area.
At its core, the “DevOps” concept describes a corporate culture free of hierarchies and silo thinking. Instead, it focuses on effective collaboration between development and operations in order to rapidly implement stable, high-quality software.
These changes also necessitated a rethink of the processes and team structures. The teams’ capabilities were pooled and the platform aligned with them. At the start of the project, technical staff were reassigned to specialist teams such as content, payment, and metadata. The product owners – experts in their respective professional fields – also serve as key liaisons between the teams.
New programming languages were also introduced: Kotlin on the backend and TypeScript on the frontend.
The new, sustainable strategy is designed to remain manageable and expandable for years to come.
Team structure and modified work culture
When TVNOW (formerly RTL NOW) was founded twelve years ago, a small team was responsible for developing and operating the platform. In 2018, at the beginning of the replatforming project, the number of team members was scaled up. Today, there are more than ten teams with at least five employees per team. In addition to the teams who maintain the end-customer interfaces (apps and web), there are also domain-specific teams which focus on areas such as data, content, and customers.
The restructuring of the development division into mainly cross-functional teams with dedicated remits has changed the company’s working methods and culture. This has led to a paradigm shift across the entire product. This process, which took about a year, included raising awareness of the new team culture, integrating management, and modifying collaboration practices.
To this end, agile methods were called into service; implementing scaled Scrum across all the teams tremendously increased their adaptability and capacity for innovation.
With inovex we have found an excellent partner to move our system to the cloud and improve it sustainably. With the help of the experts from inovex, we were able to further develop our video-on-demand platform in an innovative and fail-safe manner. At the same time, they supported a change in our team structure that broke up old silos and enabled agile, cross-domain collaboration.
Christian Masjosthusmann
Senior Chapter Lead Native, RTL Technology

The underlying technology – the migration strategy
The TVNOW apps (Android, iOS, FireTV, etc.), the system behind them, the web player, and the Smart TV Client place different requirements on the underlying architecture. Among other things, the subscription management and payment functions needed to be re-implemented. At the same time, the system had to be prepared to handle peak loads. It also needed to be able to run smoothly when the development teams were integrating multiple deployments into the production environment on a daily basis. This meant ensuring the resilience of the individual systems and creating backup options which would continue to provide complete functionality to end customers in the event of an internal fault.
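The backup requirement described above can be sketched as a simple fallback wrapper: serve the last known-good response when a primary service call fails. This is an illustrative sketch only; the function and type names (`withFallback`, `fetch` callbacks, `Episode`) are assumptions, not the project's actual code, and a real system would back the cache with something more durable than memory.

```typescript
// Illustrative sketch: wrap a primary service call with a cached fallback so
// that end customers keep full functionality if an internal service fails.

type Episode = { id: string; title: string };

// In-memory cache standing in for a real backup store (CDN, replica, etc.).
const episodeCache = new Map<string, Episode[]>();

async function withFallback<T>(
  key: string,
  cache: Map<string, T>,
  primary: () => Promise<T>,
): Promise<T> {
  try {
    const result = await primary();
    cache.set(key, result); // refresh the backup on every successful call
    return result;
  } catch (err) {
    const cached = cache.get(key);
    if (cached !== undefined) return cached; // degrade gracefully
    throw err; // no backup available: surface the fault
  }
}

// Usage: the second call fails internally but still serves the cached data.
async function demo(): Promise<Episode[]> {
  await withFallback("show-42", episodeCache, async () => [
    { id: "e1", title: "Pilot" },
  ]);
  return withFallback("show-42", episodeCache, async () => {
    throw new Error("backend unavailable");
  });
}
```

The same pattern generalises to any read path where stale data is preferable to an error page.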
All these factors had to be taken into account in the migration strategy.
The first step was to switch to standard components, such as Docker containers scheduled via AWS ECS with the Fargate launch type. This allowed the benefits of the cloud to be leveraged even during the development process and ensured stable parallel operation.
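For a JVM service of the kind described here, the containerisation step amounts to little more than a small Dockerfile. The one below is a hypothetical sketch; the base image, JAR path, and port are assumptions and not taken from the project.

```dockerfile
# Hypothetical Dockerfile for a Kotlin/Spring Boot service; image and paths are illustrative.
FROM eclipse-temurin:17-jre
COPY build/libs/service.jar /app/service.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "/app/service.jar"]
```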
The strategy involves modelling parts of the existing system in separate services, thus dividing up the existing monolith piece by piece. In addition to modelling existing functions, it was essential to ensure that new functions could be developed, thus safeguarding the product’s continuing development. The aim was to ensure that migration and further development can be parallelised as much as possible, rather than being carried out sequentially.
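Dividing up a monolith piece by piece is commonly known as the strangler-fig pattern, and its core can be sketched as a routing table: extracted services claim path prefixes one at a time, while everything else still falls through to the monolith. The prefixes, service names, and URLs below are purely illustrative assumptions.

```typescript
// Sketch of strangler-fig routing: per path prefix, decide whether a request
// goes to the old monolith or to an already-extracted service.

type Target = { name: string; baseUrl: string };

const monolith: Target = {
  name: "monolith",
  baseUrl: "https://legacy.example.internal",
};

// Extracted services claim prefixes one by one; unclaimed paths still hit the
// monolith, so migration and further development can proceed in parallel.
const extracted: Array<{ prefix: string; target: Target }> = [
  {
    prefix: "/payment",
    target: { name: "payment-service", baseUrl: "https://payment.example.internal" },
  },
  {
    prefix: "/metadata",
    target: { name: "metadata-service", baseUrl: "https://metadata.example.internal" },
  },
];

function route(path: string): Target {
  const match = extracted.find((e) => path.startsWith(e.prefix));
  return match ? match.target : monolith;
}
```

Each newly extracted service simply adds one entry to the table, leaving all other traffic untouched.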
This requires several technical changes: migrating from PHP to Kotlin/Spring Boot, and using GitLab CI instead of Jenkins. It also means that the development teams manage their own deployments via Terraform instead of deploying new features to environments outside their own areas of responsibility.
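The pipeline shape this implies can be illustrated with a minimal, hypothetical `.gitlab-ci.yml` fragment: build the Kotlin service, package it as a Docker image, and let the owning team apply its own Terraform. Stage names, images, and scripts are assumptions, not the project's actual configuration.

```yaml
# Hypothetical pipeline sketch; stages, images, and paths are illustrative.
stages:
  - build
  - package
  - deploy

build-service:
  stage: build
  image: gradle:8-jdk17
  script:
    - gradle build   # compile and test the Kotlin/Spring Boot service

package-image:
  stage: package
  image: docker:24
  services:
    - docker:24-dind
  script:
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"

deploy:
  stage: deploy
  image: hashicorp/terraform:1.7
  script:
    - terraform init
    - terraform apply -auto-approve   # the owning team manages its own infrastructure
  environment: production
```

Because each team owns its pipeline and Terraform state, a deployment never has to cross team boundaries.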
Benefits
inovex introduced several best practices which improved not only the stability of the system but also the cloud migration and payment processes.
The platform now boasts clean interfaces, the individual services are scalable, and new features can be easily developed. The way the teams work has also changed, with incremental methods resulting in faster modifications and results.
For example, heavy load spikes in TV operations, such as those which occur during football matches, can be handled much more effectively. Separating the services and abstracting them into Docker containers allows new technologies and languages to be used.
The teams are now completely responsible for their own services in accordance with the maxim: “You build it, you run it, you love it.”