Balancing Innovation & Regulation: AI Governance as the Key to TME's Transformation

Artificial Intelligence (AI), machine learning, large language models (LLMs) – these three technologies are revolutionizing businesses, particularly in the telecom, media and entertainment (TME) industry. As we’ve seen over the last year in particular, they have advanced efficiency and created unprecedented opportunities in today’s markets. From data-informed decision making to cost optimization strategies, the benefits of AI in the TME industry are well documented – it’s clear that AI and LLMs have become a crucial part of any successful business.

Yet this technological revolution has been met with consumer resistance and uncertainty about its impact on our daily lives, paving the way for the introduction of new rules and regulations. To address these challenges and increase confidence in AI, organizations, academic institutions and governmental bodies worldwide are actively seeking ways to mitigate the complexities introduced by AI adoption. Consequently, a wave of new legislation has emerged across various regions, with Europe often at the forefront of regulatory measures. These laws are poised to significantly impact a range of industries, including TME, by setting precedents for AI governance and use.

These advancements in both the technological and the regulatory landscape beg the question: How can TME businesses balance the myriad opportunities AI opens up with regulatory compliance?

Below, we look at one key regulation affecting TME companies today and provide a framework for balancing the opportunities and challenges that AI presents.

Adhering to the Digital Services Act: Impacts & Implications

One of the primary regulations that will significantly impact TME is the Digital Services Act (DSA), which mandates transparency, accountability and thorough content regulation. Addressing the threat of AI-generated misinformation such as deepfakes – highlighted by the World Economic Forum’s Global Risks Perception Survey 2023-2024 as a major global risk – the DSA entered into force in November 2022 and has applied across the EU since February 2024. It obliges TME companies to adopt new content moderation protocols and to disclose their data and network management strategies.

Under the DSA, entertainment platforms must not only address the creative applications of deepfakes but also mitigate associated risks such as misinformation and copyright infringement. This necessitates a comprehensive content moderation framework and robust authorization processes for how the technology is applied. Similarly, telecom companies must increase transparency around network management, data handling and security measures, with obligations that also extend to moderating harmful content.

Social media firms are required to tackle the threat of fake accounts and report on the actions taken; however, there is growing concern over the effectiveness of their current reporting standards. The DSA emphasizes improving both the quality of reports and the measures employed to manage risks.

The DSA encourages platforms to implement measures to combat deepfakes, favoring robust authorization processes over deepfake detection systems, which can be resource-intensive and technologically challenging. Additionally, European law struggles to keep pace with technological advancements and evolving criminal tactics, complicating the establishment of new regulatory frameworks around AI-powered deepfake technology.

The proposed AI regulatory framework within the DSA demands that deepfake content be identifiable as such, confronting the industry with the task of accurately detecting and labeling increasingly sophisticated AI-generated and manipulated content.

So, while the EU's Digital Services Act is a significant step towards addressing the challenges posed by deepfakes and misinformation, it is clear that a multi-faceted approach is needed. This includes not only regulatory measures but also technological advancements, industry cooperation and public awareness.

As the digital landscape continues to evolve, companies must go beyond regulatory compliance to ensure the integrity and safety of their platforms and to maintain the trust and loyalty of their consumers.

Governance as an Answer

As we have already seen, AI is a sophisticated technology providing numerous benefits to industries and consumers alike; however, as with any technological revolution, it brings new complications and concerns, requiring thorough consideration and conscientious technological governance.

If TME companies are to address consumer concerns in the age of AI while navigating the evolving regulatory landscape, placing AI governance at the center of their efforts may provide a harmonized path forward.

AI governance involves a range of tasks designed to ensure the ethical, responsible and legal use of AI, and a framework that puts governance at the core of a company’s AI efforts provides an initial outline for managing them.

Conclusion

As AI technology becomes increasingly embedded in industries like TME, its myriad benefits are tempered by pressing concerns over data privacy, misinformation and deepfakes. The introduction of the DSA marks a decisive step towards robust regulation, demanding greater transparency and accountability from companies leveraging AI technologies.

By prioritizing AI governance, TME companies can reconcile consumer concerns with innovation while simultaneously advancing safe, ethical AI applications and remaining compliant with emerging regulations.

Moving forward, holistic approaches that integrate cutting-edge technology, cross-industry collaboration and public education will be critical in fostering a digital ecosystem where the advantages of AI are ethically, responsibly and legally realized.
