Sponsored Post
The rapid advancements in generative AI (GenAI) have captured the attention and imagination of industries worldwide, and the telecommunications sector is no exception. As telcos grapple with GenAI’s promise and potential, it’s crucial to separate hype from reality and chart an appropriate strategic course.
During our research and advisory work, we’ve had conversations with telecom operators worldwide, independent software vendors, global system integrators, hyperscalers, and GenAI solution providers. We’ve realized that there are common misconceptions surrounding GenAI at telcos. This article explores the top five misconceptions and suggests best practices for adopting and leveraging this transformative technology.
Misconception 1: Telcos need to pre-train their own large language model (LLM)
With headlines of major Tier-1 telcos investing in or partnering with frontier model builders, and a few telcos proudly announcing they have trained their own “telco LLM,” many operators mistakenly believe that they must develop and train their own massive LLM from scratch to harness the power of GenAI. However, this is a daunting undertaking that requires immense computational resources, datasets, and expertise. While a select few telcos may have the scale and capabilities to pursue this path, for most, partnering with established frontier model builders, leveraging open-source models, and focusing on domain-specific adaptation (including fine-tuning) is a more feasible approach.
Misconception 2: Fine-tuning pre-trained foundation models with telco data is a simple, sufficient solution
While fine-tuning pre-trained LLMs (private or public) or multi-modal foundation models (FMs) with telco-specific data is orders of magnitude cheaper and more accessible than creating a model from scratch, achieving adequate model performance may not be straightforward. Effective fine-tuning is still a complex process that requires picking the right base FM, carefully curating and formatting datasets, and a strong understanding of fine-tuning processes. Moreover, fine-tuning alone may not suffice; techniques like retrieval-augmented generation (RAG) and prompt engineering are often necessary to further inject telco domain knowledge, reduce hallucinations, and handle real-time or constantly changing telco data. All these point to the importance of having a data modernization strategy that facilitates making telco data available for fine-tuning models and supporting AI and GenAI initiatives.
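To make the RAG technique mentioned above concrete, here is a minimal sketch of how retrieval grounds a model’s prompt in telco documents. The keyword-overlap retriever and the three-entry knowledge base are simplified illustrations (production systems typically use embedding-based vector search), and the final model call is omitted since provider APIs vary:

```python
import re

def tokens(text: str) -> set[str]:
    """Lowercase word tokens, punctuation stripped (toy tokenizer)."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def score(query: str, doc: str) -> int:
    """Keyword overlap between query and document (stand-in for vector similarity)."""
    return len(tokens(query) & tokens(doc))

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents most relevant to the query."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Assemble a prompt that grounds the model's answer in retrieved context."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# Hypothetical telco knowledge base entries
kb = [
    "5G SA core supports network slicing for enterprise customers.",
    "Roaming charges apply outside the home network unless a travel pass is active.",
    "Fiber installation appointments can be rescheduled via the self-service portal.",
]
prompt = build_prompt("How do I reschedule my fiber installation?", kb)
```

Because the relevant document is injected at query time, the model can answer from current telco data without retraining, which is why RAG complements rather than replaces fine-tuning.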
Misconception 3: A single foundation model can meet all of a telco’s GenAI needs
The notion that a single, monolithic foundation model can cater to the diverse GenAI requirements of a telco is another misconception. While frontier models from builders like OpenAI and Anthropic lead performance benchmarks across a wide variety of tasks, there is a healthy ecosystem of increasingly competitive open-source LLMs available on sites like Hugging Face. Given the rapid pace of GenAI development and the benefits of specialized models, telcos will likely employ a portfolio of LLMs/FMs tailored to specific use cases and targeted cost constraints. Other industries find that smaller models, especially when fine-tuned, can be a good fit for domain-specific tasks. These smaller models are more compute-, memory-, and energy-efficient and more responsive (lower latency). Embracing a modular and adaptable approach, allowing for a diverse model catalog, is vital in navigating this evolving landscape.
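A portfolio approach implies some routing logic that matches each request to an appropriate model. The sketch below illustrates one way to do this; the model names, task labels, and per-token costs are illustrative assumptions, not recommendations:

```python
# Hypothetical model portfolio: a frontier model for hard tasks,
# a small fine-tuned model for cheap, domain-specific work.
MODELS = {
    "frontier": {"cost_per_1k_tokens": 0.03, "tasks": {"reasoning", "code"}},
    "small-finetuned": {"cost_per_1k_tokens": 0.002, "tasks": {"classification", "summarization"}},
}

def route(task: str, budget_per_1k: float) -> str:
    """Pick the cheapest model that covers the task within the cost budget."""
    candidates = [
        (spec["cost_per_1k_tokens"], name)
        for name, spec in MODELS.items()
        if task in spec["tasks"] and spec["cost_per_1k_tokens"] <= budget_per_1k
    ]
    if not candidates:
        raise ValueError(f"No model fits task={task!r} within budget")
    return min(candidates)[1]  # lowest cost wins
```

For example, `route("classification", 0.01)` selects the small fine-tuned model, while `route("reasoning", 0.05)` falls through to the frontier model; a request that fits no model within budget raises an error rather than silently overspending.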
Misconception 4: GenAI models must be private and on-premises to be secure and compliant
We hear telcos speak of sovereign AI, the need for private hosting, and the importance of on-premises model serving to ensure data security and compliance. While there are situations that demand on-premises hosting of models for training, fine-tuning, or inferencing against highly sensitive data, telcos should carefully evaluate their options before discounting public clouds or public GenAI services.
Many telcos already rely on public cloud infrastructure for critical workloads — many have their BSS and OSS systems running on public clouds. Yet others depend on SaaS solutions for customer relationship management, self-service portals, and customer knowledge base systems, many of which run on top of public clouds and hold critical business and customer PII data. With the proper governance measures and trusted partners, telcos can securely utilize public foundation models and cloud-based GenAI services, adhering to the same rigorous standards applied to other telco data.
Misconception 5: GenAI will displace traditional predictive AI techniques
Lastly, there is a perception that GenAI will render traditional predictive AI techniques obsolete. Telcos should remember that they already employ traditional machine learning and deep learning in their networks. Many of today’s sentiment analysis and next-best-action solutions for telco CRM systems already deploy machine learning/deep learning. Network optimization in both wireless and wireline networks likewise employs predictive AI techniques — convolutional neural networks (CNNs), recurrent neural networks (RNNs), and other techniques have been successful.
Telcos should view GenAI as a complement to existing AI approaches rather than a replacement. Traditional techniques will continue to play a valuable role, particularly in scenarios where well-defined problem scopes and structured data are prevalent.
Best Practices and Recommendations
While there are numerous other misconceptions, the above are the five we most commonly encounter. And since we’re sharing our observations, we’ll also point out best practices we’ve observed. To improve the odds of succeeding with GenAI, we recommend that telcos do the following:
- Leverage external GenAI expertise when appropriate: Don’t hesitate to tap into the knowledge of foundation model providers, hyperscalers like AWS, Microsoft Azure, and Google Cloud Platform, as well as other industry experts to accelerate your GenAI journey.
- Understand data sources and work with knowledgeable partners: Gain a deep understanding of your data landscape and engage with your ISVs and GSIs, who likely have a strong knowledge of the data sources that power your OSS and BSS and know how best to extract that data for use in GenAI workflows.
- Focus on building robust data pipelines and processes: Invest (with partners if appropriate) in establishing well-governed data pipelines and processes supporting GenAI workloads effectively. GenAI needs data to be effective, and the key is to ensure you have a flexible pipeline that can provide data in the most appropriate form for consumption by different models or GenAI workflows. For instance, transforming the data for RAG versus fine-tuning will require different workflows. Similarly, inserting reinforcement learning (from human feedback or other techniques) into the flow requires reshaping the data path to accommodate that feedback.
- Avoid locking into a single model; prioritize flexibility: Embrace a model-agnostic approach, allowing for experimentation and iteration as the GenAI ecosystem evolves. Telco ISVs and even hyperscaler partners have built flexible workflows that can accommodate both proprietary partner FMs as well as open-source FMs. For example, Netcracker, in their GenAI Telco Solution, can accommodate a range of different model providers and integrate with multiple hyperscale GenAI platforms. Telcos should recognize that the number of feasible foundation models will grow: OpenAI, Anthropic, Cohere, Mistral, Meta Llama 2, Databricks DBRX, and other closed and open models will continue to proliferate.
- Invest incrementally, learn continuously, and proceed judiciously: Encourage your employees and partners to adopt an incremental investment strategy, emphasizing continuous learning and judicious implementation to minimize risk and maximize value capture. No telco can sit back and wait for the GenAI space to gel sufficiently before engaging. It’s important to engage early without over-rotating into GenAI investments, maintaining a diverse portfolio of technology investments across AI techniques and general digital transformation (virtualization, cloudification, data modernization). An example of an operator already on their GenAI journey is T-Mobile US, which is incrementally working on bringing GenAI into customer care and partner onboarding as part of their wholesale business, learning while implementing.
- Hold GenAI to task: Remember that GenAI adoption comes with potential risks and challenges. Ensure that your GenAI teams acknowledge issues such as model bias, lack of explainability, hallucinations, IP and copyright infringement, and the need for human oversight, and address these in any solutions they build.
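The data-pipeline point above — that RAG and fine-tuning consume data in different shapes — can be sketched with a toy example. The record fields and the prompt/completion JSONL layout are illustrative assumptions, not a standard schema:

```python
import json

# Hypothetical telco support records (field names are assumptions)
records = [
    {"question": "How do I enable roaming?", "answer": "Activate a travel pass in the app."},
    {"question": "What is an eSIM?", "answer": "A digital SIM embedded in the device."},
]

def to_finetune_jsonl(recs: list[dict]) -> str:
    """Shape records as one prompt/completion JSON object per line,
    a common layout for supervised fine-tuning datasets."""
    return "\n".join(
        json.dumps({"prompt": r["question"], "completion": r["answer"]}) for r in recs
    )

def to_rag_chunks(recs: list[dict]) -> list[str]:
    """Flatten each record into a text chunk suitable for indexing
    and retrieval at query time."""
    return [f"Q: {r['question']} A: {r['answer']}" for r in recs]
```

The same source data thus feeds two different workflows, which is why a flexible, well-governed pipeline matters more than any single transformation.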
Amidst these considerations, it’s worth highlighting the importance of establishing an intermediary GenAI platform. Such a platform can provide a secure abstraction layer between telco data and foundation models, enabling data isolation, governance, and model interchangeability. Moreover, it can facilitate the integration of crucial GenAI components like RAG and knowledge graphs, empowering telcos to harness the full potential of GenAI without undue complexity or vendor lock-in. That same platform can likewise implement controls for compliance, security, and other guardrails. This is an approach that we’re seeing from leading hyperscalers, data platforms like Databricks and Snowflake, and telco ISVs like Netcracker. Our previous article on the Netcracker GenAI Telco Platform provides more details on their multi-model and flexible pipeline approach.
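A minimal sketch of such an intermediary layer might look like the following: a gateway that keeps model backends interchangeable behind one interface and applies a guardrail (here, a toy phone-number redaction) before any prompt leaves the telco. The class, the regex, and the stand-in backend are illustrative assumptions, not a vendor’s actual design:

```python
import re
from typing import Callable

# Toy guardrail: redact US-style phone numbers before prompts reach a model.
PHONE = re.compile(r"\b\d{3}[-.]?\d{3}[-.]?\d{4}\b")

def redact(prompt: str) -> str:
    """Strip phone numbers so PII never leaves the platform boundary."""
    return PHONE.sub("[REDACTED]", prompt)

class GenAIGateway:
    """Routes requests to interchangeable model backends behind one interface,
    applying guardrails centrally rather than per application."""

    def __init__(self) -> None:
        self.backends: dict[str, Callable[[str], str]] = {}

    def register(self, name: str, fn: Callable[[str], str]) -> None:
        self.backends[name] = fn

    def complete(self, backend: str, prompt: str) -> str:
        return self.backends[backend](redact(prompt))

gateway = GenAIGateway()
gateway.register("echo", lambda p: p)  # stand-in for a real model client
out = gateway.complete("echo", "Customer 555-123-4567 asked about roaming.")
```

Because applications call the gateway rather than a specific provider, swapping or adding models is a registration change, and compliance controls live in one place.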
GenAI presents a transformative opportunity for telcos to redefine customer experiences, streamline operations, and unlock new revenue streams. By dispelling common misconceptions and adopting a strategic, informed approach to GenAI adoption, telcos can position themselves at the forefront of this exciting frontier. The journey ahead may be complex, but with the right mindset, partnerships, and technological foundations, telcos can chart a path to success in this new age of GenAI.
Note: Sponsors do not have any editorial control over article content, and the views represented herein are solely those of AvidThink LLC.