AI Adoption in Media: Navigating the Technical Challenges

The Reality of AI’s Readiness

AI technology, while promising, is not yet fully prepared for production-level deployment in media, particularly in trusted journalism where accuracy is critical. Jonathan Rivers notes, “The technology isn’t ready yet. It is bleeding edge precursor technology, and a lot of it just isn’t ready for real production prime time.” A significant issue is hallucination: instances where AI generates plausible but incorrect information.

For media outlets, even a 1% error rate is unacceptable, as Jonathan emphasizes: “If it is 1% wrong, it is 100% wrong.” This necessitates extensive fact-checking, which can offset AI’s efficiency gains.

Solution: Use AI for lower-risk tasks, such as drafting content or summarizing internal documents, followed by human verification. Robust oversight and fact-checking protocols are essential to maintain credibility.
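One way to enforce that verification step is to keep AI drafts out of the publication path until an editor signs off. The sketch below is illustrative, not a description of any tool named in the article; the `Draft` and `ReviewQueue` names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Draft:
    """An AI-generated draft awaiting human review (hypothetical structure)."""
    text: str
    source: str
    verified: bool = False

class ReviewQueue:
    """Holds AI output until an editor approves it for publication."""

    def __init__(self) -> None:
        self._pending: list[Draft] = []
        self._approved: list[Draft] = []

    def submit(self, draft: Draft) -> None:
        # AI output always enters as unverified.
        self._pending.append(draft)

    def approve(self, draft: Draft) -> None:
        # A human editor, not the model, flips the verified flag.
        draft.verified = True
        self._pending.remove(draft)
        self._approved.append(draft)

    def publishable(self) -> list[Draft]:
        # Only human-verified drafts ever reach publication.
        return [d for d in self._approved if d.verified]
```

The design choice is simply that `verified` can only be set by the approval step, so an AI draft cannot short-circuit its way into print.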

Data Quality and Governance

AI relies heavily on high-quality data, but media companies often grapple with legacy content that is unstructured or inconsistent. Rivers highlights, “No matter how clean you think your data are, they are dirtier than you think.” 

Challenges include inconsistent naming (e.g., “Hewlett Packard” vs. “HP”) and outdated language in archives, which complicates AI training and application. Preparing this data requires significant effort to align it with modern standards.

Solution: The article recommends establishing data governance frameworks to standardize and clean archives. This involves creating a semantic layer to manage synonyms and evolving terminology. AI can assist by automating categorization and flagging inconsistencies, but an initial investment in data preparation is crucial. Rivers advises focusing on “cleaning my data” as a competitive edge.
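A semantic layer can start as simply as a canonical-name map that resolves the synonyms and legacy spellings found in archives, with anything unmapped flagged for human review. This is a minimal sketch under that assumption; the `CANONICAL` table and function names are illustrative, not from the article.

```python
# Hypothetical canonical-name map: raw archive mentions -> standard form.
CANONICAL = {
    "hewlett packard": "HP",
    "hewlett-packard": "HP",
    "hp": "HP",
}

def normalize_entity(name: str) -> str:
    """Map a raw mention to its canonical form; pass unknowns through."""
    key = name.strip().lower()
    return CANONICAL.get(key, name.strip())

def flag_inconsistencies(mentions: list[str]) -> list[str]:
    """Return mentions with no canonical mapping yet, for editorial review."""
    return [m for m in mentions if m.strip().lower() not in CANONICAL]
```

In practice the map grows as editors resolve flagged terms, which is exactly the AI-assisted categorization-plus-human-review loop the article describes.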

Model Reliability

Ensuring consistent and trustworthy AI outputs is a major hurdle, especially when scaling large language models (LLMs). Cornel Stefanache explains, “The biggest problem is that in the 10 experiments that you’re doing, you might get one answer wrong. But that represents 10%.” For media applications, this inconsistency can undermine trust. Current LLMs require techniques like chain-of-thought prompting to improve reliability, but gaps remain.

Solution: The article advocates adopting best practices such as chain-of-thought or tree-of-thought prompting to guide models toward accurate outputs. Partnering with AI vendors, like AscentCore, can provide pre-built tools with these practices embedded, enabling rapid deployment. Stefanache notes, “Our tools are adopting the best practices in LLM interaction… deployed the next day in your infrastructure.”
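The "1 wrong in 10" failure mode can be softened by combining chain-of-thought prompting with self-consistency: sample several completions and keep the most common final answer. The sketch below assumes the model calls happen elsewhere and the responses are handed in as strings; the function names and the `Answer:` convention are illustrative, not AscentCore's implementation.

```python
from collections import Counter

def cot_prompt(question: str) -> str:
    """Wrap a question in a chain-of-thought instruction."""
    return (
        "Answer the question below. Think step by step, then give the "
        "final answer on a line starting with 'Answer:'.\n\n"
        f"Question: {question}"
    )

def majority_answer(responses: list[str]) -> str:
    """Self-consistency: take the most common final answer across samples,
    so a single wrong completion is outvoted by the others."""
    finals = [
        r.rsplit("Answer:", 1)[-1].strip()
        for r in responses
        if "Answer:" in r
    ]
    return Counter(finals).most_common(1)[0][0]
```

With, say, ten sampled completions, one stray wrong answer no longer decides the output, which is precisely the inconsistency Stefanache describes.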


From Pilots to Production

Transitioning AI from experimental pilots to full-scale production involves both technical and business considerations. Stefanache points out, “Moving from pilot to production-ready is not necessarily a software development issue; it’s more of a business value issue.” High computational costs demand a clear return on investment (ROI). Rivers cautions against training custom LLMs, stating, “I would probably never try and train a custom LLM again… the lift just doesn’t create the yield.”

Solution: The article recommends starting with rapid, low-cost pilots to test AI applications, such as automated tagging or search enhancement. By evaluating ROI and output quality, companies can decide which tools to scale. Leveraging existing models and vendor partnerships minimizes technical overhead and accelerates deployment.
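The ROI check the article recommends can be made explicit: a pilot is worth scaling only if the hours it saves outweigh compute spend plus the human fact-checking it still requires. This is a deliberately simple sketch with hypothetical parameter names, not a formula from the article.

```python
def pilot_roi(hours_saved_per_month: float,
              hourly_rate: float,
              monthly_compute_cost: float,
              review_hours_per_month: float) -> float:
    """Net monthly return of an AI pilot: efficiency gains minus
    compute costs and the human verification overhead."""
    gain = hours_saved_per_month * hourly_rate
    cost = monthly_compute_cost + review_hours_per_month * hourly_rate
    return gain - cost
```

A pilot that saves 100 staff-hours a month at $50/hour, but costs $2,000 in compute and 20 hours of review, nets $2,000/month; a negative result is the signal to drop the tool rather than scale it.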

Ready to see AI in action?

Visit our dedicated Media page and discover how AI and our core expertise can maximize your media revenue.
