Local Media Association: AI in 2026 — how newsrooms can get more value without losing trust
Ethan Holland, VP of Digital Media at Draper Digital Media, manages technology, revenue, and content for five broadcast outlets and five radio stations. In this episode of the Local Media Association’s Keep It Local podcast — published to YouTube in January 2026 — he speaks with host Ryan Welton about what practical AI adoption looks like from inside a multi-outlet regional newsroom.
The conversation is grounded in operating decisions made under real constraints: limited staff, mixed technical ability, and audiences that require trust above all else. Holland is direct about what has and hasn’t worked.
Who it’s for. Working journalists, editors, newsroom managers, and content strategists at mid-size and smaller publications — particularly those who have absorbed a great deal of general AI coverage and are looking for perspective from someone managing the implementation, not evaluating it from outside.
Key takeaways:
- AI’s primary value is information processing, not writing. Holland’s most pointed argument is that newsrooms keep reaching for AI as a drafting tool, but its actual advantage is converting unstructured material into usable formats: cleaning up transcriptions, structuring data, analyzing images, extracting information from raw documents. Using it to write articles is choosing the feature that requires the most oversight for the least practical gain.
- Context has replaced prompt engineering. In 2026, the skill that matters is providing AI with detailed, specific context about your situation — your outlet’s audience, your beat, the constraints of a particular story. Elaborate prompting techniques that worked in 2024 have been largely superseded because the models handle ambiguity better. The practical implication: anyone who can clearly describe a problem can now use AI effectively.
- Accountability follows the byline, not the tool. Holland is explicit that AI cannot redistribute editorial responsibility. If your name is on a story, you own what’s in it, regardless of how it was produced. This shapes how he thinks about where AI belongs in a workflow — tasks where the journalist still makes the editorial call are acceptable; tasks where the AI makes that call are not.
- Overly restrictive policies are their own risk. Newsrooms that prohibit AI experimentation prevent their staff from developing the practical literacy needed to use it responsibly. Holland argues that informed judgment requires some exposure — journalists who understand what AI can actually do are better positioned to recognize and correct errors than journalists who have never used it.
- Building tools creates structural advantages. Journalists and editors who learn to use coding assistants to automate repetitive tasks — even simple ones — will have capabilities that colleagues cannot match through prompt use alone. Holland frames this as the next skill gap in journalism, analogous to the early-internet advantage of reporters who learned to build basic web tools.
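Holland's "information processing over writing" point lends itself to a concrete sketch. The snippet below turns a raw speaker-labeled transcript into structured records — the kind of extract-and-structure task he describes. The transcript format and field names here are hypothetical, chosen only for illustration; they are not from the episode:

```python
import re

def parse_transcript(raw: str) -> list[dict]:
    """Split a raw speaker-labeled transcript into structured records.

    Assumes lines shaped like "SPEAKER: text" (a hypothetical format).
    Lowercase continuation lines are folded into the previous turn.
    """
    records = []
    for line in raw.splitlines():
        match = re.match(r"^([A-Z][A-Z ]+):\s*(.+)$", line.strip())
        if match:
            speaker, text = match.groups()
            records.append({"speaker": speaker.title(), "text": text})
        elif records and line.strip():
            # Continuation line: append to the previous speaker's turn.
            records[-1]["text"] += " " + line.strip()
    return records

raw = """WELTON: Where does AI actually help day to day?
HOLLAND: Mostly in processing, cleaning up transcriptions
and structuring data, not in writing the story itself."""

for rec in parse_transcript(raw):
    print(rec["speaker"], "|", rec["text"])
```

Once the material is structured this way, downstream steps — search, tagging, quote extraction — become ordinary data work rather than manual rereading.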
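His tool-building argument is similarly concrete: even a short script can absorb a repetitive newsroom chore. This sketch normalizes and deduplicates a messy story-metadata CSV; the column names and slug convention are assumptions for illustration, not anything described in the episode:

```python
import csv
import io

def normalize_stories(csv_text: str) -> list[dict]:
    """Clean a story-metadata CSV: trim whitespace, standardize the
    slug, and drop duplicate slugs. Columns are hypothetical.
    """
    seen = set()
    cleaned = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        slug = row["slug"].strip().lower().replace(" ", "-")
        if slug in seen:
            continue  # Skip repeated entries for the same story.
        seen.add(slug)
        cleaned.append({"slug": slug, "headline": row["headline"].strip()})
    return cleaned

messy = """slug,headline
City Budget , Council approves budget
city-budget,Council approves budget
school board,  Board meets Tuesday"""

for story in normalize_stories(messy):
    print(story["slug"], "|", story["headline"])
```

Scripts at this scale are exactly what coding assistants now make accessible to non-developers, which is the capability gap Holland is pointing at.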
Worth watching if your newsroom has adopted AI for drafting but hasn’t examined what else it could be doing with the same tools; or if you’re managing a team that has polarized around AI — some using it uncritically, others avoiding it entirely — and need a grounded framework for setting expectations and policies.