The Hidden Cost of AI in Town Halls: Efficiency versus Authenticity

Sean Cho
April 1, 2025

AI is transforming employee engagement in town hall meetings, with tools for Q&A, polls, and analytics that streamline responses, summarise feedback, and deliver instant insights. But here’s the catch: while AI boosts efficiency, it can also reshape or dilute employee voices, altering the message employees intend to convey.
When employees ask tough questions or raise concerns in town halls and polls, those messages should reach leadership exactly as they were intended. But what happens when AI offers to 'clean up' or improve those questions? The words might be more polished, but does that mean they're more honest?
For HR and corporate communications teams, the challenge is clear: how do we leverage AI to enhance engagement in town hall meetings without losing the authenticity that makes it meaningful?
AI and employee input: Be careful of the slippery slope
AI is rapidly reshaping workplace discussions, from generating poll questions to drafting Q&A responses. Some tools now offer AI features that help employees refine, shorten, or even change the tone of their questions before submitting them.
That sounds helpful, right? And it can be, especially for employees who feel unsure about how to phrase things. But there’s a trade-off: if AI tweaks a question to read “better” or look more polished in a corporate setting, does it still carry the same weight?
Let's take a simple example. An employee submits a question like:
"Why are we expected to respond to emails after hours?"
And an AI-powered tool might helpfully suggest a rephrased version:
"What are the expectations for replying to emails after hours?"
The question is now clearer and more neutral—but the original frustration, urgency, and core issue are lost. And with AI edits becoming routine, a domino effect emerges: Can employees trust the questions their colleagues ask? Can HR and comms teams trust the sentiment behind them? Can leadership rely on the analysis they receive?
While AI models include filters, they’re not foolproof—they may still produce biased or misleading responses. And with AI making it easy to click and refine a question’s tone or structure, employees might submit without giving much thought, unintentionally changing the intent of their message.

AI-generated polls: Are we asking the right questions?
AI isn’t just helping us analyse employee input—it’s also starting to shape the questions we ask in town halls.
Tools that generate poll and survey questions can be a huge time-saver. But the way a question is framed can still steer responses. Even before AI, manually written questions were at risk of bias. AI makes it easier to write polls at scale, but it can also amplify those same risks.
Example:
- Neutral: "How do you feel about the company's new remote work policy?"
- AI-Optimised: "How has the new remote work policy improved your work-life balance?"
The second question assumes a positive impact before employees even respond. It nudges them toward a particular answer instead of capturing their real experience.
And here’s the kicker: Half of employees worry that AI-generated content might be inaccurate or misleading. If leadership is making decisions based on these responses, they may be reacting to a shaped narrative—not real feedback.
Source: McKinsey & Company
That said, with human review and thoughtful framing, AI-generated polls can support town hall engagement. The key is to double-check for unintended bias and ensure the questions are asking what we really want to know.
Try Pigeonhole Live for free to run authentic town halls that keep your employees' intent front and centre.
AI translation: Bridging gaps or losing meaning?
AI translation is opening doors like never before. Global employees who once struggled with language barriers can now join Q&As, see live captioning, and hear speech translation during town halls. That’s a big win for inclusivity.
But there’s still a challenge: translation is not just about converting words—it’s about conveying tone, meaning, and cultural nuance.
For example, the Japanese phrase "心の風邪を引きました" is typically rendered by AI as "I caught a cold of the heart." Try it for yourself in a translation app!
In context, though, the phrase is a common euphemism for depression, not a literal cold. A Japanese speaker will understand it as referring to mental health, while AI may take it at face value.
That’s why it helps to pair AI translation with native speakers who understand the language, tone, and cultural context. If a foreign CEO visits a local office, for example, a bilingual team member can help review feedback in a way that truly reflects what employees are trying to say.
When used with care, AI translation helps bring people into the conversation—but we need to make sure we’re truly hearing what’s being said.
AI summarisation: Losing the full story?
AI is great at compiling large amounts of data and surfacing trends. After a large town hall meeting, AI can summarise hundreds or even thousands of comments in minutes—a task that would take teams days or weeks.
But that convenience has a cost.
Let’s say:
- Employees express frustration over “unrealistic quarterly targets”.
- AI might summarise that as: “Concerns about quarterly targets”.
That summary isn’t wrong—but it’s been neutralised. The frustration and urgency behind the comments have been softened into a generic business talking point.
And here’s another risk: AI might shape both the input and the output. If employees use AI to refine their questions, the sentiment may already be softened; when AI then summarises that softened input, the message shifts even further.
By the time the summary reaches leadership, it may barely reflect what employees originally meant—like a game of Telephone, where each retelling changes the story!
What about AI summarising translated Q&A? To keep things accurate and true to what was originally said, insights should come straight from the original language before translation. That way, nothing gets lost or reshaped by AI along the way.
To keep insights accurate, summarisation should be treated as a first draft. The time saved on manually compiling insights should be used where it matters most—carefully reviewing them to ensure key concerns aren’t lost.
As communications experts put it: “We maintain authenticity by taking information created by AI, assessing it, adapting it, and then finally articulating it. The last three steps must be done by a human.”
Source: Public Relations Society of America Inc.
AI isn’t here to replace human judgement (yet)—it’s here to support it. Used well, AI can unlock efficiency, bridge language barriers, and help surface key themes from large volumes of feedback.
But it should never come at the cost of an authentic employee voice in town halls.
So how do we make it work?
- Use AI to reduce friction, not meaning. Let it translate, summarise, and support—not speak for employees.
- Keep team members in the loop. Employees and leadership need confidence that the insights are real—not just algorithmically generated.
- Watch for unintentional bias. AI-generated polls and summaries should be reviewed to ensure they reflect what employees really think and feel.
Final thought: AI should illuminate, not obscure
AI can and should be used to achieve efficiency in engaging employees during town halls. But let’s not lose the authenticity in the process. When used with care and intent, AI can help organisations hear more voices—more clearly and more fairly.
Choose AI tools that amplify employee voices in town hall meetings, not alter them. Engagement is about listening—so let’s make sure we’re truly hearing what’s being said.
Choose tools that prioritise clarity and authenticity—try Pigeonhole Live and empower your employees to be heard just as they intended.