Lessons from the Past: How Employees Perceived Change

Formal overview of how change practitioners assess employee perceptions of past organisational change via document review, interviews, surveys and usage indicators; includes rapid-estimate techniques for tight timelines and notes major pitfalls and errors.

Employees’ expectations of a new initiative are shaped by the organisation’s history of change: whether earlier programmes delivered promised benefits, how disruption was managed, and whether leaders were perceived as credible. Empirical reviews of change recipients’ reactions show that such perceptions influence readiness, engagement and resistance (Oreg, Vakola and Armenakis, 2011).

A related risk is cynicism about organisational change, which can form when repeated efforts are judged to have failed (Reichers, Wanous and Austin, 1997).

Assessment methods used by change practitioners

A practitioner typically develops a perception profile by triangulating (1) evidence of what happened, (2) how people remember it, and (3) what they believe will happen again. Common methods include:

  • Document and metrics scan: Review post-implementation reviews, engagement survey comments, change saturation logs, adoption/usage data, training completion, support tickets and turnover/absence patterns around prior initiatives; look for “promise vs. experience” gaps.

  • Targeted interviews and focus groups: Use semi-structured prompts that anchor on specific prior changes (e.g., “the last system rollout”) to elicit concrete examples, perceived trade-offs and trust cues; compare themes across levels, functions and sites.

  • Structured perception probing: Frame questions around message elements such as discrepancy, appropriateness, efficacy, principal support and personal valence (Armenakis and Harris, 2002); this supports consistent coding of interview notes and quick survey items.

    Example: if a prior CRM programme achieved high training completion but low day-to-day usage, and employees describe “extra admin with no customer benefit”, the perception problem is likely credibility and workload, not awareness. This finding typically shifts the next initiative toward visible sponsorship, capacity management and early “proof points”.
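The CRM example above can be sketched as a simple triage rule. This is a minimal, illustrative sketch only: the function name, field names, thresholds and sentiment labels are assumptions for demonstration, not a standard diagnostic instrument.

```python
# Hypothetical sketch: flag a "promise vs. experience" gap from prior-initiative
# metrics. Thresholds and labels are illustrative assumptions.

def classify_perception_gap(training_completion, daily_usage, sentiment_theme):
    """Return a rough read on where the perception problem likely sits.

    training_completion, daily_usage: proportions in [0, 1] (assumed inputs).
    sentiment_theme: a coded theme from interview or survey comments.
    """
    if training_completion >= 0.8 and daily_usage < 0.4:
        # People were trained but do not use the tool day to day: awareness is
        # unlikely to be the issue; look at credibility and workload instead.
        if sentiment_theme in {"extra admin", "no benefit"}:
            return "credibility/workload"
        return "usability or fit"
    if training_completion < 0.5:
        return "awareness or access"
    return "no obvious gap"

# Mirrors the CRM example: high training completion, low day-to-day usage,
# and comments coded as "extra admin".
print(classify_perception_gap(0.92, 0.31, "extra admin"))
```

In practice the thresholds would be set against the organisation's own baselines rather than fixed values, but the point stands: the same adoption numbers support different interventions depending on the perception theme attached to them.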

Rapid estimation when time is short

When discovery time is limited, practitioners often publish a bounded estimate (with a stated confidence level) rather than an exhaustive diagnostic. A common rapid approach is:

  1. Micro-interviews: 8–12 short conversations (5–10 minutes) spanning levels and functions, including known sceptics and informal influencers; capture representative phrases.

  2. Five-item pulse: a same-day survey on perceived delivery, trust in leaders, capacity for change, confidence in support, and expected personal impact; segment by team/site if feasible.

  3. Triangulation: reconcile interview themes with readily available adoption/support data; publish a one-paragraph perception statement plus “what would change this view” actions.
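Steps 2 and 3 above can be sketched as a small aggregation: score the five pulse items by segment and flag the ones that need attention in the perception statement. The item names, the 1–5 scale and the watch threshold are assumptions for illustration, not part of any standard instrument.

```python
# Illustrative sketch: aggregate a five-item pulse survey by team and flag
# low-scoring items. Item names, scale and threshold are assumed, not standard.
from statistics import mean

ITEMS = ["delivery", "trust", "capacity", "support", "impact"]

def summarise_pulse(responses):
    """responses: list of dicts like {"team": "Ops", "delivery": 3, ...},
    scored on an assumed 1-5 scale. Returns per-team item means, items below
    a watch threshold, and the sample size (small n => low confidence)."""
    by_team = {}
    for r in responses:
        by_team.setdefault(r["team"], []).append(r)
    summary = {}
    for team, rows in by_team.items():
        means = {item: round(mean(row[item] for row in rows), 2) for item in ITEMS}
        summary[team] = {
            "means": means,
            "watch": [i for i, m in means.items() if m < 3.0],  # assumed threshold
            "n": len(rows),  # report n so the confidence level can be stated
        }
    return summary

sample = [
    {"team": "Ops", "delivery": 2, "trust": 3, "capacity": 2, "support": 4, "impact": 3},
    {"team": "Ops", "delivery": 3, "trust": 2, "capacity": 2, "support": 4, "impact": 3},
]
print(summarise_pulse(sample)["Ops"]["watch"])
```

Reporting `n` alongside each segment keeps the output honest about its limits, which supports the "stated confidence level" the published estimate requires.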

A facilitated debrief (e.g., an after-action review) can substitute for missing records when the history of a past programme is contested.

Common pitfalls and errors

  • Sampling only “supporters” and missing local subcultures or frontline workarounds.

  • Conflating general morale with change-specific sentiment (misattribution).

  • Asking leading questions that encourage blame rather than observable impacts.

  • Failing to protect confidentiality, suppressing candour and reinforcing low trust.

  • Overstating certainty from rapid data; omitting confidence levels and known unknowns.

  • Treating “communication” as the root cause without checking process, tools and workload.

References

Armenakis, A.A. and Harris, S.G. (2002) ‘Crafting a change message to create transformational readiness’, Journal of Organizational Change Management, 15(2), pp. 169–183.

Oreg, S., Vakola, M. and Armenakis, A. (2011) ‘Change recipients’ reactions to organizational change: a 60-year review of quantitative studies’, The Journal of Applied Behavioral Science, 47(4), pp. 461–524.

Reichers, A.E., Wanous, J.P. and Austin, J.T. (1997) ‘Understanding and managing cynicism about organizational change’, Academy of Management Executive, 11(1), pp. 48–59.
