Measuring ABSUP: Awareness, Buy-In, Skills, Usage and Proficiency

Describes the sequential change-adoption states (ABSUP): Awareness, Buy-In, Skills, Usage and Proficiency. Summarises how each state manifests, how it is measured (surveys, observation, analytics), and the pitfalls that arise when stages are skipped or confused, producing retraining demand and resistance.

Awareness, Buy-In, Skills, Usage and Proficiency (ABSUP) can be used as sequential state variables for describing individual adoption in planned organizational change. Each state is treated as a necessary precondition for the next: people must understand the change (Awareness) before they can credibly commit to it (Buy-In); they must be able to perform it (Skills) before it can occur reliably in daily work (Usage); and sustained, high-quality performance (Proficiency) is typically required for business benefits to materialize. Attempts to skip states—for example, deploying a system before sensemaking and capability building—often create confusion, retraining demand, and avoidable resistance. The logic is compatible with stage models such as Prosci’s ADKAR [1] and reinforcement-oriented approaches in change leadership frameworks [2].

Concept and measurement approach

Measurement of ABSUP is usually designed as a mixed-method system that combines:

· self-report (surveys),

· qualitative inquiry (interviews or focus groups),

· direct observation,

· objective operational data.

Training evaluation practice similarly distinguishes between learning evidence and behavioral transfer to work [3].

Because ABSUP is sequential, data collection is commonly staged over time: early communications are evaluated for Awareness and Buy-In, training and practice for Skills, go-live for Usage, and stabilization for Proficiency. Results are typically segmented by role and location because adoption is uneven across social systems [5].

Practitioners usually define thresholds for each state (e.g., minimum comprehension score; minimum observed task accuracy; minimum workflow completion rate) and treat the lowest state as the limiting constraint for adoption outcomes. Where usage telemetry is employed, transparency, privacy safeguards, and proportionality are commonly treated as essential.
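The threshold-and-bottleneck logic described above can be sketched in a few lines. The state order follows the article; the specific threshold and score values below are illustrative assumptions, not prescribed standards.

```python
# Sequential ABSUP states, earliest first.
ABSUP_ORDER = ["Awareness", "Buy-In", "Skills", "Usage", "Proficiency"]

def limiting_state(scores, thresholds):
    """Return the earliest state whose score falls below its threshold;
    that state is treated as the limiting constraint for adoption.
    Returns None when every threshold is met."""
    for state in ABSUP_ORDER:
        if scores[state] < thresholds[state]:
            return state
    return None

# Illustrative thresholds and measured scores (0-1 scale, assumed).
thresholds = {"Awareness": 0.8, "Buy-In": 0.7, "Skills": 0.75,
              "Usage": 0.6, "Proficiency": 0.5}
scores = {"Awareness": 0.9, "Buy-In": 0.65, "Skills": 0.8,
          "Usage": 0.7, "Proficiency": 0.4}

bottleneck = limiting_state(scores, thresholds)
# Buy-In is the earliest below-threshold state, even though
# Proficiency has the lowest absolute score.
```

Because the model is sequential, the earliest gap (not the lowest absolute score) is treated as the binding constraint.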

Operational summary

Each state with its typical evidence sources:

· Awareness: comprehension pulses; Q&A themes; recall of ‘why/what/when’ in interviews.

· Buy-In: commitment and intent-to-use items; participation signals; barrier diagnostics (e.g., COM-B) [7].

· Skills: performance-based assessments; observed practice; work-sample reviews; training transfer checks [3].

· Usage: workflow completion and coverage; audit results; system telemetry with governance [4].

· Proficiency: quality/cycle-time trends; error rework; advanced-case checks; peer calibration [6].

Awareness

Awareness is the extent to which an individual can accurately describe that a change exists, why it is occurring, and what it implies for their role. In field settings, awareness is reflected in consistent recall of the case for change, key dates, and local impacts, rather than mere exposure to communications. Awareness is an explicit first stage in ADKAR [1].

Measurement

· Comprehension pulses: short checks that test recall of the change purpose, scope, and timing (not only whether messages were received).

· Sensemaking interviews: structured prompts coded for accuracy and consistency of the ‘why/what/what changes for me’ narrative.

· Question analytics: the ratio of ‘what is this?’ questions to ‘how do I do it?’ questions as an early maturity signal.
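The question-analytics signal can be sketched as a simple keyword classifier. The marker phrases below are illustrative assumptions; a production system would need a better classifier than keyword matching.

```python
def question_maturity_ratio(questions):
    """Classify questions as orientation ('what is this?') versus
    execution ('how do I do it?') and return the share of execution
    questions among those classified, as a rough maturity signal.
    Keyword matching is a deliberate simplification."""
    what_markers = ("what is", "why are", "when does", "who decided")
    how_markers = ("how do", "how can", "where do i", "which button")
    what = how = 0
    for q in questions:
        ql = q.lower()
        if any(m in ql for m in how_markers):
            how += 1
        elif any(m in ql for m in what_markers):
            what += 1
    total = what + how
    return how / total if total else None

qs = ["What is this change for?",
      "How do I submit the new form?",
      "Why are we switching systems?",
      "How can I correct an entry?"]
ratio = question_maturity_ratio(qs)
# 2 of the 4 classified questions are 'how' questions -> 0.5
```

A rising ratio over successive communication rounds suggests the audience is moving from orientation toward execution.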

Pitfalls and common confusions

· Using distribution metrics (emails sent, meetings held) as a proxy for understanding.

· Treating early confusion as ‘training need’ when the narrative and local impact are still unclear.

· Over-communicating detail before the basics are stable, increasing noise and rumor.

Awareness is often misread as Buy-In when practitioners interpret silence as agreement. It may also be misread as Skills when questions are assumed to indicate inability rather than incomplete sensemaking.

Buy-In

Buy-In is the willingness to support the change and to prioritize the effort needed to adopt it. It typically includes perceived value, fairness, trust in leadership, and intention to use new processes. Acceptance research highlights intention as a precursor to use, influenced by perceived usefulness and ease of use [4].

Measurement

· Commitment and intent items: Likert questions on perceived value, confidence, and intent to adopt; supplemented with open text.

· Participation signals: volunteering for pilots, contributing to co-design, or completing optional clinics.

· Barrier diagnostics: structured exploration of capability, opportunity and motivation constraints (COM-B) [7].
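One way to keep intention, value, and trust measured separately (rather than collapsed into a single satisfaction score) is to average Likert items into named subscales. The item names and item-to-subscale mapping below are hypothetical, for illustration only.

```python
from statistics import mean

def buyin_subscales(responses):
    """Average 1-5 Likert items into separate Buy-In subscales.
    The mapping of items to subscales is an illustrative assumption."""
    subscales = {
        "intention": ["intend_to_use", "will_prioritise"],
        "value": ["perceived_value", "helps_my_work"],
        "trust": ["trust_leadership", "fair_process"],
    }
    return {name: mean(responses[item] for item in items)
            for name, items in subscales.items()}

# One respondent's (hypothetical) item scores.
r = {"intend_to_use": 4, "will_prioritise": 3,
     "perceived_value": 5, "helps_my_work": 4,
     "trust_leadership": 2, "fair_process": 3}
subs = buyin_subscales(r)
# intention 3.5, value 4.5, trust 2.5 -> value is high but trust lags,
# a pattern a single satisfaction question would hide.
```

Separating the subscales makes the diagnosis actionable: low trust calls for a different intervention than low perceived value.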

Pitfalls and common confusions

· Equating compliance with buy-in; coerced compliance often fades once scrutiny decreases.

· Attributing low buy-in to attitude alone when constraints (time, access, workload) are the binding cause.

· Using a single satisfaction question instead of measuring intention, value, and trust separately.

Buy-In is commonly confused with Awareness (people can understand the rationale but still oppose the change) and with Skills (low confidence can be mistaken for dislike). Distinguishing willingness from capability reduces misdiagnosis.

Skills

Skills capture whether an individual can perform required tasks correctly in realistic conditions. Skills are evidenced by demonstrated performance, not by training attendance. Staged skill development is a common theme in competence models such as the Dreyfus model [6].

Measurement

· Performance-based assessments: scenario tasks scored against a rubric; pre/post comparisons used for learning gain [3].

· Observed practice: sandbox simulations or supervised live transactions, with error types recorded for targeted coaching.

· Work-sample reviews: inspection of completed records, forms, or outputs against a checklist and quality standard.
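Rubric-based scoring with a pre/post comparison can be sketched as follows. The rubric criteria are hypothetical, and the normalised-gain formula (improvement relative to remaining headroom, in the style of Hake's normalised gain) is one common choice among several.

```python
def rubric_score(observed, rubric):
    """Fraction of rubric criteria the observed performance met."""
    return sum(1 for criterion in rubric if observed.get(criterion)) / len(rubric)

def learning_gain(pre, post):
    """Normalised gain: improvement as a share of available headroom."""
    return (post - pre) / (1 - pre) if pre < 1 else 0.0

# Hypothetical rubric for a scenario task.
rubric = ["correct_fields", "right_sequence", "exception_flagged"]

pre = rubric_score({"correct_fields": True}, rubric)       # 1 of 3 criteria
post = rubric_score({"correct_fields": True,
                     "right_sequence": True,
                     "exception_flagged": True}, rubric)   # all 3 criteria
gain = learning_gain(pre, post)  # 1.0: all remaining headroom closed
```

Scoring against explicit criteria keeps the evidence about demonstrated performance rather than attendance.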

Pitfalls and common confusions

· Treating attendance or completion of certificates as proof of skill.

· Testing recall instead of end-to-end performance (skills gaps often appear in workflow integration).

· Assuming errors are learner deficits when they may reflect poor tool design or missing job aids.

Skills is often confused with Usage when any activity is treated as competence; users may ‘use’ a system while producing low-quality outputs. Skills is also confused with Proficiency when only basic tasks are assessed and performance under complexity is ignored.

Usage

Usage is the extent to which the new way of working occurs in daily operations at the expected frequency, coverage, and timeliness. In technology change, actual system use is often treated as a behavioral endpoint following intention and enabling conditions [4].

Measurement

· Workflow completion: rates of end-to-end process completion in the new way of working, segmented by role and site.

· Coverage and leakage: proportion of eligible work handled through the new process versus legacy or shadow methods.

· Telemetry and audits: feature use, task completion and exception rates, combined with periodic manual validation.
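Coverage and leakage, as described above, reduce to a simple proportion over eligible work. The counts below are illustrative.

```python
def coverage_and_leakage(eligible, via_new_process):
    """Coverage: share of eligible work completed through the new
    process. Leakage: the remainder, handled via legacy or shadow
    methods. Both assume 'eligible' work is reliably countable."""
    coverage = via_new_process / eligible if eligible else 0.0
    return coverage, 1.0 - coverage

# 200 eligible items this period; 150 went through the new workflow.
cov, leak = coverage_and_leakage(eligible=200, via_new_process=150)
# cov = 0.75, leak = 0.25
```

Tracking leakage explicitly surfaces the workarounds (shadow spreadsheets, delegated entry) that login counts alone would miss.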

Pitfalls and common confusions

· Measuring only logins or ‘active users’, which can overstate meaningful use.

· Ignoring workarounds (shadow spreadsheets, delegated entry) that preserve old behaviors.

· Assuming non-use is resistance when access, role design, or incentives are misaligned.

Usage is frequently confused with Proficiency when volume metrics are treated as quality. It is also confused with Buy-In when practical blockers (permissions, time, equipment) suppress use despite positive attitudes.

Proficiency

Proficiency is stable, high-quality performance of the new behaviors, including efficiency, judgment and adaptability. It is commonly visible when individuals handle exceptions and sustain outcomes without intensive support. Proficiency is an explicit stage in the Dreyfus model [6] and often mediates whether change benefits appear in operational and customer metrics.

Measurement

· Outcome metrics: sustained trends in quality, rework, compliance defects, customer experience and business KPIs linked to the change.

· Efficiency and stability: cycle time, throughput, and variance reduction after the initial learning curve.

· Advanced-case checks and peer calibration: periodic review of complex scenarios and evidence of coaching others.
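The efficiency-and-stability signal can be sketched as a variance check on recent cycle times: one simple rule treats performance as stabilised when the coefficient of variation over a recent window drops below a threshold. The window size and threshold are illustrative choices, not standards.

```python
from statistics import mean, pstdev

def stabilised(cycle_times, window=4, cv_threshold=0.15):
    """Return True when the coefficient of variation (stdev / mean)
    over the most recent `window` observations falls below the
    threshold, i.e. performance has settled after the learning curve."""
    recent = cycle_times[-window:]
    if len(recent) < window:
        return False
    cv = pstdev(recent) / mean(recent)
    return cv < cv_threshold

early = [30, 22, 41, 27, 35]        # noisy learning-curve phase
late = early + [20, 21, 20, 19]     # settled, low-variance performance
# stabilised(early) -> False; stabilised(late) -> True
```

A falling coefficient of variation alongside a falling mean is the pattern that distinguishes Proficiency from mere Usage volume.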

Pitfalls and common confusions

· Declaring success at go-live; early usage frequently precedes stable proficiency.

· Attributing weak outcomes solely to individuals when process or system defects constrain performance.

· Allowing reinforcement to decay; without feedback loops, performance can regress over time [2].

Proficiency is commonly confused with Skills because both relate to capability; the distinction is sustained quality and adaptability under complexity. It is also confused with Usage when activity is measured without considering error rates, rework and outcomes.

Using ABSUP data in practice

ABSUP measures are commonly assembled into a profile (by role, site or cohort) and used to select targeted interventions. The general pattern is to treat the lowest state as the bottleneck and to avoid ‘one-size-fits-all’ activities.

· Low Awareness: simplify and localize the narrative; strengthen two-way channels; verify comprehension.

· Low Buy-In: address value and fairness concerns; remove friction; involve skeptics in problem-solving; align incentives.

· Low Skills: increase guided practice; improve job aids; certify readiness; correct usability issues.

· Low Usage: remove legacy paths; fix access/workflow blockers; provide just-in-time support; audit leakage.

· Low Proficiency: implement coaching and feedback loops; tune processes; reinforce standards and outcomes [2].
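The profile-to-intervention pattern above can be sketched as a lookup keyed by the earliest below-threshold state. The intervention summaries paraphrase the list above; the scores and uniform threshold are illustrative assumptions.

```python
# Sequential ABSUP states, earliest first.
ORDER = ["Awareness", "Buy-In", "Skills", "Usage", "Proficiency"]

# Intervention themes per limiting state (paraphrased from the text).
INTERVENTIONS = {
    "Awareness": "simplify and localise the narrative; verify comprehension",
    "Buy-In": "address value and fairness; remove friction; align incentives",
    "Skills": "increase guided practice; improve job aids; certify readiness",
    "Usage": "remove legacy paths; fix blockers; just-in-time support",
    "Proficiency": "coaching and feedback loops; reinforce standards",
}

def recommend(scores, thresholds):
    """Return the earliest below-threshold state and its intervention
    theme; if all thresholds are met, recommend sustain-and-monitor."""
    for state in ORDER:
        if scores[state] < thresholds[state]:
            return state, INTERVENTIONS[state]
    return None, "sustain and monitor"

# Illustrative cohort profile with a uniform 0.7 threshold.
scores = {"Awareness": 0.9, "Buy-In": 0.8, "Skills": 0.55,
          "Usage": 0.4, "Proficiency": 0.3}
state, action = recommend(scores, dict.fromkeys(ORDER, 0.7))
# Skills is the earliest gap, so skill-building comes before
# usage enforcement, even though Usage scores lower.
```

Running this per role or site yields the segmented profiles the section describes, avoiding one-size-fits-all interventions.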

References
