How to Measure Your Team's AI Readiness

Updated April 2026 · By Wayne Cederholm

Before you roll out AI to your team, you need to know where they stand. This guide explains the five dimensions of AI fluency, how to measure them, and what to do with the results.

Why assessment comes before training

The number one mistake in AI rollouts is assuming everyone starts at the same place. They do not. In any team of 20 people, you will have 2-3 who are already using AI daily, 10-12 who have tried it once or twice, and 5-8 who have never used it at all.

If you put all 20 in the same training session, you bore the advanced users, overwhelm the beginners, and frustrate everyone in between. Assessment lets you segment your team and provide the right training at the right level. It also gives you a baseline so you can measure improvement. Our guide on AI change management covers the full rollout playbook once you have your assessment results.

The five dimensions of AI fluency

AI fluency is not a single skill. It is five interrelated capabilities that develop at different rates:

1. AI Awareness (20% weight). Does the person know what AI tools exist and what they can do? This ranges from "has never heard of ChatGPT" to "tracks AI developments and can explain trade-offs between models." Awareness is the foundation. Without it, nothing else happens.

2. Practical Usage (30% weight). Is the person actually using AI in their work? This is the highest-weighted dimension because it measures behavior, not knowledge. Someone who uses AI weekly for real work tasks is more fluent than someone who can describe every model but never uses them.

3. Critical Evaluation (20% weight). Can the person evaluate AI output and know when not to trust it? This is the safety dimension. A team that blindly trusts AI output is a liability. A team that knows when to verify, how to spot hallucinations, and which tasks require human judgment is an asset.

4. Building Capability (20% weight). Can the person create AI-powered tools, workflows, and systems? This moves beyond using AI to building with it: saving prompts, creating libraries, designing workflows, and building knowledge bases. This is where compound value lives.

5. Leadership and Culture (10% weight). Does leadership support and model AI usage? This dimension assesses the environment, not the individual. A team with high individual fluency but resistant leadership will stall. A team with supportive leadership and low individual fluency will grow.

How to run the assessment

Step 1: Team lead scores the rubric. Have each team lead rate their team (not individual members) on each dimension using the 1-5 scale. This takes 10 minutes.

Step 2: Individual survey. Send the 10-question survey to every team member. Anonymous responses are more honest. This takes each person 5 minutes.

Step 3: Score and segment. Average the rubric scores using the weights above. Cross-reference with survey responses to validate.

Step 4: Interpret and plan. Teams scoring 3.5+ are ready for AI pilots. Teams scoring 2.0-3.5 need foundational training first. Teams below 2.0 need executive sponsorship and a longer runway.

The full rubric, survey questions, and scoring guide are available in our Team AI Fluency Assessment download.
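As a quick sanity check before the full download, the scoring in Steps 3 and 4 can be sketched in a few lines of code. This is an illustrative sketch only: the weights and thresholds come from this article, but the function and variable names are ours, not part of the official assessment.

```python
# Weights for the five dimensions of AI fluency (from the article).
WEIGHTS = {
    "awareness": 0.20,
    "practical_usage": 0.30,
    "critical_evaluation": 0.20,
    "building_capability": 0.20,
    "leadership_culture": 0.10,
}

def fluency_score(ratings: dict) -> float:
    """Weighted average of 1-5 rubric ratings (Step 3)."""
    return sum(WEIGHTS[dim] * ratings[dim] for dim in WEIGHTS)

def segment(score: float) -> str:
    """Map a score to the readiness tiers from Step 4."""
    if score >= 3.5:
        return "ready for AI pilots"
    if score >= 2.0:
        return "needs foundational training"
    return "needs executive sponsorship and a longer runway"

# Example: a hypothetical team's rubric scores.
team = {
    "awareness": 4,
    "practical_usage": 3,
    "critical_evaluation": 3,
    "building_capability": 2,
    "leadership_culture": 4,
}
score = fluency_score(team)  # 0.8 + 0.9 + 0.6 + 0.4 + 0.4 = 3.1
print(f"{score:.2f} -> {segment(score)}")  # 3.10 -> needs foundational training
```

Note how the 30% weight on Practical Usage means a team that actually uses AI can outscore a team that merely knows about it, which is the intent of the rubric.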

What to do with the results

High scorers (3.5+): These are your champions. Give them the tools and time to build. They become mentors for the rest of the organization. Start your AI pilot with this group.

Middle scorers (2.0-3.5): These are the majority. They need structured training: a 90-minute AI literacy session, hands-on workshops, and a buddy from the high-scorer group. Most will reach 3.5+ within a month of supported use.

Low scorers (below 2.0): Do not push AI on this group first. Focus on awareness and comfort. Let them watch colleagues succeed before asking them to participate. Resistance is often fear, and fear dissolves with proximity to positive examples.

Frequently asked questions

How often should we reassess AI fluency?

Quarterly for the first year of AI adoption, then semi-annually. AI capabilities change fast. A team that was advanced 6 months ago may be behind if they have not kept up with new tools and techniques. Regular assessment keeps training targeted and reveals regression early.

Should the assessment be anonymous?

The individual survey should be anonymous to get honest answers. The team-level rubric scored by the team lead is not anonymous. Combining anonymous individual data with named team-level assessment gives you the fullest picture without making anyone feel exposed.

What if leadership scores low on the leadership dimension?

This is the most common and most important finding. If leadership is passive or resistant, individual training will not stick. Address leadership first with an executive briefing that focuses on business impact and competitive risk, not technology features. Leaders need to see AI as a business decision, not an IT project.

Get the complete assessment

Full rubric, 10-question survey, and scoring guide. Free download. For organizations that need hands-on support, see our enterprise training programs.

