Capability Frontier
Dimension: Capability · Type: Foundation
A four-level maturity scale for AI use, plotted across capability areas rather than across job levels: Explorer, Adopter, Practitioner, Builder. Locate yourself honestly, pick one or two adjacent moves, do not try to advance everywhere at once.
Introduced by Matt Valente (Digital Talent Acquisition and Talent Management Lead, UNICC) at The Skills Shift session of the UN Inter-Agency Career Week 2026, on 8 May 2026. Matt ran a live version of UNICC’s AI Skills Shift assessment with the audience, narrated the resulting plot in real time, and used the Users vs Operators pattern to argue that the right development move is area-specific rather than global.
The framework
The frontier runs across two axes at once: frequency (how often you use AI) and depth (how integrated and consequential the use is). The point is not to reach Builder in everything; it is to know where you are and to move with intent.
When to use it
- When you are deciding where to invest learning time on AI and need a way to choose between ten plausible directions.
- After completing the AI Skills Shift assessment, to interpret the result instead of treating it as a score.
- During an annual or quarterly capability review with a supervisor, to make a concrete development conversation rather than a generic “I should learn more AI”.
- When mapping a team’s AI maturity, to target a small number of high-impact moves instead of broad upskilling.
The four levels (plus a baseline)
Not Yet. You are not really using AI in this capability area. Either the area does not yet feel relevant to your work or you have not had a starting point that stuck.
This is not a failure state. The session was clear that, among the 200-plus respondents to the live assessment, “Not Yet” was concentrated in process automation and integration/tooling, including from people who were Practitioners or Builders elsewhere. An honest “Not Yet” beats an inflated “Adopter”.
Explorer. You are trying things. Occasional use, often experimental. You have prompted Claude or ChatGPT, you have looked at one or two tools, you have a sense of what AI feels like in your workflow but it has not yet changed how you operate.
The marker: the use is curiosity-led rather than workflow-led. You reach for AI when you remember to.
Adopter. You use AI regularly in this capability area, but at a relatively shallow depth. You have found a few prompts or tools that work for you, you reuse them, and they have removed some friction from your week.
The marker: the use is repeating but not yet integrated. AI is a tool you sometimes pick up; it has not yet become part of how the work is done.
Practitioner. You use AI deeply and routinely in this capability area. You have integrated it into a workflow you can describe to someone else. You can spot when AI is producing low-quality output and correct it. You can articulate what AI gives you that the alternative does not.
The marker: the use is integrated and discriminating. You reach for AI as a default, but you also know when not to.
Builder. You design new AI-assisted workflows, build small tools, automate parts of your or your team’s work, and teach others. Multiple uses per day, advanced usage, and a portfolio of artefacts.
The marker: you are not just using AI, you are configuring how it operates in your context.
On the two axes
The session’s live assessment with about 200 respondents made the two axes legible. Two distinct positions emerged:
- Users: high frequency, lower depth. The “ICT” cohort sat here: using often, not deeply integrated, partly because legacy systems made deeper integration hard.
- Operators: lower frequency, higher depth. The “economic, social, development” and the “political/humanitarian” cohorts sat here: not using daily, but when they did, they used AI in deeper, more consequential ways.
Neither is “better”. The diagnostic value is in seeing which position you occupy and choosing the deliberate next move (an Operator might want more frequency; a User might want more depth).
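The two-axis framing can be sketched as a tiny classifier. The thresholds and labels here are illustrative assumptions for demonstration, not the assessment’s actual scoring rubric:

```python
# Illustrative sketch of the frequency/depth framing from the session.
# The 0..1 normalisation and the 0.5 midpoint are assumptions, not the
# AI Skills Shift assessment's actual scoring.

def position(frequency: float, depth: float, midpoint: float = 0.5) -> str:
    """Classify a (frequency, depth) pair on the two axes."""
    if frequency >= midpoint and depth < midpoint:
        return "User"       # high frequency, lower depth
    if frequency < midpoint and depth >= midpoint:
        return "Operator"   # lower frequency, higher depth
    if frequency >= midpoint and depth >= midpoint:
        return "High on both axes"
    return "Early on both axes"

# The two cohort positions described in the session:
print(position(0.8, 0.3))  # ICT cohort -> "User"
print(position(0.3, 0.8))  # political/humanitarian cohort -> "Operator"
```

The point of the sketch is only that the two axes are independent: the same person can score high on one and low on the other, and the diagnostic move differs in each case.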
Steps
- List your capability areas. Start with the assessment’s five (process automation, integration and tooling, verification and critical thinking, AI-augmented writing, data interpretation). Add areas specific to your work (synthesis across long inputs, cross-language drafting, data visualisation).
- Place yourself honestly on each area. Use the four levels. If you are unsure between two levels, place yourself at the lower one. Self-ratings overestimate by default, so calibrate down.
- Spot the asymmetry. Most people are Practitioner in one or two areas, Adopter in two or three, and Not Yet or Explorer in the rest. The asymmetry is the point.
- Pick one or two adjacent moves, not five. Adjacent means one level up in one area. From Adopter to Practitioner in AI-augmented writing is adjacent. From Not Yet to Builder in process automation is not.
- Define the move concretely. “Move from Adopter to Practitioner in AI-augmented writing” is too abstract; “build one repeatable workflow for synthesising weekly partner inputs into a brief, by mid-June” is concrete.
- Schedule a 20-minute weekly rhythm for it. One fixed slot, one tool or one feature per week, for the duration of the move.
- Review at quarter-end. Place yourself again. If you have moved, lock the new level into how you describe yourself in applications. If you have not, change the move (the area, the cadence, or the artefact you were trying to produce).
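The placement-and-move steps above can be sketched in a few lines. The level names are from the framework; the example placements are hypothetical:

```python
# Sketch of the steps above: place yourself per area, then pick one
# adjacent (one-level-up) move. Placements below are hypothetical.

LEVELS = ["Not Yet", "Explorer", "Adopter", "Practitioner", "Builder"]

def adjacent_move(placements: dict[str, str], focus_area: str) -> str:
    """Return the one-level-up target for a chosen area."""
    current = LEVELS.index(placements[focus_area])
    if current == len(LEVELS) - 1:
        return placements[focus_area]  # already Builder: no move up
    return LEVELS[current + 1]

placements = {
    "AI-augmented writing": "Practitioner",
    "Verification and critical thinking": "Adopter",
    "Data interpretation": "Explorer",
    "Integration and tooling": "Not Yet",
    "Process automation": "Not Yet",
}

# One adjacent move, not five:
print(adjacent_move(placements, "Verification and critical thinking"))
# -> "Practitioner"
```

The function encodes the adjacency rule: the target is always exactly one level up in one area, never a two-level jump and never five areas at once.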
Worked example
A staff member completes the assessment and gets the following plot:
- AI-augmented writing: Practitioner (uses Claude daily for drafting, has a prompt library, can spot weak output).
- Verification and critical thinking: Adopter (sometimes asks AI to challenge an argument, but does not have a workflow).
- Data interpretation: Explorer (has tried Power BI’s Copilot, does not yet trust the output).
- Integration and tooling: Not Yet (does not connect AI to other tools).
- Process automation: Not Yet (no automation built).
Without the frontier framing, this person is tempted to “learn more AI” everywhere at once. With it, the move becomes specific:
Move from Adopter to Practitioner in verification and critical thinking by end of Q3. Concrete artefact: build a prompt that takes a 5-page draft brief and produces a structured red-team critique (assumptions, weak evidence, alternative readings) that I run on every brief before sending. Cadence: 20 minutes every Tuesday for six weeks.
That is one move, in one area, with a defined artefact and a date. The other four areas are deliberately not in scope this quarter.
The adjacent-move logic also explains what not to attempt. Going from Not Yet to Practitioner in process automation in one quarter is a two-level jump. The person above would schedule process automation for the next quarter, with a Not-Yet-to-Explorer first move (build one small app, even badly).
Pitfalls
- Self-rating high to feel current. Most people overestimate their level by one. Calibration matters more than score.
- Trying to advance in five areas at once. Three months of distributed effort produces no movement anywhere. One month focused on one area produces a real shift.
- Treating Builder as the goal. Most UN roles do not need Builder-level AI use everywhere. Practitioner in the two areas your role actually depends on is more valuable than Adopter in everything.
- Confusing frequency with capability. Daily light use of ChatGPT does not make you a Practitioner. Depth, judgment, and integration into a workflow are what define the level.
- Skipping the area-by-area breakdown. A single global “I’m an Adopter” hides where the genuine impact is. The whole point is to make the asymmetry visible.
- Treating the assessment as a score rather than a diagnostic. The output is a starting position, not a verdict. Re-take it after each move; the trajectory matters more than any single placement.
When not to use it
When the role’s AI exposure is so low that the capability frontier is a distraction. Some highly specialised technical roles, some roles bound by strict no-AI policies (certain legal or ethics functions), or some roles where the work product is uniquely human.
When you are mid-application and tempted to “improve your level” before submitting. The frontier is a development tool, not an application sprint. Use it before the recruitment cycle, not during it.
A note on the source
The four-level scale (Explorer, Adopter, Practitioner, Builder, plus a Not Yet baseline) is from UNICC’s internally developed AI Skills Shift assessment, built by Matt’s team. The two-axis framing (frequency and depth, surfaced as Users vs Operators) is the speaker’s interpretation of the live assessment results from the session itself (around 200 respondents). The “adjacent move” prescription, and the “Practitioner in two areas beats Adopter in everything” line, are the speaker’s contribution rather than something derived directly from the assessment scoring rubric.
The assessment is openly available at https://ai-skills-shift.lovable.app and produces a personalised action plan. Treat the action plan as input to the steps above, not as a substitute for them.
How I use it
Personal note pending. Davide to fill.
Related frameworks
- AI Use as a Skill, the four-signal framework for how you use AI; the frontier is where you are, the four signals are how you operate at any level.
- Skills-First Approach, the broader stance that the frontier operationalises in the AI dimension.
- Skills Self-Audit, the broader skills inventory; the frontier slots in as the AI-specific column.
- Capability + Outputs + Evidence, the rewrite formula that surfaces the frontier level in application prose.
Notes compiled by Davide Piga. Last updated 2026-05-09.