AI Use as a Skill
Dimension: Capability · Type: Foundation
Four signals panels increasingly look for in AI-assisted work: intentional use, judgment, transparency, appropriateness. The differentiator is no longer whether you used AI; it is how.
Introduced by Matt Valente (Digital Talent Acquisition and Talent Management Lead, UNICC) at The Skills Shift session of the UN Inter-Agency Career Week 2026, on 8 May 2026. Matt anchored the framework in the concrete shift to AI-sandboxed assessments at UNICC, and connected it to the National Bureau of Economic Research finding that job seekers who used AI well were hired 18% more often than those who did not.
The framework
A piece of AI-assisted work is ready when both the work itself and the way you describe it carry all four signals. Run any AI-assisted output through them before you submit, and use the same four to narrate AI work in motivation statements and duty descriptions.
When to use it
- Before submitting an AI-assisted technical test, written assignment, or portfolio sample for a UN role.
- When writing a motivation paragraph or CV bullet that describes AI work, so the description carries the signals a panel is now trained to spot.
- When auditing a piece of AI-assisted work you have produced internally, to decide whether it is portfolio-worthy or still raw.
- As a coaching template when reviewing a colleague’s AI-assisted draft.
The four signals
Intentional use. You chose to use AI deliberately, for a reason you can name in one sentence.
- Weak: “I used Claude for the analysis.”
- Stronger: “I used Claude to surface patterns across 40 partner reports because manual coding would have taken three days and the deadline was 24 hours.”
The signal is that the choice to use AI was intentional, not reflexive. You can articulate what AI gave you that the alternative did not.
Judgment. You exercised judgment on the output, not blind copy-paste. You questioned, refined, corrected, discarded.
- Weak: “Claude generated the brief and I reviewed it.”
- Stronger: “Claude’s first draft over-claimed three findings the data did not support; I rewrote those sections after re-checking the source tables, and adjusted the tone after a peer flagged it as too assertive for the audience.”
The signal is that AI produced a starting point, not an answer. The output reflects your judgment; the AI’s content carries your fingerprints.
Transparency. You can narrate the process. The reader, the panel, or the colleague can see what the AI did, what you did, where the seams are.
- Weak: “Final brief attached.”
- Stronger: “Final brief attached. The structure and first draft were generated by Claude from the prompt below; the data verification, the executive summary, and the recommendation section were rewritten by me after consultation with the country office team.”
In an AI-sandboxed test, transparency is automatic: the panel can see the prompts and iterations. Outside the sandbox, transparency is something you build into the artefact (a note at the end, a paragraph in the cover letter, a line in the duty description).
Appropriateness. AI was the right tool for this task at this scale. Sometimes it is not.
- Weak: “I use AI for everything I can.”
- Stronger: “I use AI for synthesis across long inputs and for structuring drafts under time pressure; I do not use it for confidential casework, for sensitive personnel decisions, or for tasks where writing through the problem is itself the point.”
The signal is that you have a working theory of when AI helps and when it gets in the way, and you can articulate it. This is closely tied to data privacy: appropriateness includes asking “is this content safe to put into this tool?”.
On the shift the signals reflect
The session was direct on what is changing. UNICC and others are running technical tests inside AI sandboxes where the panel sees the prompts, the iterations, and the judgment applied. The “old” test (“write me a policy brief in 45 minutes”) tells the panel very little when AI can produce a passable brief in three minutes. The new test is configured around the four signals: how the candidate prompted, what they ignored or changed in the output, how they narrated the choice, whether they reached for AI in the right places.
This filters back into application materials. Panels now read motivation statements and duty descriptions for the same four signals, not just for the fact that AI was mentioned.
Steps
- Take one AI-assisted output you have produced in the last two weeks (a brief, an analysis, a draft, a piece of code).
- Run the four-signal check (a minimal script version of this check follows these steps):
  - Intentional use: can I name in one sentence why I chose AI here?
  - Judgment: what did I change, override, or reject in the AI’s output, and why?
  - Transparency: could a reviewer reconstruct the process from what I produced?
  - Appropriateness: was AI the right tool for this task at this scale?
- Strengthen the weakest signal. Most artefacts fail on transparency (the seams are invisible) or on judgment (the candidate accepted too much of the first draft).
- Translate the four signals into the application sentence. When you describe the AI work in a motivation statement or duty description, name the tool, the workflow, and the impact, and weave at least two of the four signals into the sentence.
- Use the same four signals to read others’ AI-assisted work. They are useful in panels and in peer review; if a colleague’s draft is uncomfortable to read, one of the four is usually missing.
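For those who want the check mechanical, here is a minimal sketch in Python that encodes the four questions and flags answers too thin to carry a signal. The question wording comes from the steps above; the function name, the word-count threshold, and the pass criterion are illustrative assumptions, and no script substitutes for the judgment behind the answers.

```python
# A minimal sketch of the four-signal check as a self-review script.
# The four questions come from the framework above; everything else
# (function name, word-count threshold, pass criterion) is an
# illustrative assumption, not part of the framework.

SIGNALS = {
    "intentional_use": "Can I name in one sentence why I chose AI here?",
    "judgment": "What did I change, override, or reject in the AI's output, and why?",
    "transparency": "Could a reviewer reconstruct the process from what I produced?",
    "appropriateness": "Was AI the right tool for this task at this scale?",
}

def four_signal_check(answers: dict[str, str], min_words: int = 8) -> list[str]:
    """Return the signals whose answers look too thin to carry weight.

    `min_words` is an arbitrary specificity threshold: an answer that
    cannot fill eight words is probably not a real answer.
    """
    weak = []
    for signal, question in SIGNALS.items():
        answer = answers.get(signal, "").strip()
        if len(answer.split()) < min_words:
            weak.append(f"{signal}: answer looks thin; ask yourself: {question}")
    return weak

if __name__ == "__main__":
    # Example answers for a draft artefact; transparency is left blank
    # on purpose so the check flags it.
    draft = {
        "intentional_use": "Used Claude to synthesise 40 partner reports under a 24-hour deadline.",
        "judgment": "Rewrote three over-claimed findings after re-checking the source tables.",
        "transparency": "",
        "appropriateness": "Synthesis only; no confidential casework went into the tool.",
    }
    for flag in four_signal_check(draft):
        print("WEAK:", flag)
```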
Worked example
A staff member submits an AI-assisted technical test for a programme analyst role: a synthesis across 30 country reports into a regional brief.
Before applying the signals (raw submission):
“Used Claude to summarise the 30 reports and produced a regional brief. See attached.”
After applying the signals:
“Approach: I used Claude as a synthesis aid for the 30 reports because reading each manually would have exceeded the 90-minute window. I prompted it for a thematic synthesis across three angles (delivery delays, stakeholder dynamics, financing risks) and asked for citation back to the source report for each claim. I rejected three of the eight findings because the source citations did not actually support them, rewrote the executive summary to match the regional bureau’s voice, and added a recommendations section that Claude had not produced because it required local political judgment. The full prompt and three iterations are appended below for transparency. I would not have used AI for the recommendations section even if the time had allowed it; the political reading of the situation is the point of the exercise.”
Intentional use: the time constraint and the synthesis breadth are the explicit reasons. Judgment: three findings rejected, executive summary rewritten, recommendations not delegated. Transparency: prompts and iterations appended. Appropriateness: the candidate articulates where AI helped and where it would have been the wrong tool.
In an AI-sandbox assessment, much of this is visible to the panel automatically. The candidate’s job is to make the same signals legible in artefacts that are not in a sandbox: portfolio pieces, motivation paragraphs, duty descriptions on the CV.
Pitfalls
- Treating AI use as a confession. “I used AI” is no longer a hedge or an apology. It is a capability statement, but only when the four signals are present.
- Pasting URLs instead of narrating. A link to a hosted app or a published prompt is not a substitute for the four signals. Recruiters do not click links (partly for phishing reasons). The evidence has to live in the prose.
- Overclaiming intentionality after the fact. If you reached for AI reflexively, do not retrofit a strategic justification. Either the choice was intentional or it was not. Honest “I used it because it was the fastest path” beats invented strategy language.
- Hiding the seams. Transparency does not mean dumping the entire chat log. It means making the human contribution and the AI contribution distinguishable, with enough specificity that a reviewer can interrogate either one.
- Performing appropriateness. “I am cautious about AI for sensitive tasks” is hollow if the rest of the work shows no such care. Appropriateness has to be visible in what you actually did, not asserted.
- Confusing the four signals with four bullets. The four signals should run through one or two sentences, not become a checklist on the page. The reader should sense them, not have to count them.
- Assuming sandbox testing replaces the signals in the artefact. Even in a sandboxed test, the candidate who also narrates intent and judgment in plain language outperforms the one who relies on the panel to read the trace. Outside the sandbox, the four signals must be in the prose.
When not to use it
When the role explicitly forbids AI use in the task (some roster tests, some sensitive policy roles). In that case, the question is whether to use AI at all, not how to narrate it. Read the task instruction carefully and ask the recruiter if it is ambiguous.
When the AI-assisted artefact is a quick internal note that nobody is reading as a capability signal. The four signals are for portfolio-quality work, not for every Slack message.
A note on the source
The four signals are the speaker’s distillation of how UNICC and adjacent UN entities are configuring AI-sandbox technical tests, and how panels are increasingly reading motivation paragraphs and duty descriptions for the same signals. The framing is consistent with the broader literature on responsible AI use in professional settings (responsible AI guidance from OpenAI, Anthropic, Microsoft, Google, and the UN’s own AI governance work all converge on similar dimensions). The specific collapse into four signals, and the prescription that they should run through both the artefact and the candidate’s narration of it, are the speaker’s contribution.
How I use it
Personal note pending. Davide to fill.
Related frameworks
- Capability Frontier, the four-level maturity scale for where you are in AI use; the four signals describe how you operate at any level.
- Capability + Outputs + Evidence, the parent rewrite formula; the four signals are how AI work specifically gets surfaced inside the capability and evidence components.
- Skills-First Approach, the broader stance that frames AI use as a capability in itself, not an extra.
- Skills-in-Use CV Pattern, the pattern for translating AI work into application prose.
Notes compiled by Davide Piga. Last updated 2026-05-09.