Agreement Infrastructure Scan

Role Assessment Module — v0.1

When roles are unclear, the damage is quiet, until it isn't.

Most role problems don't announce themselves. They show up as decisions that take too long, work that falls through the cracks, or capable people who quietly disengage. By the time the problem is visible, the cost is already there, in coordination overhead, missed handovers, and accountability that lives in one person's head instead of the organization's structure.

Role clarity is not a soft topic. It is an infrastructure question. And like any infrastructure, you only notice it when something breaks.

The Agreement Infrastructure Scan (AIS) is a research framework I am developing as part of my doctoral work. It makes coordination quality observable and measurable, starting with roles. The Role Assessment Module is the first instrument in that framework. It does one specific thing: it measures the gap between what a role is supposed to look like and what it actually looks like in practice, as seen by both the person in the role and someone close enough to observe it.

The result is not a score. It is a structured picture of where role agreements are enacted and where they exist only on paper. That picture is useful in conversations about accountability, in succession planning, and in any situation where you need to know whether the organization actually functions the way it thinks it does.

This is a research instrument, not a consulting product. I use it in my own fieldwork; I have tested it, and I am sharing it here freely because the underlying problem is real, and the bar for accessing a rigorous diagnostic tool should not be high. Use it under the terms described in the site's disclaimer and permissions pages. No advisory relationship is created by downloading these files.

More modules are in development, covering how work is actually coordinated across roles, how authority is held and transferred, and where organizations carry structural risks they haven't yet identified. Watch this space.

Three files. One structured conversation.

The module runs as a two-person session. Here is what each file does and what needs to be in place for it to work.

Facilitator Card

A single-page reference card that explains the session to both participants before they start. It covers the rating scale, the five dimensions, the condition codes, and the five steps of the process. Send it ahead of the session; no preparation is required from either participant, but having the logic visible reduces friction in the room.

Works when: Both participants have read it before sitting down together. The facilitator, whoever is running the session, is comfortable holding the process without explaining the theory. This is not a therapy card or an HR feedback form. It is a coordination diagnostic. That framing needs to be set before the session starts.

Assessment Worksheet — Part 1

The working document for the session. The role holder and the assessor each complete the worksheets for all five dimensions independently, grounding every rating not in what they believe or intend, but in something they can actually point to and observe. The worksheet then guides a structured comparison. It produces five ratings, five condition codes, and a pattern summary that feeds directly into Part 2.

Required fields are marked ●. Everything marked ○ is interpretive context, useful orientation, but not needed for the data transfer. The document is intentionally comprehensive; facilitators who want a shorter version can use only the ● fields.

Works when: Both participants are willing to name observable evidence rather than simply assign a number. If either party is not ready to be specific, the ratings will be meaningless. The session also requires genuine independence: both worksheets must be completed before answers are compared. If the two parties discuss first, the instrument's core logic is broken.

Analysis Workbook — Part 2

The Excel companion that receives the outputs from Part 1. Enter the five role-holder ratings, five assessor ratings, and five condition codes into the Input tab; everything else calculates automatically. The workbook produces an enactment gap analysis, a radar chart overlaying both profiles, risk flags by dimension, and a pattern summary that names the dominant agreement debt configuration. The Conservative Risk Signal field at the end is the output you can take into a stakeholder conversation.

Works when: The ratings transferred from Part 1 are grounded in named evidence. Numbers without evidence are not agreement-grade data; the workbook will calculate, but the result will not be meaningful. The workbook is a visualization tool, not an assessment tool. The assessment takes place in the room in Part 1.
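To make the gap logic concrete, here is a minimal sketch of how an enactment gap and per-dimension risk flags could be computed from the two rating profiles. This is an illustration only, not the workbook's actual formulas: the dimension labels, the rating scale, and the flag threshold are all assumptions for the example.

```python
# Hypothetical sketch of a Part 2-style gap calculation.
# Dimension labels (D1..D5), the rating scale, and the threshold
# are illustrative assumptions, not the published workbook logic.

DIMENSIONS = ["D1", "D2", "D3", "D4", "D5"]  # placeholder dimension labels

def enactment_gaps(role_holder: dict, assessor: dict) -> dict:
    """Per-dimension gap between the assessor's and role holder's ratings."""
    return {d: assessor[d] - role_holder[d] for d in DIMENSIONS}

def risk_flags(gaps: dict, threshold: int = 2) -> list:
    """Flag dimensions where the two views diverge beyond the threshold."""
    return [d for d, g in gaps.items() if abs(g) >= threshold]

# Example profiles: two views of the same role diverge on D2 and D5.
role_holder = {"D1": 4, "D2": 3, "D3": 5, "D4": 2, "D5": 4}
assessor    = {"D1": 4, "D2": 1, "D3": 4, "D4": 2, "D5": 1}

gaps = enactment_gaps(role_holder, assessor)
print(risk_flags(gaps))  # → ['D2', 'D5']
```

The design point the sketch captures is that the signal lives in the divergence between the two profiles, not in either set of numbers on its own, which is why the workbook requires both ratings before it can say anything.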
