HALF

From The Total Rewards Wiki

The Human–AI Levelling Framework (HALF): An AI-Ready Global Job Levelling Methodology

DRAFT version 0.1

1. Executive Summary

Framework Purpose and Strategic Value

The Human–AI Levelling Framework (HALF) is a comprehensive, global job levelling methodology designed to provide a strategic foundation for an organization's talent and reward architecture in the age of artificial intelligence. It serves as a systematic and analytical process to determine the relative contribution of all jobs—whether performed by humans, AI, or in collaboration—thereby creating a coherent and defensible internal value hierarchy. The primary purpose of the HALF is to establish a common, transparent language for understanding and assessing value, which is essential for the effective design and administration of critical people programs. These programs include, but are not limited to, strategic workforce planning, talent acquisition, career enablement, learning and development, and, most critically, fair and equitable compensation. By moving beyond simple job classification, the HALF provides the core infrastructure necessary to manage a modern, dynamic, and globally distributed workforce.

Key Differentiators: Integrating AI on a Legally Compliant Core

The principal innovation of the Human–AI Levelling Framework lies in its dual-core architecture, which marries enduring legal principles with the realities of modern work. The framework is built upon the four legally mandated, gender-neutral compensable factors: Skills, Effort, Responsibility, and Working Conditions. This foundation ensures that the methodology is not only robust and objective but also inherently aligned with global pay equity legislation from its inception.

Layered upon this compliant core are two modern, forward-looking differentiators designed to accurately measure the new forms of value created in an AI-augmented workplace. The Automation Stewardship Overlay provides a sophisticated lens for evaluating how a role interacts with intelligent systems, moving beyond basic use to encompass collaboration, orchestration, and architectural design. Concurrently, the integration of a Critical Thinking Dimension within the core factors ensures that the framework explicitly values the non-automatable cognitive skills—such as nuanced judgment, ethical reasoning, and systemic analysis—that are becoming the paramount differentiators of human contribution. This integrated approach contrasts sharply with legacy systems, which often struggle to evaluate technology beyond a simple skill or tool, failing to capture its transformative impact on the very nature of work and responsibility.

Alignment with Global Equal-Value Legislation

The HALF is explicitly designed to meet and exceed the requirements of major international and national pay equity laws. Its architecture directly addresses the core tenets of the European Union's Pay Transparency Directive (2023/970), which mandates the use of objective, gender-neutral criteria to assess work of equal value. Similarly, the framework aligns with the International Labour Organization's (ILO) Equal Remuneration Convention (C100), the foundational global treaty requiring objective job appraisal to ensure equal pay for work of equal value. Furthermore, the four core factors of the HALF mirror the criteria of "skill, effort, responsibility, and working conditions" established by the U.S. Equal Pay Act of 1963, providing a defensible basis for pay decisions in that jurisdiction. By grounding its methodology in these universally recognized legal principles, the HALF provides a globally consistent and legally resilient foundation for all compensation-related decisions.

Core Design Principle

The Human–AI Levelling Framework operates on a single, guiding principle that informs every aspect of its design and application: “As routine cognitive and physical tasks are progressively automated, the primary differentiators of human value shift to critical thinking, contextual understanding, and the effective stewardship of intelligent systems, in addition to core domain expertise.” This principle recognizes that in a world where AI can generate answers, the premium is on the ability to ask the right questions, validate the outputs, and integrate them into a broader strategic and ethical context. This philosophy is embedded throughout the framework's factor definitions, scoring anchors, and innovative overlays, ensuring it remains relevant and effective for the next generation of work.

2. Design Principles and Architecture

Foundational Principles: Objective, Gender-Neutral, and Globally Consistent

The integrity and effectiveness of the Human–AI Levelling Framework are rooted in a set of non-negotiable design principles. These principles ensure that every job evaluation is conducted in a manner that is fair, consistent, and defensible across all functions and geographies.

  1. Objectivity: The framework is designed to evaluate the role and its requirements, not the individual performing it. All assessments are based on the standard of fully acceptable performance, disregarding the incumbent's personal characteristics, performance level, or current compensation. This focus on the job's intrinsic demands is crucial for eliminating subjective bias.
  2. Gender Neutrality: The factors, definitions, and scoring anchors are constructed to be inherently gender-neutral. They focus on the nature of the work to be performed, avoiding language or concepts that may be stereotypically associated with a particular gender. This principle is fundamental to achieving pay equity.
  3. Global Consistency: The framework uses a common set of factors and a universal point scale to evaluate all jobs worldwide. This creates a single, coherent architecture that allows for meaningful comparisons of job value across different business units, functions, and countries, providing a solid foundation for global talent management and reward programs.
  4. Analytical Rigor: The HALF employs a point-factor methodology. This approach, which breaks jobs down into defined factors and assigns numerical points to different levels of contribution, is recognized as a more analytical, detailed, and objective method for supporting pay equity compared to less structured methods like whole-job ranking or slotting.

The Four Pillars of Value: Skills, Effort, Responsibility, Working Conditions

The architecture of the HALF is built upon four foundational pillars. These are not arbitrary constructs but are derived directly from the core principles of major pay equity legislation around the world, including the ILO Convention C100, the EU Pay Transparency Directive, and the U.S. Equal Pay Act. By using these legally recognized compensable factors as its primary structure, the framework ensures its fundamental compliance and defensibility.

The selection of these four pillars is a deliberate architectural choice. An analysis of established, proprietary job evaluation methodologies—such as the Hay Group's "Know-How, Problem Solving, Accountability," Mercer's "Impact, Communication, Innovation, Knowledge," and Aon/Radford's "Knowledge, Problem-Solving, Interaction, Impact"—reveals that their factors are, in essence, customized interpretations and combinations of the four legal pillars. For example, "Know-How" and "Knowledge" are proxies for the legal concept of Skills. "Accountability" and "Impact" are facets of Responsibility. By building the HALF directly on the legal "source code" of equal value, the framework becomes more fundamental, transparent, and universally applicable. This approach allows legacy frameworks to be used as a "translation layer" for external market benchmarking rather than serving as the core internal architecture, thereby simplifying the model while strengthening its legal foundation.

The four pillars are:

  • Skills: The knowledge, competencies, and experience required to perform the role.
  • Effort: The mental, cognitive, and physical exertion required.
  • Responsibility: The accountability for outcomes, resources, and decisions.
  • Working Conditions: The physical and psychosocial environment in which the work is performed.

Innovative Overlays: The Automation Stewardship Tier and Critical Thinking Dimension

To ensure the framework accurately captures the nuances of modern, AI-augmented work, two innovative analytical layers are integrated with the four pillars:

  1. The Automation Stewardship Overlay (A0-A4): This overlay provides a structured way to assess the sophistication of a role's interaction with AI and automation. It creates a spectrum ranging from passively using predefined tools to actively architecting new human-machine systems. This element is not scored separately but acts as a critical lens for evaluating the Skills and Responsibility factors, ensuring that the framework recognizes and values the increasing demand for effective human oversight and direction of intelligent technologies.
  2. The Critical Thinking Dimension: This dimension is not a standalone factor but a meta-skill woven into the behavioral anchors of the Skills, Effort, and Responsibility pillars. As AI automates routine information processing, the human capacity for deep analysis, evaluation, verification, and self-regulated judgment becomes a primary driver of value. The framework explicitly measures the application of these cognitive skills in a work context, ensuring they are a key differentiator in determining job value.

The Dual-Track Architecture: Professional/Expert and Managerial/Leadership Paths

The HALF is explicitly designed to support dual career ladders, a recognized best practice for attracting and retaining high-value technical and subject-matter experts who may not desire or be suited for a traditional management path. This structure provides parallel and equally valued progression opportunities for both individual contributors and people leaders.

A critical feature of the HALF architecture is that both the Professional/Expert track and the Managerial/Leadership track are evaluated using the exact same four factors and point system. This ensures internal equity and a shared understanding of value across the organization. The differentiation between the tracks emerges naturally from the scoring process:

  • Professional / Expertise Track: Roles on this track create value through the depth of their knowledge, analysis, design, or innovation. They typically score highest on the Skills factor (reflecting deep or broad mastery) and the cognitive components of the Effort factor. Their Responsibility is primarily for the quality, originality, and correctness of their outputs.
  • Managerial / Leadership Track: Roles on this track create value through the coordination and orchestration of others (both human and digital agents). They score highest on the Responsibility factor (reflecting the scope of resources, teams, and business outcomes they are accountable for) and the interpersonal components of the Skills factor.

This unified approach demonstrates that an organization values both deep expertise and broad orchestration, allowing a Principal Engineer and a Director of Engineering to be placed at the same job level, with equivalent compensation potential, because their total contribution to the organization is deemed equal, albeit achieved through different means.
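The arithmetic behind this parity can be sketched with purely hypothetical factor profiles; the degree numbers below are invented for illustration and are not prescribed by the framework:

```python
# Hypothetical degree profiles (degree levels 1-5; each degree is worth 5 points).
# The specific numbers are illustrative only.
principal_engineer = {"skills": 5, "effort": 4, "responsibility": 3, "working_conditions": 1}
director = {"skills": 4, "effort": 3, "responsibility": 5, "working_conditions": 1}

def total(profile: dict[str, int]) -> int:
    """Sum the four factor scores for one role."""
    return sum(degree * 5 for degree in profile.values())

# Different factor mixes, equal total contribution:
print(total(principal_engineer), total(director))  # 65 65
```

Here the Principal Engineer scores higher on Skills and Effort, the Director higher on Responsibility, yet both land at the same total and hence the same job level.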

3. Factor Definitions and Scoring Methodology

The core of the Human–AI Levelling Framework is its point-factor evaluation system. Each of the four pillars—Skills, Effort, Responsibility, and Working Conditions—is scored in five degree levels worth 5 to 25 points, for a maximum total of 100 points per job. The following sections provide the modernized definition for each factor and the detailed scoring matrix with behavioral anchors that explicitly incorporate the context of an AI-augmented workplace.
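The point-factor mechanics can be sketched in a few lines; the helper names below are invented for this example, since the framework prescribes the scoring logic but no particular implementation:

```python
# Minimal sketch of HALF point-factor scoring (illustrative helper names).

FACTORS = ("skills", "effort", "responsibility", "working_conditions")
POINTS_PER_DEGREE = 5  # five degrees per factor, worth 5, 10, 15, 20, 25 points

def factor_points(degree: int) -> int:
    """Convert a factor's degree level (1-5) to its point value."""
    if not 1 <= degree <= 5:
        raise ValueError(f"degree must be 1-5, got {degree}")
    return degree * POINTS_PER_DEGREE

def total_score(degrees: dict[str, int]) -> int:
    """Sum the four factor scores into a single job score (max 100)."""
    missing = set(FACTORS) - degrees.keys()
    if missing:
        raise ValueError(f"missing factors: {sorted(missing)}")
    return sum(factor_points(degrees[f]) for f in FACTORS)

# A role rated Skills 4, Effort 3, Responsibility 3, Working Conditions 1:
print(total_score({"skills": 4, "effort": 3,
                   "responsibility": 3, "working_conditions": 1}))  # 55
```

The single shared scale is what makes cross-functional and cross-country comparisons meaningful: any two roles with the same total are, by construction, of equal value.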

Factor 1: Skills

Modern Definition: Consistent with legal definitions, "Skills" encompasses the experience, ability, education, and training required for fully competent performance of the job. The HALF modernizes this definition to recognize that in a contemporary work environment, skills are not merely about possessing knowledge (domain expertise) but also about the ability to interact with, manage, and leverage complex knowledge systems. This includes the technical and cognitive capabilities required to effectively question, guide, validate, and synthesize outputs from AI agents and other intelligent tools. The skill lies in the application of knowledge, whether human- or machine-derived, to solve problems and create value.

Measurement Indicators: The level of skill is measured by its depth (specialization), breadth (range of subjects), and the complexity of its application. This includes interpersonal and communication skills required for influencing and collaborating with others. The Automation Stewardship Overlay (see Section 4) provides a critical lens for assessing the sophistication of technology-related skills.

Table 1: Factor Scoring Matrix – SKILLS

  • Degree 1 – Foundational (5 points): Requires basic literacy, numeracy, and procedural knowledge. Follows clear, step-by-step instructions using standard, predefined tools (e.g., data entry from a provided script). Requires direct supervision.
  • Degree 2 – Applied (10 points): Requires practical knowledge of established procedures and systems within a specific functional area. Applies learned skills to solve routine problems. Can operate common software and AI tools with provided templates and guidance (e.g., running a standard report, using a chatbot with a defined script).
  • Degree 3 – Advanced (15 points): Requires comprehensive knowledge of a technical or professional field. Can analyze and interpret complex information, identify root causes, and propose solutions. Independently uses advanced features of AI tools to generate novel outputs (e.g., crafting complex prompts for a generative AI, configuring a simple automation).
  • Degree 4 – Expert (20 points): Requires deep, authoritative knowledge in a specialized discipline or broad expertise across multiple related fields. Acts as a key technical resource for others. Develops and refines best practices for human-AI collaboration within their domain. Can diagnose and resolve highly complex, non-routine problems.
  • Degree 5 – Pioneer (25 points): Recognized as a leading authority internally and often externally. Pushes the boundaries of existing knowledge, creating new theories, methodologies, or technologies. Designs and architects novel human-AI systems. Defines the strategic application of knowledge to achieve long-term organizational goals.

Factor 2: Effort

Modern Definition: "Effort" is legally defined as the amount of physical or mental exertion needed to perform a job. The HALF places significant emphasis on the cognitive and emotional dimensions of effort. As AI and automation absorb an increasing share of routine, repetitive, and physically demanding tasks, the nature of human effort shifts decisively toward sustained, high-stakes mental exertion. This "invisible" effort includes the deep concentration required for strategic analysis, the cognitive load of managing and synthesizing vast streams of information, the creative energy for innovation and complex problem-solving, and the emotional labor involved in leadership, negotiation, and high-stakes stakeholder management.

Measurement Indicators: Effort is measured by the intensity, duration, and frequency of the required exertion. It considers the complexity of the mental processes involved (e.g., analysis, synthesis, evaluation), the pressure of the environment (e.g., deadlines, consequences of error), and the need for emotional regulation and interpersonal engagement.

Table 2: Factor Scoring Matrix – EFFORT

  • Degree 1 – Low (5 points): Work is primarily procedural and repetitive with limited variation. Mental attention is required for accuracy but tasks are straightforward. Minimal pressure from deadlines or complexity. Physical effort is light (e.g., standard office work).
  • Degree 2 – Moderate (10 points): Work requires frequent periods of concentration to perform varied tasks. Involves regular problem-solving within established guidelines. Must manage multiple tasks and competing short-term deadlines. May involve moderate physical exertion or sustained periods of visual/auditory attention.
  • Degree 3 – Substantial (15 points): Work requires sustained, intense concentration to analyze complex, multi-faceted problems. Involves high-pressure situations with tight deadlines and significant consequences of error. Requires significant mental stamina to synthesize disparate information, often from AI-generated sources, into a coherent strategy or solution.
  • Degree 4 – High (20 points): Work involves prolonged and intense mental exertion to address highly ambiguous, strategic, or novel challenges. Requires deep creative and analytical thinking under significant pressure. Involves managing high-stakes negotiations or resolving complex conflicts, demanding significant emotional regulation and cognitive flexibility.
  • Degree 5 – Extreme (25 points): Work requires exceptional and sustained cognitive and emotional fortitude. Involves making critical, high-impact decisions with incomplete information in volatile environments. Accountable for navigating enterprise-level crises or pioneering entirely new strategic directions, demanding the highest levels of mental resilience and focus.

Factor 3: Responsibility

Modern Definition: Consistent with legal definitions, "Responsibility" is the degree of accountability required in the performance of the job. The HALF expands this concept to encompass not only accountability for final outcomes (e.g., financial results, project delivery) but also for the processes, tools, and systems used to achieve them. In an AI-driven organization, this includes accountability for the ethical deployment of automated systems, the integrity and accuracy of algorithmic outputs, the management of data privacy and security risks, and the overall governance of human-machine decision-making processes.

Measurement Indicators: Responsibility is measured by the scope and impact of the role's decisions and actions. This is assessed through dimensions such as freedom to act (autonomy), the magnitude of resources controlled or influenced (e.g., budget, assets, revenue), and the nature of the impact on the organization's objectives.

Table 3: Factor Scoring Matrix – RESPONSIBILITY

  • Degree 1 – Task-Oriented (5 points): Accountable for the accurate and timely completion of assigned tasks according to established procedures. Impact is limited to the individual's own work. Receives direct instruction and review.
  • Degree 2 – Functional (10 points): Accountable for a defined area of work or a small-scale project. May provide guidance to junior colleagues. Makes operational decisions within established policies. Responsible for the integrity of data or outputs from systems used (e.g., verifying AI-generated reports before distribution).
  • Degree 3 – Managerial / Contributory (15 points): Accountable for the performance of a team, a significant project, or a key business process. Manages budgets and resources. Makes decisions that have a measurable impact on a department's results. Accountable for the effective and ethical use of automation within their team's workflow.
  • Degree 4 – Strategic (20 points): Accountable for the overall results of a major function or business unit. Sets strategic direction and policies. Makes decisions with significant, long-term financial or operational impact. Accountable for the governance and risk management of critical AI systems that impact business outcomes.
  • Degree 5 – Enterprise (25 points): Accountable for the ultimate performance, strategic direction, and ethical integrity of the entire organization or a global business. Makes decisions that shape the future of the enterprise and have a broad impact on stakeholders, the market, and society.

Factor 4: Working Conditions

Modern Definition: This factor covers the physical surroundings and hazards of the job, as required by law. The HALF modernizes this pillar by formally including the psychosocial and cognitive environment as a compensable condition. This acknowledges that in modern knowledge work, significant hazards are not only physical but also psychological. These can include the stress from constant digital surveillance, the cognitive overload from managing incessant information flows, the pressure of an "always-on" culture, and exposure to sensitive, distressing, or harmful digital content.

Measurement Indicators: Working conditions are measured by the frequency and severity of exposure to unpleasant or hazardous elements. This includes physical risks (e.g., chemicals, machinery, extreme temperatures) as well as psychosocial risks (e.g., high-stress environment, exposure to traumatic material, risk of digital harassment).

Table 4: Factor Scoring Matrix – WORKING CONDITIONS

  • Degree 1 – Favorable (5 points): Work is performed in a safe, comfortable, and controlled environment (e.g., a typical office). Minimal exposure to physical or psychosocial hazards.
  • Degree 2 – Moderate Discomfort (10 points): Work involves regular exposure to moderate levels of unpleasant conditions, such as noise, frequent interruptions, or uncomfortable physical positions. May involve some travel or irregular hours. Psychosocial environment may include moderate deadline pressure.
  • Degree 3 – Unfavorable (15 points): Work involves frequent exposure to unfavorable conditions that require special precautions, such as a factory floor, outdoor weather, or a high-stress customer-facing environment. May involve regular exposure to distressing digital content (e.g., content moderation) or high levels of cognitive overload.
  • Degree 4 – Hazardous (20 points): Work involves regular exposure to hazardous conditions with a risk of injury or illness, requiring mandatory safety equipment and procedures (e.g., handling hazardous materials, working at heights). Psychosocial environment involves high risk of exposure to trauma or extreme psychological stress.
  • Degree 5 – Extreme Hazard (25 points): Work involves constant exposure to life-threatening or highly dangerous physical or psychological conditions, where the consequences of a lapse in attention could be severe or fatal. Requires extensive, specialized safety and psychological resilience training.

4. The Automation Stewardship Overlay (A0-A4)

The Automation Stewardship Overlay is a critical, modernizing component of the Human–AI Levelling Framework. It is not a separate, scorable factor. Instead, it serves as an analytical lens through which the core factors of Skills and Responsibility are evaluated. It provides a standardized vocabulary to describe the sophistication and proactivity of a role's interaction with AI, automation, and other intelligent systems. As organizations increasingly deploy these technologies, the ability to effectively manage, direct, and improve them becomes a key differentiator of job value. The A-Tier of a role directly influences the degree level selected in the Skills and Responsibility scoring matrices. For example, a role requiring "Expert" knowledge (Skills Degree 4) at an A3 (Coach) level demonstrates a higher overall contribution than a role requiring the same knowledge at an A0 (User) level.

Defining the Tiers of Human-AI Collaboration

The overlay consists of five distinct tiers, representing a maturity model of human-AI collaboration, from passive consumption to active, systemic creation.

  • A0 – User: The role operates predefined tools and systems as instructed. The primary activity is consumption of AI-driven outputs or following system-guided processes. The focus is on executing tasks within the existing technological framework.
  • A1 – Collaborator: The role actively partners with AI tools to co-create novel outputs. This involves using advanced techniques like sophisticated prompt engineering, chaining multiple tools, and iteratively refining AI-generated content to achieve a goal that is more than the sum of its parts. The focus is on augmenting personal or team productivity.
  • A2 – Orchestrator: The role manages and directs a portfolio of human and digital agents to achieve a complex outcome. This involves delegating tasks to the most appropriate agent (human or AI), integrating their outputs, and managing the overall workflow. The focus is on achieving project or process-level objectives through a hybrid team.
  • A3 – Coach: The role is responsible for training, refining, and improving the performance of AI models and automated systems. This involves providing high-quality feedback, curating training data, defining new rules, and fine-tuning models to enhance their accuracy, relevance, and safety. The focus is on improving the capability of the technology itself.
  • A4 – Architect: The role designs, commissions, and is ultimately accountable for new, integrated human-AI systems to solve fundamental business problems. This involves identifying opportunities for automation, defining system requirements, overseeing development, and managing the ethical and operational risks of deployment. The focus is on creating new organizational capabilities.

Mapping Stewardship to Factor Scoring

The following table provides guidance for evaluators on how a role's Automation Stewardship Tier should influence the scoring of the Skills and Responsibility factors. It connects the abstract concept of stewardship to the concrete mechanics of the job evaluation process.

Table 5: The Automation Stewardship Tier Matrix

  • A0 – User (core activity: Following). Example behaviors: Enters data into a system, runs a pre-built report, follows on-screen prompts from an expert system, uses a company-provided generative AI for basic information retrieval. Impact on scoring: Establishes the baseline score for Skills and Responsibility. The evaluation focuses on the core domain knowledge and task accountability, without significant enhancement from technology interaction.
  • A1 – Collaborator (core activity: Augmenting). Example behaviors: Uses advanced prompt engineering to generate marketing copy, debugs code with an AI assistant, synthesizes research from multiple AI-generated summaries, creates complex data visualizations using AI tools. Impact on scoring: Guides the evaluator to a higher degree within the Skills factor, reflecting the ability to leverage technology to produce higher-quality or more complex outputs than would be possible otherwise. Minor upward influence on Responsibility.
  • A2 – Orchestrator (core activity: Directing). Example behaviors: A project manager assigns data analysis to an AI agent, content creation to a human writer, and quality control to another human, then integrates the results. A team lead manages a hybrid customer service team of humans and chatbots. Impact on scoring: Guides the evaluator to a higher degree within the Responsibility factor, reflecting accountability for the outputs of a mixed team. Also influences the "interpersonal/communication" aspect of the Skills factor.
  • A3 – Coach (core activity: Improving). Example behaviors: An analyst provides feedback on incorrect AI-generated forecasts to retrain the model. A legal expert curates and labels a set of documents to fine-tune a specialized legal AI. A data scientist adjusts model parameters to reduce bias. Impact on scoring: Strongly guides the evaluator to a higher degree within the Skills factor, as it requires deep domain expertise to effectively "teach" the AI. Also increases the Responsibility score due to accountability for the model's performance.
  • A4 – Architect (core activity: Creating). Example behaviors: A product manager defines the requirements for a new automated fraud detection system. An operations leader designs a new workflow that integrates robotic process automation with human decision points. A strategist identifies and champions the business case for a new AI-driven capability. Impact on scoring: Strongly guides the evaluator to the highest degrees of both Skills (requiring systemic and strategic thinking) and Responsibility (accountability for the business impact, risk, and ethics of a new, complex system).
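One way an evaluation tool might operationalize this guidance is as suggested degree floors per tier. The sketch below is one possible interpretation for illustration only; the specific floor values are not part of the framework's normative text, which leaves the final degree to evaluator judgment:

```python
# Illustrative encoding of the stewardship guidance: each tier suggests a
# minimum degree the evaluator should consider for Skills and Responsibility.
# The floor values chosen here are an assumption for this sketch.
STEWARDSHIP_FLOORS = {
    # tier: (min Skills degree, min Responsibility degree)
    "A0": (1, 1),  # User: baseline; no enhancement from technology interaction
    "A1": (2, 1),  # Collaborator: upward influence mainly on Skills
    "A2": (2, 3),  # Orchestrator: upward influence mainly on Responsibility
    "A3": (4, 3),  # Coach: deep domain expertise plus model accountability
    "A4": (4, 4),  # Architect: highest degrees of both factors
}

def apply_overlay(tier: str, skills_degree: int,
                  responsibility_degree: int) -> tuple[int, int]:
    """Raise the evaluator's provisional degrees to the tier's suggested floors."""
    min_s, min_r = STEWARDSHIP_FLOORS[tier]
    return max(skills_degree, min_s), max(responsibility_degree, min_r)

# An A3 (Coach) role provisionally rated Skills 3 / Responsibility 2:
print(apply_overlay("A3", 3, 2))  # (4, 3)
```

Note that the overlay only nudges degrees upward, consistent with its role as a lens on the core factors rather than a separately scored fifth factor.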

5. Critical Thinking as a Core Competency

In the Human–AI Levelling Framework, Critical Thinking is not treated as a standalone, scorable factor. To do so would be to isolate it from the context in which it is applied. Instead, it is recognized as a foundational meta-skill—a core competency that is embedded within the behavioral anchors of the Skills, Effort, and Responsibility factors. The increasing prevalence of AI makes critical thinking more, not less, important. As AI systems handle the "what" (information retrieval, data processing, content generation), the premium on human value shifts to the "so what" and "now what," which are the domains of critical thought.

The framework integrates a comprehensive model of critical thinking, with a particular emphasis on three components that are most vital in an AI-augmented environment:

  1. Questioning: The ability to formulate precise, insightful, and challenging questions to probe for deeper understanding, uncover hidden assumptions, and effectively guide AI tools.
  2. Verification and Tracking: The discipline of scrutinizing AI-generated outputs for accuracy, relevance, and bias. This includes cross-referencing sources, identifying "hallucinations," and maintaining an audit trail of the reasoning process.
  3. Self-Regulation (Metacognition): The capacity for introspection—monitoring one's own thinking for cognitive biases (like confirmation bias or automation bias), assessing the limits of one's own knowledge, and adjusting one's judgment and strategy accordingly.

Behavioral Anchors for Critical Thinking by Level

The expectation for critical thinking capabilities matures significantly as a role progresses through the job levels. This progression is a key differentiator between junior and senior roles and is explicitly built into the scoring logic of the framework. The following examples illustrate how critical thinking is integrated into the behavioral anchors across different levels:

  • Foundational Levels (L1-L3): At these levels, critical thinking is focused on immediate tasks and outputs.
    • Skills Anchor Example: "Identifies and flags obvious errors or inconsistencies in data provided by standard reporting tools or AI assistants (demonstrates basic Verification)."
    • Responsibility Anchor Example: "Is accountable for escalating issues or outputs that do not seem correct based on established procedures (demonstrates emerging Self-Regulation)."
  • Professional Levels (L4-L6): At these levels, critical thinking becomes more analytical and proactive, moving from error-spotting to questioning underlying assumptions.
    • Skills Anchor Example: "Analyzes and synthesizes information from multiple sources, including AI-generated reports, to identify underlying trends and relationships. Formulates clarifying questions to refine AI queries and improve output quality (demonstrates Analysis and Questioning)."
    • Effort Anchor Example: "Requires sustained mental concentration to evaluate the logic and evidence supporting an AI's recommendation before acting on it."
    • Responsibility Anchor Example: "Is accountable for the validity of the analysis presented, including a justification of the methods and sources used, both human and machine (demonstrates Explanation)."
  • Expert/Leadership Levels (L7-L10): At the highest levels, critical thinking is systemic, strategic, and highly self-aware. It involves assessing not just the output, but the entire system and its second-order consequences.
    • Skills Anchor Example: "Pioneers novel analytical frameworks that integrate human expertise and machine intelligence, defining the critical questions the organization should be asking (demonstrates advanced Questioning and Inference)."
    • Responsibility Anchor Example: "Makes high-stakes strategic decisions based on AI-driven insights after rigorously evaluating the model's limitations, potential biases, and second-order ethical and business consequences. Is accountable for mitigating systemic risks associated with the use of automated decision systems (demonstrates advanced Evaluation and Self-Regulation)."

By embedding these graduated expectations directly into the factor definitions, the HALF ensures that critical thinking is not just an abstract ideal but a measurable and compensable component of a job's value.

6. The Global Job Level Matrix (L0-L10)

The Global Job Level Matrix is the culminating output of the Human–AI Levelling Framework. It translates the analytical point scores derived from the factor evaluations into a clear, consistent, and transparent hierarchy of job levels that is applicable across the entire organization. This matrix serves as the master reference guide for career pathing, compensation structuring, and talent management. It provides employees and managers with a clear understanding of what is expected at each level of progression, regardless of their function or career track. The total point score for a job is calculated by summing the points from the four factors: Skills, Effort, Responsibility, and Working Conditions. This total score then maps to a specific level within the matrix.

Table 6: The Global Job Level Matrix (L0-L10)

Level L0: Intern / Trainee
  • Indicative Scoring Range: N/A (fixed term)
  • Scope of Role: Learning-focused; performs assigned tasks under very close supervision to gain practical experience.
  • Typical Responsibilities: Assists with basic, routine tasks; learns departmental procedures and tools; conducts supervised research.
  • Expected Automation Tier Range: A0
  • Example Titles: Intern

Level L1: Associate
  • Indicative Scoring Range: 20–34
  • Scope of Role: Performs routine, transactional, or operational tasks within well-defined procedures. Work is regularly reviewed.
  • Typical Responsibilities: Follows step-by-step instructions; performs data entry and runs standard reports; responds to basic internal/external queries.
  • Expected Automation Tier Range: A0
  • Example Titles: Associate Analyst, Junior Technician, Administrative Assistant

Level L2: Professional
  • Indicative Scoring Range: 35–49
  • Scope of Role: Applies learned skills and knowledge to complete a variety of tasks. Works with some independence on routine assignments; requires guidance on new or complex issues.
  • Typical Responsibilities: Solves common problems using existing procedures; manages personal workload to meet deadlines; contributes to team projects as a member.
  • Expected Automation Tier Range: A0–A1
  • Example Titles: Analyst, Engineer I, HR Generalist, Staff Accountant

Level L3: Senior Professional
  • Indicative Scoring Range: 50–64
  • Scope of Role: Possesses comprehensive knowledge of a professional discipline. Works independently on complex assignments. May provide informal guidance to junior colleagues.
  • Typical Responsibilities: Analyzes complex issues and proposes solutions; manages small projects or significant parts of larger ones; uses AI tools to augment analysis and content creation.
  • Expected Automation Tier Range: A1–A2
  • Example Titles: Senior Analyst, Engineer II, Senior Accountant, IT Business Partner

Level L4: Principal / Team Lead
  • Indicative Scoring Range: 65–79
  • Scope of Role: Recognized as a subject matter expert. Handles the most complex, non-routine work. Influences others within the function. May lead small project teams or act as a formal team lead.
  • Typical Responsibilities: Develops new approaches and processes; mentors and trains other professionals; manages hybrid human-AI workflows for a team.
  • Expected Automation Tier Range: A1–A3
  • Example Titles: Principal Data Scientist, Lead Engineer, Finance Manager, Team Lead

Level L5: Senior Principal / Manager
  • Indicative Scoring Range: 80–94
  • Scope of Role: Has deep expertise in a specialized area or manages a team of professionals. Accountable for the results of a team or a significant functional program.
  • Typical Responsibilities: Manages team performance and development; controls departmental budget and resources; defines best practices for using AI in their domain.
  • Expected Automation Tier Range: A2–A3
  • Example Titles: Senior Principal Engineer, Research Fellow, Engineering Manager, Senior HR Manager

Level L6: Director
  • Indicative Scoring Range: 95–109
  • Scope of Role: Manages a major function or department, often through subordinate managers. Sets operational plans and policies with a medium-term impact on business results.
  • Typical Responsibilities: Directs the activities of a significant business area; develops functional strategy and budget; accountable for the performance of automated systems.
  • Expected Automation Tier Range: A2–A4
  • Example Titles: Distinguished Engineer, Director of Engineering, Finance Director

Level L7: Senior Director
  • Indicative Scoring Range: 110–124
  • Scope of Role: Directs a large, complex function or multiple departments. Has significant strategic and financial accountability. Influences business unit or regional strategy.
  • Typical Responsibilities: Sets strategic direction for a major discipline; makes decisions with long-term impact; architects new human-AI capabilities for the function.
  • Expected Automation Tier Range: A3–A4
  • Example Titles: Senior Distinguished Engineer, Senior Director of Product, Regional General Manager

Level L8: Vice President
  • Indicative Scoring Range: 125–139
  • Scope of Role: Leads a major global function or a small business unit. A key member of the senior leadership team. Accountable for enterprise-wide strategy and results.
  • Typical Responsibilities: Establishes enterprise-wide policies and strategies; manages significant financial and human capital resources; governs the ethical use of AI across the enterprise.
  • Expected Automation Tier Range: A4
  • Example Titles: VP of Engineering, Chief Technology Officer (in smaller firms), General Manager

Level L9: Senior / Executive VP
  • Indicative Scoring Range: 140–154
  • Scope of Role: Leads a critical, global enterprise function or a large, complex business unit. Reports to the CEO and has ultimate accountability for a significant portion of the company's success.
  • Typical Responsibilities: Shapes the long-term vision of the company; manages enterprise-level risk and opportunity; represents the company to external stakeholders.
  • Expected Automation Tier Range: A4
  • Example Titles: Executive Vice President of a major business line, Chief Financial Officer

Level L10: C-Suite
  • Indicative Scoring Range: 155+
  • Scope of Role: Sets the ultimate vision, strategy, and direction for the entire organization. Accountable to the Board of Directors and shareholders for the company's overall performance.
  • Typical Responsibilities: Defines corporate mission and values; ensures long-term viability and sustainability; manages the overall enterprise ecosystem.
  • Expected Automation Tier Range: A4
  • Example Titles: Chief Executive Officer, Chief Operating Officer

7. Applying the Dual Career Tracks

The dual-track architecture is a cornerstone of the Human–AI Levelling Framework, designed to provide equitable and parallel career progression for employees on both the Professional/Expert and Managerial/Leadership paths. Both tracks are measured against the same four compensable factors, ensuring that the organization's definition of "value" is applied consistently. The distinction between the tracks lies not in the factors themselves, but in how a role accumulates points across them, reflecting different but equally important contributions to the organization's success.

Valuing Depth vs. Breadth: The Professional/Expert Track

The Professional/Expert track is designed for individuals who create value through deep subject matter expertise, innovation, analysis, and the creation of intellectual property. Their career progression is marked by an increasing depth or breadth of knowledge and a growing influence on the organization's technical or strategic direction, without the requirement of formal people management.

A high-level role on this track, such as a Principal or Distinguished Engineer, achieves a high total score primarily through exceptional ratings in the Skills factor. This reflects their mastery of a complex domain and their ability to solve problems that are beyond the capabilities of others. They also score highly on the cognitive aspects of the Effort factor, due to the intense concentration and creative synthesis required for their work. Their Responsibility is significant but is focused on the impact and integrity of their technical contributions, designs, or strategic advice rather than on managing people or large budgets.

Valuing Orchestration: The Managerial/Leadership Track

The Managerial/Leadership track is for individuals who create value by setting direction, allocating resources, and developing the capabilities of others—both human and digital. Their progression is marked by an expanding scope of influence and accountability, leading larger and more complex teams, functions, or business units.

An equivalent high-level role on this track, such as a Director or Vice President, achieves a high total score primarily through exceptional ratings in the Responsibility factor. This reflects their accountability for significant business outcomes, financial results, and the performance of a large number of people. They also score highly on the interpersonal and strategic aspects of the Skills factor, such as communication, influence, and organizational planning. While they must possess sufficient domain knowledge to lead effectively, their technical depth may be less than that of their expert-track counterpart at the same level.

Illustrative Scoring Scenarios

To demonstrate how the framework equitably values these different contributions, consider two distinct roles—a Principal Scientist and an Engineering Manager—both evaluated at Level 5 (L5), which has a scoring range of 80-94 points.

  • Skills: Principal Scientist 30; Engineering Manager 24. The Scientist scores higher for deep, near-pioneer-level technical expertise; the Manager combines advanced domain knowledge with strong people leadership skills.
  • Effort: Principal Scientist 24; Engineering Manager 24. Both roles require high levels of sustained cognitive exertion, for complex problem-solving and strategic planning respectively.
  • Responsibility: Principal Scientist 24; Engineering Manager 30. The Manager scores higher due to direct accountability for a team's performance, budget, and delivery; the Scientist carries significant responsibility for project and program impact.
  • Working Conditions: Principal Scientist 7; Engineering Manager 7. Both roles operate in a standard, favorable office environment.
  • Total Score: Principal Scientist 85; Engineering Manager 85. Both totals fall within the L5 range (80–94 points), demonstrating how different contributions can be of equal value to the organization.

This example demonstrates how two distinct roles can be of equal value to the organization, thereby justifying their placement at the same job level and within the same compensation band.
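The equal-value logic can also be checked mechanically. The sketch below uses two illustrative factor profiles (the specific numbers are assumptions made for the example, not prescribed scores) and confirms that both sum to the same total inside the L5 band of 80–94 points.

```python
# Two different factor profiles of notionally equal value; the numbers
# are illustrative assumptions, not mandated scores.
scientist = {"skills": 30, "effort": 24, "responsibility": 24, "working_conditions": 7}
manager   = {"skills": 24, "effort": 24, "responsibility": 30, "working_conditions": 7}

def in_l5_band(profile: dict) -> bool:
    """True when the summed factor scores land in the L5 range (80-94)."""
    return 80 <= sum(profile.values()) <= 94

# Different shapes, identical totals: both roles count as work of equal value.
print(sum(scientist.values()), sum(manager.values()))
print(in_l5_band(scientist), in_l5_band(manager))
```

The expert profile is Skills-heavy and the managerial profile Responsibility-heavy, yet both land on the same total, which is exactly the equivalence the dual-track design is meant to guarantee.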

8. Implementation Toolkit

To ensure the consistent and effective application of the Human–AI Levelling Framework, the following tools and guidelines are provided for use by Human Resources professionals, compensation analysts, and business leaders.

Job Evaluation Scoring Template

This template should be completed for every role evaluation to ensure all factors are considered and a clear audit trail is maintained.


Human–AI Levelling Framework – Job Evaluation Scoring Sheet

  • Role ID: _______________
  • Job Title: _______________
  • Job Family: _______________
  • Evaluator(s): _______________
  • Date: _______________

Final Assessment:

  • Level: L__
  • Automation Tier: A__

Factor Scoring:

  • Skills: ______
  • Effort: ______
  • Responsibility: ______
  • Working Conditions: ______
  • Total Score (sum of the four factor scores): ______

Justification Notes (Briefly explain the reasoning for each score, referencing the behavioral anchors):

  • Skills:
  • Effort:
  • Responsibility:
  • Working Conditions:

Additional Contextual Data:

  • Critical Thinking Level (Low/Med/High): _______________
  • % Tasks Automated (Now / 2y / 5y): ____% / ____% / ____%
  • Human-in-the-Loop Criticality (0–3): ___ (0=None, 1=Review, 2=Intervention, 3=Constant Oversight)
  • Risk Tier (Operational, Financial, Reputational): Low / Med / High
  • Data Sensitivity (Public/Internal/Regulated): _______________
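The paper form above can be mirrored as a small data structure so that completed evaluations are machine-checkable. The class name, field names, and validation rules below are illustrative assumptions for the sketch, not part of the published template.

```python
from dataclasses import dataclass, field

@dataclass
class EvaluationRecord:
    """Hypothetical in-code mirror of the HALF scoring sheet."""
    role_id: str
    job_title: str
    skills: int
    effort: int
    responsibility: int
    working_conditions: int
    hitl_criticality: int          # 0=None, 1=Review, 2=Intervention, 3=Constant Oversight
    risk_tier: str                 # "Low" / "Med" / "High"
    justification: dict = field(default_factory=dict)  # per-factor notes

    @property
    def total_score(self) -> int:
        return self.skills + self.effort + self.responsibility + self.working_conditions

    def validate(self) -> list:
        """Basic audit-trail checks before the record is filed."""
        errors = []
        if not 0 <= self.hitl_criticality <= 3:
            errors.append("Human-in-the-loop criticality must be 0-3")
        if self.risk_tier not in {"Low", "Med", "High"}:
            errors.append("Risk tier must be Low, Med, or High")
        for factor in ("skills", "effort", "responsibility", "working_conditions"):
            if factor not in self.justification:
                errors.append(f"Missing justification note for {factor}")
        return errors

rec = EvaluationRecord(
    "R-001", "Senior Analyst", 16, 14, 13, 5,
    hitl_criticality=1, risk_tier="Low",
    justification={"skills": "...", "effort": "...",
                   "responsibility": "...", "working_conditions": "..."},
)
print(rec.total_score, rec.validate())
```

Requiring a justification note per factor enforces the audit trail the template calls for; an empty error list means the record is complete.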

9. Alignment with Global Equal-Value Legislation

The Human–AI Levelling Framework is architected with global legal compliance as a core design principle. Its structure, factors, and methodology directly align with the foundational requirements of major pay equity legislation, providing a robust and defensible basis for compensation decisions worldwide.

Mapping to EU Directive 2023/970

The EU Pay Transparency Directive requires employers to establish and maintain pay structures based on objective, gender-neutral criteria to ensure equal pay for equal work or work of equal value. The HALF directly satisfies this mandate:

  • Objective, Gender-Neutral Criteria: The framework's four pillars—Skills, Effort, Responsibility, and Working Conditions—are the explicit criteria cited within the directive's supporting principles for assessing the value of work.
  • Transparency and Justification: The point-factor methodology generates a quantitative, documented output for every job evaluation. This provides the objective evidence required to justify any pay differentials between roles that are determined to be of equal value.
  • Comparability of Dissimilar Jobs: The framework's ability to assign a value score to any job, regardless of its function, allows for the comparison of dissimilar jobs (e.g., comparing a role in a female-dominated function like HR to one in a male-dominated function like Engineering), which is a central requirement for identifying and remedying systemic pay gaps.

Satisfying ILO C100 Principles

The International Labour Organization's Convention C100 is the global standard for equal remuneration. Its central tenet, outlined in Article 3, is the promotion of an "objective appraisal of jobs on the basis of the work to be performed". The HALF is a direct implementation of this principle:

  • Objective Appraisal: The framework's structured methodology, detailed factor definitions, and graduated behavioral anchors provide the mechanism for a systematic and objective appraisal of every role.
  • Focus on Work Performed: The entire evaluation process is focused on the requirements of the job itself—what it takes to perform the work competently—entirely separate from the characteristics of the person holding the job.

Parallels with the U.S. Equal Pay Act

The U.S. Equal Pay Act of 1963 (EPA) prohibits sex-based wage discrimination for jobs that require "equal skill, effort, and responsibility, and which are performed under similar working conditions". The four pillars of the HALF are a direct reflection of these legally defined factors:

  • Direct Factor Alignment: The framework's core factors—Skills, Effort, Responsibility, and Working Conditions—map directly to the language of the EPA statute.
  • Substantially Equal, Not Identical: The methodology allows for a holistic assessment of job content. This enables evaluators to determine if two jobs are "substantially equal" even if their tasks are not identical, which is consistent with how U.S. courts interpret the EPA.

Guidance for Joint Pay Assessments

Under the EU Directive, if a company's pay reporting reveals an average gender pay gap of 5% or more within any category of workers that cannot be justified by objective, gender-neutral factors, a "joint pay assessment" with employee representatives is mandatory. The HALF provides the ideal analytical tool for conducting this assessment:

  • Data-Driven Analysis: The framework provides a point score for each of the four factors for every job. In a joint assessment, these scores can be used to compare roles and identify where value is deemed equivalent.
  • Identifying Work of Equal Value: If two jobs have similar total point scores, they can be considered "work of equal value" under the framework, even if they reside in different functions. The committee can then analyze the pay levels for these jobs.
  • Objective Justification: If a pay difference exists between two jobs of equal value, the factor scores can help determine if there is an objective justification. For example, a higher score on the "Working Conditions" factor for one job could be an objective, gender-neutral reason for a pay differential, such as a hazard pay premium. The detailed, documented output of the HALF provides the transparent, factual basis required for a productive and compliant joint pay assessment.
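The joint-assessment workflow described above lends itself to a simple script. The sketch below is illustrative only: the sample data, the five-point "equal value" band width, and the function names are assumptions made for the example; the 5% trigger comes from the Directive.

```python
from statistics import mean

# Minimal sample data: (job title, total HALF score, gender, annual pay).
employees = [
    ("HR Business Partner", 62, "F", 58_000),
    ("HR Business Partner", 62, "F", 60_000),
    ("Engineer II",         63, "M", 66_000),
    ("Engineer II",         63, "M", 64_000),
]

def equal_value_category(score: int, band_width: int = 5) -> int:
    """Bucket total scores into coarse work-of-equal-value categories."""
    return score // band_width

def pay_gap_by_category(rows) -> dict:
    """Average gender pay gap per equal-value category (share of men's mean pay)."""
    gaps = {}
    for cat in {equal_value_category(s) for _, s, _, _ in rows}:
        men   = [p for _, s, g, p in rows if equal_value_category(s) == cat and g == "M"]
        women = [p for _, s, g, p in rows if equal_value_category(s) == cat and g == "F"]
        if men and women:
            gaps[cat] = (mean(men) - mean(women)) / mean(men)
    return gaps

for cat, gap in pay_gap_by_category(employees).items():
    flag = "joint pay assessment required" if gap >= 0.05 else "within threshold"
    print(f"category {cat}: gap {gap:.1%} -> {flag}")
```

Here the HR and engineering roles score within a few points of each other, so they fall into the same category; the resulting gap exceeds the 5% threshold and would trigger a joint pay assessment unless objectively justified.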