Proper Snacks Keynote

Industry: Food and Beverage Services

Date: 26 February 2026

Consultant: Rob Dixon

Defining Human and Machine Roles at Proper Snacks

Senior leaders and wider team members at Proper Snacks convened in London for a keynote session focused on a practical question facing many organisations: how to define the boundary between human responsibility and AI capability. Delivered by Rob Dixon, the session introduced the Purpose – Execution – Judgement (PEJ) framework as a way to structure that conversation. The outcome was a clearer understanding of how AI can be applied without losing human accountability, particularly as the organisation continues to build its AI literacy.

The session took place at a point where many organisations in the food and beverage sector are moving beyond awareness and into early-stage literacy and exploration. Within the AI Transformation Playbook, this aligns most closely with Stage 3: Organisational AI Literacy, where the priority is developing a shared understanding of how AI works and how it should be used in practice. For Proper Snacks, the session provided a structured way to interpret this stage, focusing not just on tools, but on roles, responsibility and decision-making.

Objectives of the Event

The session was designed to:

  • Introduce a clear framework for human–AI collaboration

  • Help participants understand where AI should be applied and where human input remains critical

  • Build confidence in using AI without compromising judgement or accountability

  • Support the development of a shared language around AI across the organisation

What Happened During the Event

The keynote centred on the PEJ framework, which separates AI-enabled work into three distinct components: Purpose, Execution and Judgement.

Participants explored how Purpose remains a human responsibility, requiring clear intent and direction before any AI interaction begins. Execution was discussed as a shared space, where AI can increase speed and scale, but only when guided effectively. Judgement was positioned as the final human role, where outputs are evaluated, selected and aligned to business context.

The session connected these concepts to everyday work, helping participants move beyond abstract discussions of AI and towards practical application. Rather than focusing on specific tools, the emphasis remained on how decisions are made, how outputs are evaluated and how responsibility is retained.

This framing allowed the audience to consider AI not as a replacement for existing processes, but as a layer that changes how work is structured and how decisions are reached.

Key Insights and Takeaways

Several patterns emerged from the discussion that are relevant beyond a single organisation.

First, many teams are already using AI in execution, often informally, but without a clear structure for purpose or judgement. This creates inconsistency in output quality and makes it harder to scale adoption.

Second, clarity of purpose remains the most common gap. Without a defined objective, AI outputs tend to drift, producing results that are technically correct but misaligned with business needs.

Third, judgement is becoming more visible as a differentiator. As AI increases the volume of possible outputs, the ability to select the right one becomes more important than the ability to generate options.

These observations reinforce the need for organisations to build AI literacy alongside decision-making discipline, rather than treating AI purely as a productivity tool.

Impact

The immediate impact of the session was a shared understanding of how to approach AI use in a structured way. Participants left with a clearer view of where AI can add value and where human input must remain central.

The PEJ framework provided a practical reference point that can be applied across different functions, supporting more consistent use of AI and reducing ambiguity around responsibility.

This clarity is particularly important at the literacy stage of transformation, where organisations are building confidence and forming early habits that will shape later adoption.

What Happens Next

Following the session, the next logical step is to build on this shared understanding through continued AI literacy development and applied experimentation.

Within the AI Transformation Playbook, this progression typically moves from Stage 3 (Organisational AI Literacy) into Stage 4 (AI Application and Experimentation), where teams begin to apply AI to real workflows and test where value can be created.

For Proper Snacks, this would involve moving from conceptual understanding towards structured use, ensuring that Purpose, Execution and Judgement remain aligned as AI adoption increases.

Closing Insight

The question of where human responsibility ends and AI begins is becoming central to how organisations operate. Frameworks such as PEJ provide a practical way to navigate this shift, keeping decision-making grounded while allowing AI to extend execution.

In food and beverage environments where speed, consistency and brand alignment matter, this balance is particularly important. The organisations that manage it well will not only adopt AI more effectively, but will do so in a way that strengthens rather than dilutes their decision-making.

AI transformation in food and beverage begins with shared understanding, structured thinking and the confidence to apply AI responsibly in everyday work.

