
AI Ethical Playbook


AN INTRODUCTION FROM DR. CHRISTIAN STIEGLER

NOW AND THEN. THE ETHICAL IMPLICATIONS OF ARTIFICIAL INTELLIGENCE IN THE PR INDUSTRY

NOW…

In 2023, artificial intelligence (AI) made its foray into mainstream culture thanks to the popularity of tools such as ChatGPT and – of all people – The Beatles. To properly mix their new release ‘Now and Then’, the voice of the late John Lennon was digitally isolated from a 1977 demo tape with the help of machine learning technology.

Leave it to a band that broke up in 1970, two of whose members are no longer alive, to further enhance the popularity of AI and raise questions about its ethical implications. 

Most people seem to have a rough idea of what AI means, depending on the popular-culture narratives that have influenced them – literature, films, games, and now even The Beatles.

In this way, the term ‘AI’ has quickly grown into an all-encompassing and catchy label with a shifting definition, covering many computational fields: neural networks, machine learning, algorithms, data analysis, generative media, robotics, among others. Many of these have been in use for a very long time and already influence us on, for instance, social media, streaming platforms, and search engines.

Hence, the rapid fame of generative AI tools such as ChatGPT and Midjourney – which create new media objects with neural networks trained on vast data sets of existing media objects – is merely the next logical step of automation in the evolution of mass media technologies within popular culture.

Regardless, recent developments in the AI market have surprised many, and left them both hopeful and concerned for the future. 

AND THEN…

 

While efforts such as the EU AI Act may arrive too late to provide sufficient guidelines for future developments, we need to quickly facilitate a discourse on the responsible, ethical and human-centred use of AI technology.

For instance, various companies are already trying to integrate AI into their codes of conduct to enhance data security and human autonomy. In that respect, Milk & Honey’s AI Ethical Playbook is a significant milestone for the PR industry, which is already massively affected by the developments mentioned above – in particular by the field of generative AI. Based on the ICCO Warsaw Principles, the ‘AI Ethical Playbook’ is informed by a comprehensive set of topics ranging from transparency to bias detection, human oversight, and education, ensuring a diverse and interdisciplinary conversation.

The playbook is dynamic by design, built to evolve with future findings. It is guided by a careful and considered approach that underlines that AI – like any other tool – can be used for good and ill, and that its ethics are not a set of pre-existing rules but must be negotiated time and time again, with humanity as a guiding light.

It is my hope that initiatives like these will improve digital literacy to develop a healthier, more inclusive, diverse, sophisticated, and reflective use of AI technology. You may say I’m a dreamer.  

CHRISTIAN STIEGLER - NOVEMBER 2023

PROF. DR. CHRISTIAN STIEGLER IS DIRECTOR OF GUIDING LIGHT - AN INTERNATIONAL ORGANISATION FOR ETHICS AND SUSTAINABILITY IN TECHNOLOGIES. HE WRITES AND SPEAKS EXTENSIVELY ON SUBJECTS SUCH AS XR, AI, TECHNOLOGY ETHICS, THE METAVERSE AND EMERGING TECHNOLOGIES.

DR. CHRISTIAN STIEGLER
EMAIL: CHRISTIANSTIEGLER@GUIDINGLIGHT.EU
WWW.CHRISTIANSTIEGLER.AT

AI: THE HEAVYWEIGHT DEBATE

AN OVERVIEW

ARTIFICIAL INTELLIGENCE (AI) IS ONE OF THE MOST POLARISING ISSUES FACED BY THE PR INDUSTRY – WHICH IS HARDLY SURPRISING, GIVEN IT’S ALSO ONE OF THE MOST POLARISING ISSUES WE FACE AS A SPECIES. 


In the blue corner, we have those who believe AI will revolutionise our ability to communicate – and improve PR for the better, forever. In the red corner, there are those who believe it has the potential to destroy PR as we know it – with dire consequences for the industry, its clients, their customers and wider society.

The two heavyweights have been slugging it out on socials, through blogs and in bylines – yet neither has landed a knockout blow. The reason for the drawn scorecard is likely that the reality sits somewhere between the two.

Like all tools, AI has the potential to be used for good and ill – and it is the decisions we make today that will determine the outcome for PR (and, perhaps, the species!).

Milk & Honey PR has been keeping a careful eye on developments in the AI space – led by our AI Steering Group. Our aim is to ensure that we adopt the technologies and approaches that will benefit our people and clients, while remaining fully informed of the potential short-, medium- and long-term implications.

  • This AI Ethical Playbook does not provide a definitive guide on how to approach AI – not least because every business is different and needs to develop its own approaches. 
 
  • This AI Ethical Playbook does show how Milk & Honey approaches AI – by necessity, it is a dynamic document that will change in step with technologies, industry guidelines and governmental regulation. 
 
We hope that this may provide a useful starting point for other businesses as they navigate a rapidly and radically changing PR landscape. 

TOO GOOD TO BE TRUE?

AN AI FUTURE

AI – particularly generative AI – appears to offer huge benefits to the PR industry. It soothes many of the sector’s pain points. Who, for example, would baulk at adopting technologies that deliver virtually instantaneous content generation and design capabilities that, at first glance, are virtually indistinguishable from those created by humans? Who would refuse a technology that leaps the (sometimes painful) creative and ideation processes in a single bound? Who wouldn’t want to save time, money and effort?

Sounds like a no-brainer, right? As any good PR will tell you, however, if it sounds too good to be true, it probably is.

AI-generated content is only “virtually indistinguishable” from human outputs and, even then, only at “first glance”. The very recent and significant leaps in generative AI technology have, perhaps, masked its shortcomings. As the hype dies down, most people will be able to differentiate between the human and the AI – and this ability to differentiate will prove to be crucial.

Technology is the message

Our ability to differentiate may recede as technologies continue to advance. So, give it a year or two and it’s all good? Not really.

As communicators we are better placed than most to understand the subtle – almost subliminal – messages that our actions broadcast. Think about how we feel when we realise a product we’ve bought is actually fake. Think about how we feel when we discover the ‘person’ we’ve been live chatting to is actually a bot. Think about how we feel when we receive a ‘personal’ letter and notice that the signature has been cut and pasted in. 

Now think about how our clients and their audiences will feel when they read a thought leadership byline, or see a social post, or look at a campaign proposal and realise the ‘thinking’ behind them was actually provided by an algorithm. The overarching message is a lack of care; being taken for granted; even disdain. 

People want to feel valued – engaged as (and by) a human being. No matter how complex the machine, how advanced the algorithm or how cutting-edge the technology, people are the only ‘systems’ that provide human context, understanding and empathy. These, surely, are the fundamentals of good PR?

We need to understand that the technologies we use make up a big part of the message we share – and that message has to be one of respect.

The human differentiator 

If, as an industry, we’re content to remove the human from the equation, we have to think that decision through to its logical conclusion.

While resource, time and cost benefits will be immediate, how long will they last? Clients use PR agencies for their expertise, media savvy and counsel, and to nurture their reputations – harnessing their diversity of thought, audience understanding and creativity. If agencies willingly cast these differentiators aside by letting AI do the work for them, why would a client retain a PR partner? They could easily cut out the middleman and access the technologies directly.

There’s a potential future out there where PR agencies are, at best, relegated to the status of a software hub. There’s a potential future out there where all PR sounds the same – relying on the same algorithms that scrape the same content from the same sources. There’s a future out there where PR becomes meaningless and irrelevant. To avoid these futures we must recognise that AI should be used to support humans, not replace them. Milk & Honey believes in a future where human-led PR is the differentiator – a symbol of excellence, engagement and respect.


AN AI PRESENT

PRO- OR ANTI-AI STANDPOINTS ARE LARGELY MOOT. AI IS HERE, IT’S HERE TO STAY, AND IT CAN BRING HUGE BENEFITS TO THE INDUSTRY.

Milk & Honey’s AI Steering Group has been exploring tools that will help us to help our clients and their customers, providing a human/AI hybrid approach that is always led by the former and supported by the latter. We are deliberately taking our time here – pursuing a considered process – to get it right. 

Work to date has seen us look at AI solutions that can, for example, help us to:

  • Generate initial ideas for design: this type of AI solution can create a starting point, encompassing colour and brand language, to produce a range of visual identities. This would not replace a designer – rather, it would support them to deliver early mock-ups. We see this as speeding up and reducing the cost of the design process – with benefits for the agency, its designers and clients. 
 
  • Content summary: this type of AI solution allows an agency to rapidly summarise large sections of text without employing a significant amount of human resource. This capability can help teams rapidly get to grips with an issue and identify salient points – again reducing agency resource and cost to the client, without compromising on quality. 
 
  • AI transcription tools: an agency pain point is often having to deploy a lot of team resource in, for example, a client meeting or story mining session. The ability to cut down on the number of human attendees will reduce costs to the client – removing the need for a dedicated human note taker brings benefits to both parties. An added benefit of an AI transcription tool is that it doesn’t miss anything being discussed – which humans often understandably do. 
 
  • Writing support tools: the most obvious example of these is ChatGPT, but we would only use this technology in very specific circumstances – where it adds value rather than detracts. These circumstances could include providing initial ideas to a human writer (e.g. to suggest story angles), assisting with research, or supporting the early stages of brainstorming.

AI CONSIDERATIONS, REGULATION AND GUIDANCE

Unintentional breach

It is important to recognise that even the most cautious approach to AI can present hidden dangers. Using it responsibly demands ongoing vigilance. 

One of the most important of these is the unintentional breach of non-disclosure agreements (NDAs). Many AI tools capture and keep the data that users put into them – and this data then, effectively, becomes public property. As such, it can be used in other pieces of AI-generated content for any other customer. Whether intentional or not, the agency may have breached its NDA with the client – with the potential to damage trust, destroy relationships and lead to possible legal action.

As a first and necessary step, an agency should read – and comprehensively understand – the terms and conditions of the AI tools selected. Agencies will need to ensure that T&Cs keep client information confidential and that use doesn’t breach, for example, data protection regulations. It is also critically important that the client is fully informed of the AI tools that the agency uses, both generally, and in specific cases. 

Regulatory compliance

History teaches us that rapid advances in technology almost always outpace regulatory response. This is inevitable: regulators have to carefully consider the implications of any regulations they impose, so knee-jerk reactions are likely to be counterproductive.

EU guidelines (not regulations), for example, date from 2019 – and the AI environment has changed beyond all recognition since then. EU regulations are in the pipeline – first proposed in 2021, with significant amendments in 2023 in an attempt to keep pace. While the EU’s aim was to make this law by the end of 2023, it’s unlikely we’ll see it enacted in the near future.

 

As we await regulation, there is the very real potential for different regulatory approaches in different national and trans-national jurisdictions. A post-Brexit UK, for example, is developing its own legislation, while there are signs that the EU and US approaches are diverging – complicating compliance for PR agencies and their global clients.

As regulation races to catch up, the PR and AI industries need to pursue a ‘Trust by Design’ approach. This is where safeguards are built in at the very earliest design and engineering stages to lessen the risk of misuse – both intentional and unintentional. 

While constant vigilance will still be needed, Trust by Design will help us all to ensure that trust, transparency and responsibility are always the default AI settings. 

Guidance

PR bodies have been quick to fill the vacuum with comprehensive guidance that will help agencies to stay on track through self-regulation. The ICCO, for example, ratified principles for the ethical use of AI in PR in October 2023 – known as the Warsaw Principles. 

While other PR organisations have published similar guidance, Milk & Honey sees the Warsaw Principles as comprehensive and indicative – and they inform our ongoing work in AI:

 

 

  • Transparency, Disclosure, and Authenticity: mandating clear disclosure when generative AI is employed, especially when crafting reality-like content.
 
  • Accuracy, Fact-Checking and Combating Disinformation: highlighting the need for rigorous fact checking, given AI’s potential for disseminating misinformation and producing disinformation. 
 
  • Privacy, Data Protection, and Responsible Sharing: prioritising data protection, compliance and responsible dissemination. 
 
  • Bias Detection, Mitigation, and Inclusivity: advocating for the detection and correction of biases in AI-driven content and the promotion of inclusivity. 
 
  • Intellectual Property, Copyright Compliance, and Media Literacy: stressing respect for intellectual property and copyright laws.
 
  • Human Oversight, Intervention and Collaboration: reinforcing the necessity of human oversight in AI-powered processes. 
 
  • Contextual Understanding, Adaptation, and Personalization: encouraging tailored content approaches for different audience channels.

NEXT STEPS:

Milk & Honey will continue to take a careful and considered approach to AI – informed by our AI Steering Group and the latest guidance and regulations. We will update this AI Ethical Playbook as we do so.

A key focus will be to train our own people so that they can adopt AI support tools with confidence – and this will be augmented with comprehensive policies and procedures. 

Any tool can be used for good or ill. The PR industry now has the opportunity to ensure that AI helps us to connect people and bring them together more effectively than ever before – while always retaining its essential humanity. 

Milk & Honey – December 2023