Professor Emeritus of Corporate Communication, University of Huddersfield
Big question: how do we ensure that we protect the human freedoms we all cherish? It all goes back to ethics: we set boundaries on AI to preserve our freedoms. The European Union’s AI Act came into force in August 2024, the first transnational piece of legislation on AI, and it raises some interesting questions for public relations practice.
Before starting to draft the Act, the EU decided to create an ethical framework1 to guide the design of the legislation. It’s worth looking at its six ‘ethics by design’ principles and applying them to public relations. Doing so raises a few awkward questions!
The EU AI Act is values-based. Its values correspond to those affirmed in international and national law and are broadly based on doing good, not doing harm, human autonomy, and principles of fairness and justice. I think public relations practice would agree: we’d sign up to that.
These are the six principles, and I’ve used them to raise questions about public relations practice.
Principle 1. Respect for human agency
Human agency focuses on human autonomy, dignity and freedom. It states that AI systems should not unduly limit or manipulate human decision-making or actions. People should remain in control of important decisions and actions affecting their lives, and every individual is seen to have intrinsic worth.
In public relations, one of the reasons we use AI is to discover more and more about our stakeholders, so we can understand their behaviour and seek to influence them. A question we need to ask is whether this data collection, along with the widespread use of behaviour change techniques, respects human agency, choice and autonomy.
Principle 2. Privacy, personal data protection and data governance
This is about the responsible collection, use and protection of personal data. It includes data minimisation (collecting only necessary data), ensuring data accuracy and security, and giving individuals control over their personal information.
I’m guessing most public relations people would agree with this principle, but again data minimisation is a challenge. For example, we know in the world of political communication that thousands of data points are collected about individuals so that campaign messages can be persuasive and personalised. Is this ethical?
Principle 3. Fairness
This aims to prevent discrimination and ensure fair treatment of all individuals and groups. It requires AI systems to avoid bias in their design, data and outcomes. Fairness also means universal accessibility, ensuring AI systems are usable by all.
Turning to the tools public relations uses, there are lots of questions about the data sources involved and whether they are complete and/or biased. But this principle also raises the question of whether the digitally savvy are advantaged: social listening online is great, but what about those people who, for whatever reason, aren’t online? How do we make sure their voice is heard? Governments are moving to ‘digital by default’ communication and services; what are the implications here for access and fairness?
Principle 4. Individual, social and environmental well-being
AI systems should contribute positively to human, societal and environmental welfare. For individuals this means designing systems that enhance the quality of life, respect human rights, and promote physical and mental well-being. On a social level it involves thinking through how AI might affect social structures, democratic processes and cultural norms. Principle 4 also requires that the ecological impact of AI development and deployment is considered.
Questions for public relations practice: what about the idea of sustainable public relations? Using generative AI gobbles up energy. As we produce more and more content, with increasingly sophisticated visuals and sound, what about the sustainability of our work?
Principle 5. Transparency
This refers to the ability to understand and explain how AI systems work, make decisions and use data. It also means open communication about potential risks and benefits, which is crucial for building trust.
In public relations there is probably overall agreement that we should say when we are using AI to generate content. But what about public relations practice itself? It’s often called the dark art: working practices are not very transparent. Off-the-record briefings? Telling the truth but not the whole truth? How would we meet the transparency test?
Principle 6. Accountability and oversight
This principle ensures that those involved in AI development and deployment take responsibility for their systems’ actions and impacts. It means there are clear lines of responsibility, routes for redress when harm is caused, and processes for continuous monitoring and improvement. The principle emphasises the importance of auditability, so that third parties can make independent assessments: developers have to put in place logging and tracing mechanisms to track their systems’ decisions and actions.
It’s interesting to apply this principle to public relations. There are any number of scandals, including the notorious UK Post Office cover-up of a faulty IT system, which led to hundreds of innocent postmasters and postmistresses having their reputations and livelihoods ruined. In the inquiry into the scandal, there were accusations that public relations had been involved in covering up bad actions and denying responsibility, all in the name of ‘reputation management’. How would the practice feel about implementing audit trails on public relations decisions? Would the profession be happy about opening up decision-making to third-party scrutiny?
These six ethical principles used by the EU are there to preserve and safeguard the freedoms of human beings in a world of AI. They are a prompt to public relations practice to update its ethical codes too. We can’t step up to the governance role now open to us if we don’t get our ethical house in order.
References
1. Floridi, L., Cowls, J., Beltrametti, M. et al. AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations. Minds & Machines 28, 689–707 (2018). https://doi.org/10.1007/s11023-018-9482-5
2. CIPR (2024). State of the Profession 2024. London: Chartered Institute of Public Relations. https://cipr.co.uk/common/Uploaded%20files/Policy/State%20of%20Prof/CIPR_State_of_the_Profession_2024.pdf
