The struggle to remain up to date with new developments, to identify and stop the spread of harmful information, and to protect against increasingly sophisticated cyber-attacks can feel daunting and unrelenting. For in-house legal teams, though, a range of tools and strategies are available to help regain and retain control.
Misuse or unauthorised disclosure of personal data
In an era of cyber-attacks and large-scale data breaches, approaching data protection can be a stressful prospect for any business.
However, there are strategies available. While businesses do not have privacy rights in the same way that individuals do, they are protected by the law of confidence, meaning an unauthorised public disclosure of information may be actionable. As Amy Bradbury, senior associate in Harbottle & Lewis’ media and information group, states: ‘Having adequate security in place is the first issue – prevention is usually better than a cure’. She advises that companies conduct risk assessments and undertake routine checks of privacy and security settings on all digital platforms, and that businesses which hold personal data should comply at all times with their obligations under data protection legislation. ‘In the unfortunate event that material falls into the wrong hands’, Bradbury says, ‘having complied with the data protection legislation will help any business which is trying to defend itself reputationally for having “lost” personal data.’ Compliance, therefore, is key.
Given the increase in hacking by external parties, Nigel Tait, managing partner of Carter-Ruck and head of the firm’s defamation and media law practice, reports that ‘the cost of insurance against hacking has almost doubled in the past five years’. However, training staff to recognise potential phishing emails can be beneficial, and ‘lots of companies, solicitors and consultants now employ ethical hackers.’ An in-house hacker will test a business’s digital security systems and help to spot any potential weaknesses that can then be fixed. In this way, a company can harness for its defence the very kind of expertise that may threaten it from outside.
In the event that a breach does occur, whether from within the business or without, Tait points to numerous ways to ‘put the genie back in the bottle’. Injunctive relief can be sought against a third party disseminating the information, and relief can also be sought under the Computer Misuse Act 1990 and with the help of the Information Commissioner. Tait says: ‘Google tends to respect the judgments of the English Courts and will act on them fairly quickly to deindex sites.’ Case law on the issue may also provide some reassurance, with the Supreme Court finding that the employer was not vicariously liable for the unauthorised disclosure of data by an employee in WM Morrison Supermarkets plc v Various Claimants [2020] UKSC 12. If it can be shown that a data breach committed by an employee was not within the scope of their employment – regardless of whether their employment gave them the opportunity to commit the breach – then liability does not fall upon the employer.
The dangers of social media and ‘fake news’
The dissemination of ‘fake news’ has been a highly topical issue in the last decade, and businesses will understandably fear the rapid and uncontrollable spread of reputationally harmful posts and tweets over social media. Tait, however, counsels calm. ‘One of the greatest challenges can be to convince the client to do nothing – if something is not proliferating it is better to let it die away, as court proceedings will bring public attention.’ The cost of a libel claim can run into the hundreds of thousands of pounds, and a smaller award in damages can then leave even a successful claimant with a net loss. While larger companies can afford to go to court to defend their reputations, they must be aware of the added roadblock of s.1(2) of the Defamation Act 2013, which requires them to prove they have suffered serious financial loss. What ‘serious financial loss’ actually means is an area relatively unexplored by the courts. In Brett Wilson LLP v Persons Unknown [2015] EWHC 2628 (QB), it was held that seriousness was defined in terms of context, and expected loss was deemed enough to meet the threshold. A large business, however, whose expected loss is only a small proportion of its general profits, could run into difficulties at the hands of s.1(2).
While the laws of defamation are a useful tool, there are other ways for an in-house legal team to counter reputationally harmful material spreading online. As Louise Prince, senior associate at Harbottle & Lewis, suggests: ‘It is also worth checking whether there are any useful policies from the social media provider that can be deployed to get material down.’ Companies may also deploy a smokescreen strategy, pumping large amounts of news about the business onto Google or social media platforms to knock the harmful material down the search lists. Tait points out that maintaining dialogue with Google and other search engines is important, as it may be possible to secure the removal of a webpage without legal action. If a lawsuit does become necessary, malicious falsehood is a savvy option, as there is no need to show harm to reputation, and malice can be proved if it is shown that the maker of a statement was merely reckless as to its truth. If minded to bring a case in defamation, companies may decide to do so strategically. This could involve bringing legal proceedings with the aim of securing an apology, without necessarily having the intention of obtaining damages. Success in this instance can act as a warning shot, demonstrating readiness to litigate and acting as a deterrent to the future spreading of reputationally harmful material.
One major issue with social media, however, is that it is very easy to set up a fake identity online and to disseminate information anonymously. This can make it difficult to pinpoint where any reputationally harmful material is really coming from. Prince suggests: ‘An often successful and effective tool that can be used to unmask persons responsible for wrongdoing who are seeking to hide behind anonymous accounts is to apply to the UK Court for a Norwich Pharmacal Order.’ Such an order will compel an innocent third party who holds information about the actual identity behind an account to disclose this information and so enable proceedings to be brought.
The impact of artificial intelligence
Even more recent than the rise of social media is the dawn of high-functioning and readily accessible artificial intelligence software. While some view AI as an enormous opportunity, a new technological leap forward, others are concerned by its potentially catastrophic impact on jobs, and many even speculate on more existential, dystopian consequences. What is certain is that AI is set to play an ever-increasing part in our lives, and the realm of reputation management will not be exempt. While the risks and impacts of AI and machine learning are, for now, unclear, Prince comments that ‘the use of such modern technologies increases the risk that inaccurate information, which may also be defamatory, may now be more easily spread than before.’ It is certainly true that the ability of programs such as OpenAI’s ChatGPT to generate coherent text with a genuine feel could allow for imitation – for example, a statement readily attributed to someone who never made it. How far this could go is difficult to say, but it is clear that AI can now emulate a reliable human source. One suggested risk is that such programs may help to streamline phishing emails, making them sound credible and far harder to spot as a scam.
Aside from any malicious intent, it is now known that ChatGPT can ‘hallucinate’ – using its training to fill in gaps in the information it has been given. The result is a program capable of presenting incorrect facts as though they were entirely plausible. As such, statements generated by AI may be unintentionally harmful to reputation, with the program producing false information of its own accord. In-house teams should be mindful of such risks and remain abreast of the latest information and guidance as it appears.
When the question of how AI may affect companies’ reputation management is put to ChatGPT itself, it warns: ‘AI algorithms can perpetuate bias and discrimination if not designed with fairness and ethics in mind. Companies that use AI in their reputation management must be careful to ensure that their algorithms do not unfairly target or discriminate against certain groups of people, as this can harm their reputation and lead to public backlash.’ However, it does also point out the abilities of AI to predict data breaches and to monitor social media in real time so that reputationally harmful material can be immediately addressed. AI is not just a possible threat to reputation management, then, but can be used as a tool to assist with it, even if it does say so itself.
Corporate social responsibility
In the current climate, where issues of sustainability and ethical corporate practices are a topic of public discussion, it is important for companies to manage their reputations by remaining transparent about their operations where possible. ‘The courts have pierced the corporate veil’, says Tait, ‘and have shown willingness to allow claims against parent companies regarding the actions of subsidiaries overseas’. Prince advises that governance policies be put in place to ensure that ‘employees and third parties (such as suppliers) know what the policies of the business are and will report any cause for concern immediately’. Acting directly to correct any potentially harmful practices or problems with products can help to avert or mitigate adverse public attention. Employees, business partners and customers are also likely to feel more positive about a company which acts of its own volition to rectify any potentially socially or environmentally irresponsible actions, rather than waiting until its hand is forced by whistleblowing, the press or the courts.
In the digital age, it is important to remember that news about a company will spread quickly and far, and that, without corrective action, it will remain permanently available to read. As Tait states: ‘What’s online is like a tattoo – it is important before making any decision as a business to ask: “What will the newspapers make of this? What will the public make of this? Is this the tattoo I want?”’ Many companies will seek to embark on corporate social responsibility drives to establish a better public profile. Prince warns in-house teams, however, that before doing so they must ‘carry out proper due diligence as the media may, for example, try to look for inconsistencies to show that a business is simply paying lip service to an idea rather than fully endorsing it, which may be a story in itself’. It is crucial, therefore, for public commitments to corporate social responsibility initiatives to be thoroughly and tangibly followed through.