AI – Directors’ Duties
Introduction

As Artificial Intelligence (AI) becomes increasingly prevalent across industries in Australia, directors and officers face heightened risks and responsibilities. Over two-thirds of Australian businesses currently use, or plan to use, AI in the course of business[1]. With AI now integrated into many business operations, such as the Commonwealth Bank of Australia’s use of AI to detect suspicious banking activity[2], directors must be vigilant to ensure they understand how AI is being implemented in their companies’ businesses, and the effects that may flow from that use.

Although Australia currently has no AI-specific legislation, existing regulatory frameworks arguably govern the scope of company directors’ duties with respect to AI use, and directors must therefore give those frameworks due attention to ensure they comply with their duties.

What is AI?

AI refers to the simulation of human intelligence in machines that are programmed to think and learn. AI systems can analyse data, recognise patterns, and make decisions with minimal human intervention. In business, AI may be used to automate tasks such as data entry and analysis, provide personalised recommendations based on user behaviour, improve inventory management through predictive analytics, and identify unusual patterns in transactions to prevent fraud.
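
To make the last of those uses concrete, the sketch below is a minimal, purely illustrative example of transaction screening: it flags amounts that sit unusually far from the average. The sample figures and the 2.5 standard deviation threshold are assumptions invented for the example; real systems, such as the bank fraud detection mentioned above, use far more sophisticated models.

```python
# Illustrative sketch only: flag transaction amounts that deviate
# sharply from the mean. The data and threshold are assumptions,
# not any bank's actual fraud model.
from statistics import mean, stdev

def flag_unusual(amounts: list[float], z_threshold: float = 2.5) -> list[float]:
    """Return amounts more than z_threshold standard deviations from the mean."""
    mu, sigma = mean(amounts), stdev(amounts)
    return [a for a in amounts if abs(a - mu) > z_threshold * sigma]

transactions = [42.0, 18.5, 63.2, 25.0, 9800.0, 31.7, 55.1, 12.3, 47.9, 28.4]
print(flag_unusual(transactions))  # the 9800.0 outlier is flagged for human review
```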

While hugely powerful and useful tools, AI systems can pose real risks to individuals, groups, and society at large in areas such as human rights, health and safety, and the law, and the magnitude of these potential negative effects needs to be considered.

One way for company directors to identify and manage risk is to ensure that any use of AI complies with the existing statutory regime governing directors’ duties.

The risks: Directors’ Duties under the Corporations Act

Chapter 2D of the Corporations Act 2001 (Cth) (Act) imposes legal obligations on company directors and officers. According to ASIC, these duties likely extend to a company’s use of AI and related cybersecurity governance[3]. Two key provisions of the Act, sections 180 and 181, are particularly relevant to directors in ensuring that any use of AI complies with their legal obligations.

Section 180 requires directors to exercise reasonable care and diligence in the performance of their duties, while section 181 requires that they act in good faith in the best interests of the company. This includes considering the interests of stakeholders and maintaining the company’s corporate reputation. Effectively managing reputational risk is especially important when assessing the company’s long-term interests, as reputation is closely tied to shareholder value[4].

What are the risks?

The greatest risk for directors arising from a company’s use of AI is a failure to understand how the AI systems work. If, for instance, AI generates unreliable information on which a director relies, it could lead to poor corporate decision-making. This may put the director at risk of breaching their duties under the Act, which demands the exercise of reasonable care and diligence.[5]

In Australian Securities and Investments Commission v Cassimatis (No 8) (‘Cassimatis’)[6], Edelman J clarified that harm is not limited to financial damage alone but can also encompass reputational damage. Companies using AI systems should therefore have proper measures in place to manage the associated risks, including any reputational damage that might accrue to a company by virtue of its use of AI.[7] Such measures may include appropriate monitoring of AI systems and regular checks on the integrity of the data being produced.

Directors may also risk contravening their duties if they allow the implementation of AI systems that do not serve the company’s best interests. The Act arguably accommodates evolving governance expectations, including AI governance, and requires that any implementation of AI be consistent with the best interests of the company. Directors must therefore evaluate how AI is integrated into their business model and its potential adverse effects on stakeholders, such as customers, and should implement appropriate organisational AI governance frameworks to address these risks effectively.[8]

Ensuring Effective AI Governance

Good AI governance has the potential to accelerate the growth of a company, and directors should effectively oversee the implementation of a sound AI corporate governance framework within their company[9]. First, directors should set guidelines as to what the company considers to be AI, and how AI is to operate within the business. Directors should also be aware of the regulatory landscape surrounding AI globally[10] so that any prospective compliance obligations are identified. For example, the European Parliament has made it obligatory for users of high-risk AI systems to maintain use logs and to ensure human oversight[11]. To discharge their duties, directors must ensure that practices are implemented with a view to minimising harm to the company[12], and this can arguably be extended to a company’s use of AI. This includes an ongoing obligation to review and update the governance framework on a regular basis[13]: as AI and the surrounding legal and regulatory landscape are constantly changing, a company’s governance framework may need to adapt to reflect those changes.
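
The two European obligations just mentioned, use logs and human oversight, lend themselves to simple technical controls. The sketch below is a minimal illustration of both, not a compliance template: the `model_predict` stand-in, the log format, and the 0.9 confidence threshold for escalating a decision to a human reviewer are all assumptions invented for the example.

```python
# Illustrative sketch: record every AI-assisted decision in an
# append-only use log, and route low-confidence outputs to a human.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(filename="ai_use_log.jsonl", level=logging.INFO, format="%(message)s")

def model_predict(application: dict) -> tuple[str, float]:
    """Stand-in for a real AI model; returns (decision, confidence)."""
    return ("approve", 0.72)  # placeholder output for the example

def ai_assisted_decision(application: dict, reviewer: str) -> str:
    decision, confidence = model_predict(application)
    needs_human = confidence < 0.9  # assumed threshold for human oversight
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "input": application,
        "model_output": decision,
        "confidence": confidence,
        "escalated_to_human": needs_human,
        "accountable_reviewer": reviewer,
    }
    logging.info(json.dumps(record))  # use log retained for later audit
    return f"pending human review by {reviewer}" if needs_human else decision

print(ai_assisted_decision({"applicant_id": "A-1001", "amount": 5000}, reviewer="j.smith"))
```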

On an international scale, the International Organization for Standardization (ISO) has developed key standards covering the governance and management of AI[14]. These standards highlight that directors personally have roles and responsibilities in making decisions about AI within a company:

“Members of the governing body are… accountable for the decisions made throughout the organisation, including those that are made through the use of AI and for the adequacy of governance and controls where AI is being deployed. They are thus accountable for the use of AI considered acceptable by the organisation.”

What should directors do to minimise those risks?

On 5 September 2024, the Australian Government Department of Industry, Science and Resources published a paper proposing 10 mandatory “guardrails” for safe and responsible AI use in high-risk settings[15]. Those guardrails are:

  1. Implement accountability processes, including governance, strategy, and internal capability for regulatory compliance in the deployment of high-risk AI systems.
  2. Establish risk management strategies to identify and mitigate AI risks.
  3. Have data governance measures in place that manage data quality and authenticity.
  4. Use systems to test AI models for performance quality and to monitor the AI system once it is deployed.
  5. Ensure human control of, and intervention in, an AI system is available, to achieve meaningful human oversight.
  6. Ensure that end users are informed of AI-enabled decisions, and of when they are interacting with AI-generated content.
  7. Establish processes for people negatively impacted by high-risk AI systems to complain about, or contest, AI-enabled decisions.
  8. Foster transparency and trust with end users, informing them about AI-enabled decisions, interactions, and content generation to build confidence in the responsible use of AI.
  9. Maintain clear records of AI systems and compliance to support accountability and allow for third-party assessment (a minimal illustrative sketch of such a record follows this list).
  10. Conduct conformity assessments to show compliance with the guardrails.
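
By way of illustration of guardrail 9, the sketch below shows one way a company might keep machine-readable records of its deployed AI systems for third-party assessment. The register fields and the example system are assumptions drawn loosely from the guardrails above, not a mandated schema.

```python
# Illustrative sketch: a simple register of deployed AI systems that
# an external assessor could inspect. Fields are assumptions, not a
# prescribed format.
import json
from dataclasses import dataclass, asdict, field

@dataclass
class AISystemRecord:
    name: str
    purpose: str
    risk_level: str                      # e.g. "high", per the guardrails' focus
    data_sources: list[str]              # data governance (guardrail 3)
    human_oversight: str                 # who can intervene and how (guardrail 5)
    last_tested: str                     # testing and monitoring (guardrail 4)
    guardrails_assessed: list[int] = field(default_factory=list)

register = [
    AISystemRecord(
        name="transaction-anomaly-screen",
        purpose="Flag unusual transactions for human review",
        risk_level="high",
        data_sources=["core banking ledger"],
        human_oversight="Fraud team approves every flagged transaction",
        last_tested="2024-09-01",
        guardrails_assessed=[1, 2, 4, 5, 9],
    )
]

# Export the register in a form a third-party assessor can read (guardrail 9).
print(json.dumps([asdict(r) for r in register], indent=2))
```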

These measures are designed to complement existing legal frameworks and to align the Australian regulatory landscape with international developments in AI regulation. Although compliance with these guardrails is by no means a guarantee that directors will have complied with their duties under the Act, it is not a bad place to start in the absence of any guidance from the courts (for now).

Key Takeaways:

  • Transparency is key to responsible AI use for directors. To maximise the prospects of complying with the Act while this area of the law is evolving in response to rapid technological change, directors should ensure that they have a solid understanding of AI and how it is used in their companies.
  • Have key risk management systems in place to identify potential hazards and determine the most effective mitigation strategies.
  • Accurately define the AI tools used and the risks associated with their use. Internal and external communication about AI capabilities and the nature of their use should be consistent.
  • If necessary, engage external AI experts to assist with the implementation of appropriate AI governance strategies.

Queries

If you have any questions about this article, please get in touch with the authors or any member of our Litigation & Dispute Resolution team.

Disclaimer

This information is general in nature. It is intended to express the state of affairs as of the date of publication. It does not constitute legal or financial advice. If you are concerned about any topic covered, we recommend that you seek your own specific legal and financial advice before taking any action.


[1] Solomon, L. and Davis, N. (2024) Report launch: The state of AI governance in Australia, University of Technology Sydney. Available at: https://www.uts.edu.au/human-technology-institute/news/report-launch-state-ai-governance-australia (Accessed: 12 September 2024).

[2] Commonwealth Bank of Australia (2022) CBA introduces leading AI technology to protect more customers from scams. Available at: https://www.commbank.com.au/articles/newsroom/2022/07/scams-fraud-artificial-intelligence.html

[3] Australian Securities and Investments Commission (2024) We’re not there yet: Current regulation around AI may not be sufficient. Available at: https://asic.gov.au/about-asic/news-centre/speeches/we-re-not-there-yet-current-regulation-around-ai-may-not-be-sufficient/

[4] Australian Securities and Investments Commission v Cassimatis (No 8) (‘Cassimatis’) [2016] FCA 1023 [483]

[5] Australian Securities and Investments Commission v Hellicar [2012] HCA 17

[6] [2016] FCA 1023

[7] Solomon and Davis (n 1) p 36.

[8] Bret Walker and Gerald Ng, The Content of Directors’ “Best Interest” Duty (Memorandum of Advice to the Australian Institute of Company Directors, 24 February 2022) p 4

[9] Artificial Intelligence: What directors need to know Artificial Intelligence Accelerate Responsibly. Available at: https://www.pwc.com.au/pdf/artificial-intelligence-what-directors-need-to-know.pdf (Accessed: 24 September 2024).

[10] European Parliament, ‘AI Act: a step closer to the first rules on Artificial Intelligence’ (Press Release, 11 May 2023)

[11] European Parliament legislative resolution of 13 March 2024 on the proposal for a regulation of the European Parliament and of the Council laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union legislative acts (COM(2021)0206 – C9-0146/2021 – 2021/0106(COD))

[12] Australian Securities and Investments Commission v RI Advice Group Pty Ltd [2022] FCA 496

[13] Ibid

[14] ISO/IEC 38507:2022, Governance implications of the use of artificial intelligence by organizations

[15] Department of Industry, Science and Resources (2024) The 10 guardrails, Voluntary AI Safety Standard. Available at: https://www.industry.gov.au/publications/voluntary-ai-safety-standard/10-guardrails