Supporting capabilities

Business functions and controls, such as procurement, cybersecurity, and staff training, need to cover many different areas, and artificial intelligence is now among them.

Procurement

If your organisation is looking to procure an AI system, you will want to plan appropriately for the size and complexity of the project.

As with any procurement, it is important to be clear on business needs and to conduct thorough market research on suppliers and their offerings. Consider (and document as relevant):

  • customer needs and benefit to your customers
  • business needs and the purpose the AI system needs to fulfil
  • the costs, risks and benefits of using an AI system to fulfil that purpose
  • evaluation criteria for successful procurement.

When assessing AI solutions, consider requesting a trial (isolated from other technical systems) to determine whether the system is right for your organisation. Seek to understand and clarify the items outlined in the Artificial Intelligence Procurement checklist.

Remember that laws in other jurisdictions may differ from those in New Zealand – for example, they may require providers and vendors to on-share your data.

The AI Forum New Zealand also has AI Procurement Guides available to support vendor and product assessment. AI model or service cards, and/or third-party responsible AI assessment services can provide information to help determine whether a product is the right fit.

AI Procurement guides(external link) — AI Forum New Zealand

IT and cybersecurity

The integrity and protection of all systems and datasets related to AI system operations is essential for their effective and uninterrupted operation. This includes assessing and navigating security vulnerabilities, jurisdictional risk (where data is stored or processed outside of New Zealand, and subject to another country’s laws), and privacy protection where personal information is involved.

Early discussions with IT security teams help business decision-makers establish their level of preparedness and conduct a security risk assessment to determine whether the AI system is appropriate.

A secure-by-design approach can support physical and digital resilience, protecting against attack and enabling reliability and consistency in system operations.

Joint guidance: Principles for security-by-design and default(external link) — Government Communications Security Bureau

Digital datasets and systems, including AI systems, can be exploited by malicious actors. Attackers could look to harm AI system integrity (to deceive users) or availability (disrupting its use), or to compromise the confidentiality of the training data, intellectual property, or input data.

The security risk of any system depends on several factors, including:

  • the information the system has access to
  • permitted users
  • whether it was developed in-house or procured from a third party
  • external data sharing
  • attacker motivation for disruption or interference.

Any technology can contain vulnerabilities that malicious actors could exploit for various reasons. Without careful consideration and management, vulnerabilities can lead to information leaks or unauthorised disclosure, ‘poisoning’ of the training dataset, injection attacks, and other attacks.

In line with the National Cyber Security Centre’s Cyber Security Framework, businesses should take steps to ensure any security or privacy breaches can be noticed, contained, assessed, and responded to quickly. This mitigates potential harm to individuals or company intellectual property, and supports compliance with Privacy Act obligations where necessary (including notification). It includes forming an incident response plan, monitoring system behaviours and inputs for security risks, and ensuring clear and effective feedback loops for reporting any system vulnerabilities and risks.

NCSC Cyber Security Framework(external link) — Government Communications Security Bureau

New Zealand has resources available for cybersecurity practitioners to understand good practice in AI cybersecurity in particular, including guidance that New Zealand’s National Cyber Security Centre has developed jointly with international partners.

Additionally, an informational video series has been developed to support general online security for businesses.

Unmask Cyber Crime(external link) — Own Your Online

The Government Chief Digital Office also provides guidance to support government agencies’ jurisdictional risk assessment, which can also be useful for private sector consideration. 

Given its expanded attack surface and often opaque processes, GenAI can also bring additional cybersecurity risks (see Skills and knowledge building and Use and outputs for tips to mitigate these at the point of use):

  • GenAI systems can leak secure information if supplied with certain prompts or other data. For example, Large Language Models have been known to suggest real passwords and API keys found in training data.
  • GenAI outputs can be a security risk, for example if used for code generation. Verify that any generated code is sufficiently trustworthy and free of errors with quality control processes.
  • GenAI prompt injection manipulates AI model behaviour with specifically crafted instructions.
  • GenAI models can be more susceptible to data poisoning, given the quantity of data they are trained on, and that this is often public. Extra care is needed to ensure retrieval repositories for Retrieval Augmented Generation are protected.
  • GenAI models require more compute and so can be particularly susceptible to denial of service attacks.
  • There are also related sociotechnical risks to information security (further elaborated on in Use and outputs).
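Some of the mitigations above can be supported by simple automated checks at the point of use. As an illustrative sketch only (the patterns and category names below are assumptions for the example, not an endorsed control set), a pre-processing step might screen prompts and model outputs for likely secrets or injection phrasing before they are sent or acted on:

```python
import re

# Hypothetical example patterns; a real deployment would use a maintained
# secret-scanning ruleset, not this short illustrative list.
SECRET_PATTERNS = [
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),
    re.compile(r"(?i)password\s*[:=]\s*\S+"),
    re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS-style access key ID format
]

INJECTION_PATTERNS = [
    re.compile(r"(?i)ignore (all )?previous instructions"),
    re.compile(r"(?i)disregard your system prompt"),
]

def screen_text(text: str) -> list[str]:
    """Return flags for likely secrets or prompt-injection phrasing."""
    flags = []
    if any(p.search(text) for p in SECRET_PATTERNS):
        flags.append("possible-secret")
    if any(p.search(text) for p in INJECTION_PATTERNS):
        flags.append("possible-prompt-injection")
    return flags

# Flagged text could be blocked, redacted, or routed to human review
# as part of the quality control processes described above.
```

Pattern matching like this is only a first line of defence; it reduces accidental leakage but will not catch a determined attacker, so it should sit alongside the access controls and monitoring discussed earlier.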

Privacy

As well as protecting systems and data (including personal information) from cyber threats, businesses also need to consider responsible and lawful management of data more generally.

Legally, businesses must appoint a privacy officer if they deal with any personal information (including its collection, use, or storage). Details on that role are available from the Office of the Privacy Commissioner.


Information for Privacy officers(external link) — Privacy Commissioner

Artificial Intelligence can exacerbate privacy risks, and a privacy-by-design approach can be valuable to help build in privacy protection to information systems, business processes, products and services from the start. Privacy is often also considered as part of processes around risk management.

As a starting point, it is important that data is classified appropriately, so you know what is personal information and can treat it accordingly.
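One lightweight way to make that classification actionable is to record it in a data dictionary and check records against it before data is supplied to an AI system. A minimal sketch, assuming a simple two-level scheme (the field names and labels here are illustrative only, not a recommended taxonomy):

```python
# Illustrative classification dictionary; real schemes and field names will differ.
FIELD_CLASSIFICATION = {
    "customer_name": "personal",
    "email": "personal",
    "order_total": "non-personal",
    "product_code": "non-personal",
}

def personal_fields(record: dict) -> set[str]:
    """Return the fields in a record classified as personal information.

    Fields missing from the dictionary are treated as personal by default,
    so unclassified data is protected until someone confirms otherwise.
    """
    return {
        field for field in record
        if FIELD_CLASSIFICATION.get(field, "personal") == "personal"
    }

record = {"customer_name": "Aroha", "order_total": 129.50, "loyalty_id": "L-42"}
# loyalty_id is not in the dictionary, so it is flagged as personal by default.
```

Defaulting unclassified fields to personal errs on the side of protection, which aligns with the privacy-by-design approach described above.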

More information on how to support privacy protection as part of building and using AI models is included in AI specific system considerations.

Skills and knowledge building

Staff need the capability to competently perform their roles and uphold responsible AI practices as required. Growing AI literacy within a business is important.

To support this, businesses can:

  • document what competencies different roles or groups of staff require (in relation to their involvement with AI system development or operations and risk management practices and policies). Consider:
    • foundational responsible AI training/education for all staff, covering the fundamentals of AI, using GenAI responsibly, which business uses are allowed, and AI’s limitations
    • tailored training for those developing or governing AI as part of their roles (see Assembling a team), including on how to mitigate AI system bias and evaluate AI outputs
    • training for end users and other operators.
  • put in place regular training for staff. This could cover technical education as required but also upskilling around other areas of risk that have been identified (e.g. ethical obligations, privacy, cybersecurity, intellectual property). It may also cover incentives for complying (or consequences for non-compliance) with defined policies (including AI usage policies and/or standards) or procedures.
  • adapt and improve training programmes based on participant feedback where possible. A clear mechanism for continuous feedback on training, experiences deploying training, or gaps that need to be filled will help with understanding issues as they present themselves and identifying improvements.
  • partner with (AI) specialists to help bridge any skills gaps.
  • join or leverage collaborative initiatives to share experiences, tools, and practices, including for ensuring responsible AI use and/or development. The OECD has catalogued tools and metrics for trustworthy AI (such as technical tools to remove bias, audit AI systems, or measure fairness), which can be browsed, filtered or searched as required.

    Catalogue of tools and metrics for trustworthy AI(external link) — OECD.AI

To embrace the opportunity GenAI offers in a safe and responsible way, businesses can build skills, capability and diversity across teams, including through education (see Other artificial intelligence guidance and resources) on:

  • strengths and weaknesses of GenAI
  • how GenAI tools work, and how to use them most effectively (especially with sensitive information) including through prompt engineering (see Use and outputs)
  • when GenAI can augment aspects of human work and/or decision-making and where AI may be able to perform tasks with minimal human involvement (see Human-in-the-loop decision-making).