Project scoping

Is AI an appropriate tool to use?

Consider the following questions as a guide to whether an AI system is an appropriate solution for a project within government, and whether your agency is ready, and has what is required, to build or source and deploy a successful AI solution that adheres to the principles outlined in the AI Policy:


Will AI benefit NSW citizens?

AI systems can perform decision-making at considerable speed and scale, creating efficiencies and reducing costs.

However, the primary consideration is the value that an AI solution would bring: it needs to deliver a demonstrable and measurable community benefit or insight.

It also needs to be transparent, meaning that:

  • Project objectives are available to citizens and may be regarded as open access information
  • Citizens understand how their data is being used and are afforded an opportunity to provide feedback and ask questions on the AI solution
  • Insights into the data use and methodology are freely available on agency websites. 

Data: Is the relevant and appropriate data available?

An effective AI system needs the right data, in the right quantity and to the right standard. This includes:

  • Ensuring there is a dataset that is available and representative of the population (a simple representativeness check is sketched after this list)
  • Designing a data model that mitigates bias and has a focus on diversity and inclusion so that the AI system does not result in unintended consequences
  • Regularly monitoring the data model and outputs
  • Ensuring that agencies build and maintain transparency in their dataset.
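As an illustration only, the sketch below shows one way a team might compare the make-up of a candidate training dataset against population benchmarks. It is written in Python using pandas; the attribute, column name and benchmark figures are hypothetical placeholders, not values prescribed by this guidance.

```python
# Minimal sketch: compare the demographic make-up of a training dataset
# against population benchmarks. Column names and benchmark figures are
# hypothetical placeholders, not part of the guidance itself.
import pandas as pd

# Hypothetical population benchmarks (proportions) for an attribute of interest.
population_benchmarks = {"metro": 0.65, "regional": 0.28, "remote": 0.07}

def representativeness_report(df: pd.DataFrame, column: str, benchmarks: dict, tolerance: float = 0.05):
    """Flag groups whose share of the dataset differs from the benchmark by more than `tolerance`."""
    observed = df[column].value_counts(normalize=True)
    report = []
    for group, expected in benchmarks.items():
        actual = float(observed.get(group, 0.0))
        flagged = abs(actual - expected) > tolerance
        report.append({"group": group, "expected": expected, "observed": round(actual, 3), "flag": flagged})
    return pd.DataFrame(report)

# Example usage with a toy dataset of 100 records.
training_data = pd.DataFrame({"location": ["metro"] * 80 + ["regional"] * 15 + ["remote"] * 5})
print(representativeness_report(training_data, "location", population_benchmarks))
```

A check like this is only a starting point: under-represented groups may need targeted data collection, re-weighting, or a narrower scope for the AI system, and the check itself should be repeated as the dataset changes over time.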

Context: how much contextual information is needed to assist the decisions your AI system will be involved in making?

Human decision making combines information or data with context. AI systems, however, only understand objectives and concerns that have been explicitly programmed into them, and they do not operate well when a high degree of contextual understanding is required.

An example can be seen in self-driving cars, where the AI needs to be aware of the human context of decisions made while driving, such as road conditions, the state of passengers and pedestrians, and dangerous situations.

AI system (rules; algorithms): You will need to make choices about the rules for how your AI system will learn from the data you are providing in order to produce outputs, make predictions, classify items and so on. What choices will you make, and who will be involved in making them?

How your AI system is built will be considered in more depth during the planning and design stages of your AI solution development.

At the scoping stage it is important to be mindful that choices will need to be made about how your AI system's rules are designed, including:

  • Accuracy: how close an answer is to the correct value
  • Precision: how specific or detailed an answer is
  • Sensitivity: the measure of how many actually positive results are correctly identified as such (eg the percentage of sick people whom an AI system correctly identifies as sick)
  • Specificity: the measure of how many actually negative results are correctly identified by the AI system (these measures are illustrated in the sketch after this list)
  • Methodology: AI systems should also be able to quantify unintended consequences, secondary harms or benefits, or long-term impacts to the community.
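To make these measures concrete, the sketch below (Python, illustrative only) computes accuracy, sensitivity and specificity from the four cells of a simple binary confusion matrix, using made-up counts for a hypothetical screening tool. It also shows precision in its statistical sense (the share of positive predictions that are correct), which is narrower than the general "level of detail" sense used in the list above.

```python
# Illustrative only: accuracy, sensitivity, specificity (and statistical
# precision) computed from the four cells of a binary confusion matrix.
# The counts below are made-up numbers for a hypothetical screening tool.
true_positive = 90    # sick people the system correctly identified as sick
false_negative = 10   # sick people the system missed
true_negative = 850   # healthy people correctly identified as healthy
false_positive = 50   # healthy people wrongly flagged as sick

total = true_positive + false_negative + true_negative + false_positive

accuracy = (true_positive + true_negative) / total               # how often the answer is correct overall
sensitivity = true_positive / (true_positive + false_negative)   # share of actual positives found
specificity = true_negative / (true_negative + false_positive)   # share of actual negatives found
precision = true_positive / (true_positive + false_positive)     # share of positive calls that are correct

print(f"accuracy={accuracy:.2%}, sensitivity={sensitivity:.2%}, "
      f"specificity={specificity:.2%}, precision={precision:.2%}")
```

Note that these measures trade off against each other: a system tuned to catch every positive case (high sensitivity) will usually flag more negatives incorrectly (lower specificity), so the right balance depends on the consequences of each kind of error for the community.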

Does the agency have the expertise and budget to build, and maintain oversight of, the AI system?

To address the range of choices your team will need to make in developing an AI solution, you will need a diverse range of subject-matter and technical specialists, including advisers on privacy, legislation and policy, programmers, and data scientists. Projects may also require input from user experience/interface designers, users of the solution, and people impacted by decisions that the AI system will help to make.

Agencies remain responsible for AI-generated recommendations, including where the system is provided or operated under agreements with external parties. This means that agencies need ongoing capability to:

  • Oversee the operation of the AI system (AI systems can degrade over time)
  • Continually monitor the system’s behaviour, review its performance, and respond to any changes in behaviour (automated monitoring will only detect problems anticipated by its designers; see the monitoring sketch below)
  • Audit, explain, and be accountable for the AI system’s decisions
  • Provide for an independent review and certification process on the AI solution design, operation and impact.

This oversight is vital to detecting and correcting problems with the system’s operation.
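As a minimal sketch of what continual performance monitoring can look like in practice, the illustrative Python below recomputes a headline metric over a rolling window of human-verified decisions and raises an alert when it falls below a threshold. The metric, window size, threshold and alerting channel are assumptions a team would set for its own context, not requirements of this guidance.

```python
# Minimal monitoring sketch: recompute a headline performance metric on a
# rolling window of recent, human-verified decisions and raise an alert when
# it drops below an agreed threshold. The threshold and window size are
# hypothetical values a team would choose for its own context.
from collections import deque

WINDOW_SIZE = 500        # most recent reviewed decisions to evaluate
ALERT_THRESHOLD = 0.90   # minimum acceptable rolling accuracy before escalation

recent_outcomes = deque(maxlen=WINDOW_SIZE)  # True if the AI recommendation was later confirmed correct

def record_outcome(ai_was_correct: bool) -> None:
    """Append the latest human-verified outcome and check the rolling accuracy."""
    recent_outcomes.append(ai_was_correct)
    if len(recent_outcomes) == WINDOW_SIZE:
        rolling_accuracy = sum(recent_outcomes) / WINDOW_SIZE
        if rolling_accuracy < ALERT_THRESHOLD:
            # In a real system this would notify the accountable officer or review board.
            print(f"ALERT: rolling accuracy {rolling_accuracy:.1%} is below {ALERT_THRESHOLD:.0%}")
```

Automated checks of this kind supplement, rather than replace, periodic human review: they will only catch the kinds of degradation the team anticipated when choosing the metric and threshold.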

The design and implementation of AI systems requires both upfront and longer-term investment in staff training, engagement of AI consultants, and IT infrastructure.

Is there a review mechanism?

Agencies need to ensure that, where decision-making has included the use of AI systems:

  • the decision-making is clear and accessible to internal and independent reviewers of the AI system, and to users and the people impacted by those decisions, and
  • recourse to a review of those decisions is available through a transparent review process.

What AI solutions are currently available that may suit your needs?

Before you invest in building an AI solution for your needs, what similar systems deployed elsewhere can you learn from?

If you are thinking about sourcing an “off the shelf” AI solution, what can you find out about how it functions? For example:

  • What data sources were used to train the AI?
  • Are the results from the AI system repeatable? (A simple repeatability check is sketched after this list.)
  • Are you able to check whether the AI system functions as described by the prospective vendor or current users?
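One simple way to test repeatability is to run the same fixed set of inputs through the system more than once and compare the outputs. The sketch below is illustrative Python; `query_system` is a hypothetical stand-in for whatever interface the vendor actually provides.

```python
# Illustrative repeatability check: send the same test cases to the system
# twice and report any inputs whose outputs differ between runs.
# `query_system` is a hypothetical placeholder for the real vendor interface.
from typing import Callable, Iterable

def repeatability_check(query_system: Callable[[str], str], test_inputs: Iterable[str]) -> list[str]:
    """Return the test inputs that produced different outputs across two identical runs."""
    items = list(test_inputs)
    first_run = {item: query_system(item) for item in items}
    second_run = {item: query_system(item) for item in items}
    return [item for item in items if first_run[item] != second_run[item]]

# Example usage with a stand-in function that always gives consistent answers.
inconsistent = repeatability_check(lambda text: text.upper(), ["case one", "case two"])
print("Inconsistent outputs for:", inconsistent)
```

Where a system is deliberately non-deterministic, ask the vendor how much variation is expected and how that variation is bounded and explained to users.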

Project scoping tools and useful resources

In the first stage of the process to develop an AI solution, considering responses to each of the above questions will enable teams to start forming an appropriate scope and overview of any proposed AI project.  Taking the time now to work through these questions will help with later development of the AI system planning and design specifications.

The Benefits Realisation Management Framework and related tools/templates, and the Lean Business Canvas template, are good examples of existing tools and resources that can further assist teams to pull together ideas and develop a clear problem statement, rather than detailing specifics of a solution too early in the process.

Benefits Realisation Management Framework and tools/templates

With guidance from the Benefits Realisation Management Framework, teams will be able to define the vision, objectives and potential benefits of the project, show how these align with key strategic drivers for their organisation and for NSW Government overall, and understand any potential risks and the mitigation strategies that need to be in place.

Initial thinking about data sources, contextual issues, AI systems rules, resourcing and investment can also be addressed.

Benefits Realisation Management Framework

Benefits Management Plan

Benefits pathway (three column analysis)

 

Lean Business Canvas

Using a Lean Business Canvas template (attachment A) is a way to create a quick snapshot or scope of your initial idea, share it with others for feedback, and use it as a basis for planning and designing your project. It prompts people to focus on identifying the problems, possible solutions, key metrics to measure success, and the benefits, outcomes and values to be realised.

On Outcomes: The NSW Human Services Outcomes Framework

When thinking about how to define benefits to the community in your deployment of AI, consider the example of the NSW Human Services Outcomes Framework, a cross-agency framework that sets out seven wellbeing outcomes for the NSW population: safety, home, economic, health, education and skills, social and community, and empowerment. The Framework provides a way to understand and measure the extent to which a project or an agency can make a long-term positive difference to people’s lives, and to build evidence of what works to improve wellbeing.
