Algorithm Charter CoP: how mature are we?

For our first hui of 2024, the Algorithm Charter Community of Practice met in March to share our experiences of the last quarter’s developments and to discuss how we might meaningfully integrate the Charter’s commitments into our organisations.

We met at the Ministry of Business, Innovation & Employment office in Wellington and, after some technical difficulties with video conferencing – the comedy of which was not lost on the technology-adjacent attendees – Elias Wyber from the Ministry for the Environment introduced the maturity model he has been working on since the last hui in December. He populated the draft model with the Charter content, and after a small amount of in-hui tweaking we accepted it as a very useful tool. The model will help Community members determine where their organisations sit in relation to the different aspects of algorithm safety maturity.

We’ve acknowledged in the past that one of the difficulties for our members is knowing who is further along in the process and could be approached for advice. We quickly recognised that another use for this model would be as a mapping device: if all agencies represented in the CoP mapped their maturity levels, members could identify who has the expertise to help lift them from one level of maturity to the next. Before the next hui, members will be asked to assess their agencies’ levels of maturity against this model, so we can begin to collaborate more deliberately.
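To make the mapping idea concrete, here is a minimal sketch of how such a map might be queried once agencies have self-assessed. The dimension names, levels, and agencies below are invented placeholders for illustration, not content from the draft model.

```python
# Hypothetical illustration of the maturity-mapping idea: each agency
# self-assesses a maturity level (here 1-5) against each dimension of
# the model, and members can then look up who is further along on a
# given dimension. Dimensions, levels, and agencies are placeholders.
maturity_map = {
    "Agency A": {"transparency": 3, "peer_review": 2, "privacy": 4},
    "Agency B": {"transparency": 1, "peer_review": 4, "privacy": 2},
    "Agency C": {"transparency": 4, "peer_review": 3, "privacy": 3},
}

def who_can_help(agency: str, dimension: str) -> list[str]:
    """Return agencies at a higher maturity level on the given dimension."""
    own_level = maturity_map[agency][dimension]
    return [
        other
        for other, levels in maturity_map.items()
        if other != agency and levels[dimension] > own_level
    ]

# Agency B looks for peers further along on transparency.
print(who_can_help("Agency B", "transparency"))  # ['Agency A', 'Agency C']
```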

We also enjoyed a presentation from Dr Andrew Chen (Chief Advisor: Technology Assurance, New Zealand Police), who discussed the value of moving away from thinking about artificial intelligence (AI) and algorithms in terms of bias (often framed negatively) and instead in terms of safety and fairness. Andrew has been thinking and writing about ethics and technology for many years, including writing the foreword to, and editing, “Shouting Zeros and Ones” (which has been followed up by “More Zeros and Ones”).

In his presentation, Andrew explained that most people have an internal bias that biases them toward thinking biases are a bad thing! He demonstrated that a more useful way to think of a bias is as a deviation from an equal outcome, and that some biases can be positive if the intended result is an equitable outcome. Andrew also helped to illustrate what AI is really doing ‘under the hood’ that makes it tricky to use safely, and helped us consider ways in which we can build towards fairer and safer systems.
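As a rough numerical sketch of “bias as a deviation from an equal outcome” (the framing is from the talk; the groups, counts, and parity measure below are invented for illustration):

```python
# Hypothetical sketch: bias measured as deviation from an equal outcome.
# Groups and approval counts are invented for illustration only.
outcomes = {
    "group_x": {"approved": 80, "total": 100},  # 80% approval rate
    "group_y": {"approved": 60, "total": 100},  # 60% approval rate
}

rates = {g: v["approved"] / v["total"] for g, v in outcomes.items()}

# Deviation from an equal outcome: the gap between group rates.
# A non-zero gap is a bias; whether it is harmful depends on whether
# the intended result is an equal or an equitable outcome.
gap = max(rates.values()) - min(rates.values())
print(f"approval rates: {rates}, deviation from parity: {gap:.2f}")
```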

As we concluded, we reminded everyone about the Algorithm Impact Assessment (AIA) tools, and all members were invited to find opportunities to use them in their agencies in preparation for the second hui of 2024.

Photo by Immo Wegmann on Unsplash
