The Three Ds: Analyzing the Implications of Data, Documentation, and Decision-Making in Climate-Driven Artificial Intelligence
Artificial Intelligence (AI) stands out as a prevailing buzzword of this era. It is everywhere we live, work, and interact: from the advertisements on our screens to the technology in our homes and the integral services provided by banks, airports, and hospitals. At the basis of AI are algorithms, systematic sets of instructions or rules used to solve problems. In climate work, algorithms can be applied to predicting temperature changes, weather events, future deforestation, and carbon emissions. They can be used to show the effects of extreme weather and the potential benefits of carbon capture and regenerative agriculture, and even to nudge the general public toward climate-friendly habits and behaviors (Coeckelbergh).
Algorithms, however, can lead to biased decision-making. There is ample evidence of bias in algorithms, including a skin-cancer detection algorithm that was only effective on light skin tones because its dataset was not demographically diverse (Calderon), and Amazon's hiring algorithm, which exhibited gender bias when reviewing job applicants, favoring male candidates over female candidates because the training data reflected a male-dominated workforce (Dastin). Algorithmically generated content needs to be critiqued and reviewed through the lens of auditing. Algorithmic auditing is a crucial way to address the challenges associated with the increasing use of algorithms and our growing dependence on them in decision-making. To audit algorithms effectively, we need to question the basis of AI before allowing it to drive our human decisions (whether we consider AI capable of "moral" decision-making is a whole other topic that I won't get into today). When considering environmentally focused algorithmic decision-making, I think it's crucial to contemplate three aspects of algorithm creation: Data, Documentation, and Decision-makers (the 3Ds).
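To make "auditing" a little more concrete, here is a minimal sketch in Python of one common audit check: comparing a model's favorable-outcome rates across demographic groups, in the spirit of the informal "four-fifths rule." This is my own illustration rather than any standard tool, and the group labels and numbers are made up.

```python
# A minimal, hypothetical audit check: compare a model's favorable-outcome
# rates across demographic groups. Group labels, threshold, and data are
# illustrative placeholders, not a real dataset or an official standard.
from collections import defaultdict

def selection_rates(outcomes):
    """outcomes: list of (group, was_selected) pairs."""
    selected = defaultdict(int)
    total = defaultdict(int)
    for group, was_selected in outcomes:
        total[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / total[g] for g in total}

def disparate_impact_flags(outcomes, threshold=0.8):
    """Flag any group whose selection rate falls below `threshold` times
    the highest group's rate (the informal "four-fifths rule")."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: r / top < threshold for g, r in rates.items()}, rates

# Toy example: a hiring-style model that favors one group.
decisions = [("group_a", True)] * 60 + [("group_a", False)] * 40 \
          + [("group_b", True)] * 30 + [("group_b", False)] * 70
flags, rates = disparate_impact_flags(decisions)
print(rates)  # {'group_a': 0.6, 'group_b': 0.3}
print(flags)  # {'group_a': False, 'group_b': True}  <- audit flag raised
```

A check like this is only one slice of an audit, of course, but it shows why auditing has to happen on the algorithm's actual outputs, not just its stated intent.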
Data, We’ve Got a Problem
As a Data Scientist, I interact with new and evolving datasets just about every day. This data comes from a variety of sources and contributors, and in my work I've been taught to take a deeper look, ask questions, and consider the biases that went into it. According to Jennifer Logg, an Assistant Professor of Management at Georgetown University, "…algorithms can efficiently compound bias that is present in the input data. An algorithm will magnify any patterns in the input data, so if bias is present, the algorithm will also magnify that bias" (Rock). Climate justice relies heavily on accurate data representation to inform policies and decision-making processes. When biased data becomes the input to algorithms that shape these decisions, it can result in disproportionate impacts on marginalized communities, and if those biases are not properly addressed during algorithmic analysis, existing disparities are reinforced. For example, an algorithm that relies on biased pollution data might misallocate resources in its decision-making, leaving already vulnerable communities without protection. To achieve environmental justice in AI-driven climate work, we need to acknowledge, scrutinize, and correct biases in the data used by algorithms. By prioritizing fairness in data analysis, we can build algorithms that contribute to equitable environmental decisions, policies, and practices.
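As one example of what scrutinizing input data can look like in practice, here is a small hypothetical check I might run before any training happens: does a pollution dataset cover each community in rough proportion to its population? The community names, counts, and threshold below are all illustrative, not real monitoring or census data.

```python
# Hypothetical pre-training data check: does a pollution dataset cover
# communities in proportion to their populations? All numbers are
# illustrative placeholders.
def coverage_gaps(sample_counts, population_shares, tolerance=0.5):
    """Return communities whose share of the data falls below
    `tolerance` times their share of the population."""
    total = sum(sample_counts.values())
    gaps = {}
    for community, pop_share in population_shares.items():
        data_share = sample_counts.get(community, 0) / total
        if data_share < tolerance * pop_share:
            gaps[community] = (data_share, pop_share)
    return gaps

readings_per_community = {"riverside": 900, "hillcrest": 950, "eastside": 150}
population_shares = {"riverside": 0.35, "hillcrest": 0.35, "eastside": 0.30}

for community, (data, pop) in coverage_gaps(readings_per_community,
                                            population_shares).items():
    print(f"{community}: {data:.0%} of readings vs {pop:.0%} of population")
# eastside: 8% of readings vs 30% of population  -> underrepresented
```

A gap like "eastside" in this toy example is exactly how an algorithm trained on biased pollution data ends up overlooking the communities that need protection most.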
Document! Document! Document!
Similar to the creation of datasheets for datasets, algorithm-driven decisions need documentation detailing the inner workings of the algorithm, including its design, logic, and decision-making processes. This transparency allows auditors and stakeholders to understand how the algorithm operates, which is essential for assessing its fairness, accuracy, and potential biases. I believe documentation in the form of some sort of watermarking system could give users transparency into, and future trust in, the algorithms they are interacting with. Thorough documentation, in general, contributes to better transparency, reproducibility, and accountability of algorithmic systems, making the auditing process more effective and reliable. With watermarking systems, we can ensure that auditors have "checked off" the necessary information for assessing an algorithm's performance and, where an algorithm cannot be watermarked for approval, can identify potential issues and make informed recommendations for improvement.
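To sketch what I mean, here is a toy version of such a documentation record in Python, loosely inspired by datasheets for datasets. The fields and checklist items are my own invention, not an established schema, and the "watermark" is only granted once every audit item is checked off.

```python
# A minimal sketch of the documentation-plus-watermark idea: a structured
# record in the spirit of "datasheets for datasets", with a checklist an
# auditor must complete before the record earns its stamp of approval.
# Field names and checklist items are hypothetical illustrations.
from dataclasses import dataclass, field

@dataclass
class AlgorithmDatasheet:
    name: str
    purpose: str
    training_data_sources: list
    known_limitations: list
    audit_checklist: dict = field(default_factory=lambda: {
        "data_provenance_reviewed": False,
        "bias_tests_run": False,
        "affected_communities_consulted": False,
    })

    def watermark(self):
        """Grant the watermark only if every audit item is checked."""
        if all(self.audit_checklist.values()):
            return f"APPROVED: {self.name}"
        missing = [k for k, v in self.audit_checklist.items() if not v]
        raise ValueError(f"Cannot watermark {self.name}; missing: {missing}")

sheet = AlgorithmDatasheet(
    name="flood-risk-v1",
    purpose="Prioritize flood-mitigation spending",
    training_data_sources=["county sensor network", "historical claims"],
    known_limitations=["sparse sensors in low-income areas"],
)
sheet.audit_checklist["data_provenance_reviewed"] = True
# sheet.watermark() would raise here until every checklist item is True.
```

The point of the structure is that approval is impossible without the documentation: an algorithm with an incomplete datasheet simply cannot be watermarked.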
Who Decides?
Effective auditing (as well as creation) of algorithms relies on informed decision-makers who understand the nuances of ethical data and decision-making. As Coeckelbergh states in AI for Climate, "…those who develop and use AI have a special (in the sense of 'specific') responsibility. To make sure that AI leads to a greener and more climate friendly world is definitely also the responsibility of computer scientists, engineers, designers, managers, investors, and others involved in, managing, and promoting, AI and data science practices" (Coeckelbergh). We also need to bring to the table decision-makers who will make fair decisions and keep the general public in mind. Consider Google's creation of its Advanced Technology External Advisory Council, which aimed to advise on the company's use of AI: "…they were not transparent about their roles, responsibilities, and authority. Rather than engage affected communities, Google appointed a Council member who opposed LGBT rights. Google's approach to oversight fostered distrust and protests, and the Council was dissolved" (Calderon). In my opinion, human intervention will always be needed to create checks and balances for any form of AI that drives our decisions, behaviors, and analyses. In the realm of climate justice, where algorithmic systems can impact policy formation, knowledgeable, fair, and adequately represented decision-makers are crucial to the formation and usage of AI in climate work.
AI, Don’t Worry We Still Love You
To conclude, to promote ethical data usage and responsible AI application, we must safeguard the future use of AI in environmental justice by taking a fine-tooth comb to the 3Ds (Data, Documentation, and Decision-makers). As Jennifer Logg so eloquently stated, "Trashing the mirror does not heal the bruise, but it could prolong the time it takes to fix the problem and detect future ones" (Rock). In an era where algorithms increasingly influence critical aspects of our lives, of indigenous communities, of nature, and of our planet, we need to ensure these rapidly emerging systems undergo rigorous scrutiny and auditing. If we apply the 3Ds to algorithm creation and usage, and even consider a "stamp of approval," we can have some level of a "digital signature" attesting to the legitimacy and ethical compliance of the underlying processes. This stamp of approval could then be built into policies and compliance requirements for the future creation and usage of AI, not just in climate applications but in any field.
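As a rough illustration of what that digital signature could look like, here is a minimal sketch in which a hypothetical auditor signs the exact contents of an approved audit record, so that any later tampering is detectable. I use an HMAC with a shared secret to keep the example short; a real scheme would likely use public-key signatures issued by an accredited auditing body.

```python
# A minimal sketch of the "stamp of approval" as a digital signature:
# an auditor signs the bytes of an approved audit record so anyone can
# later verify it was not altered. The key is a placeholder; a real
# deployment would use proper key management and public-key signatures.
import hashlib
import hmac
import json

AUDITOR_KEY = b"hypothetical-auditor-secret"  # illustrative only

def sign_record(record: dict) -> str:
    payload = json.dumps(record, sort_keys=True).encode()
    return hmac.new(AUDITOR_KEY, payload, hashlib.sha256).hexdigest()

def verify_record(record: dict, signature: str) -> bool:
    return hmac.compare_digest(sign_record(record), signature)

record = {"algorithm": "flood-risk-v1", "audit_passed": True}
stamp = sign_record(record)
print(verify_record(record, stamp))  # True
record["audit_passed"] = False       # tampering after approval...
print(verify_record(record, stamp))  # False
```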
Sources:
Calderon, A., Taber, D., Qu, H., Wen, J., et al. (2019). AI Blindspot Cards (Version 1.1). Retrieved from www.aiblindspot.com
Coeckelbergh, M. (2020). AI for climate: freedom, justice, and other ethical and political challenges. AI and Ethics, pp. 1-6.
Dastin, J. (2018, October 10). Amazon scraps secret AI recruiting tool that showed bias against women. Reuters. Retrieved from https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G
Rock, D., Whittlestone, J., & Garrett, N. (2019, August 7). Using Algorithms to Understand the Biases in Your Organization. Harvard Business Review. Retrieved from https://hbr.org/2019/08/using-algorithms-to-understand-the-biases-in-your-organization
Smith, J. (2022, March 15). How AI Can Help Tackle Climate Change. Techopedia. Retrieved from https://www.techopedia.com/how-ai-can-help-tackle-climate-change/2/33622