AI’s future is happening now. Will philanthropy embrace it?


Sevda Kilicalp


AI has recently achieved advancements that feel straight out of science fiction. A prime example is the ‘Holodeck,’ developed by the University of Pennsylvania and the Allen Institute for Artificial Intelligence. Inspired by Star Trek, the Holodeck uses large language models (LLMs) to create immersive 3D environments where AI systems learn by navigating complex, real-world scenarios, significantly improving their training and enhancing their effectiveness in applications like robotics.

In healthcare, AI’s predictive capabilities seem equally futuristic, with systems now able to forecast Alzheimer’s disease up to seven years in advance and diagnose autism in children under the age of two with nearly 80% accuracy. These breakthroughs offer new hope for early intervention and prevention.

In agriculture, AI-powered image recognition tools are saving Tanzanian farmers millions by identifying pest-infested crops.

AI’s potential is also evident in tackling climate change. In California, AI systems analyse environmental factors such as vegetation dryness and wind conditions within seconds, predicting wildfires and enabling firefighters to act swiftly before fires spread uncontrollably.

Does this future excite you, offering the promise of powerful solutions to some of the world’s greatest challenges? Or do these rapid advancements feel distant, almost otherworldly, making you hesitant to engage? Either way, AI is becoming an integral part of our world—transforming it in ways we are only beginning to understand.

What’s in it for philanthropy?

At the Philanthropy for Better Cities Forum 2024, we discussed the transformative impact of AI, with some of the panellists emphasising its potential while others focused on the associated risks and ethical challenges. Despite differing views, we all agreed that this is a pivotal moment for the sector, comparing its impact to that of the industrial revolution. No major societal actor can afford to ignore AI if they wish to remain relevant. As AI becomes more accessible, philanthropy must take the lead in fostering responsible experimentation and shaping policies to ensure AI benefits society while addressing its risks.

Having recognised the awe-inspiring capabilities of AI, the next step for philanthropies is to leverage these technologies responsibly. The rapid advancement of AI presents incredible opportunities that go well beyond mere productivity tools, yet it also introduces complex issues. So, how can philanthropies effectively position themselves to benefit from AI while managing these issues?

Preparing for AI adoption

To integrate AI effectively into their operations, philanthropies must first educate their staff and board about the opportunities and risks associated with AI. Open discussions on responsible AI practices can assist organisations in defining clear guidelines for ethical use, thereby preventing misuse or inadvertent harm.

Beyond internal education, organisations should carefully assess AI vendors to ensure that ethical standards and transparency are embedded in the technology they adopt. AI systems must be consistently monitored to ensure they adhere to these ethical frameworks, with an emphasis on detecting biases and maintaining data privacy. Continuous refinement is essential: AI algorithms are not fixed, and many operate as ‘black boxes’ whose decision-making processes are opaque, so problems only surface through ongoing scrutiny. Regularly refining these algorithms, adjusting them in response to new data, feedback, and evolving use cases, keeps them effective and aligned with ethical standards.

Supporting partners in the AI ecosystem

Philanthropies also have a key role in empowering their partners to adopt AI responsibly. They can assist in developing AI policies within these organisations by providing templates or frameworks that consider ethical and privacy concerns—ensuring this process is collaborative rather than top-down. Facilitating workshops, creating sandbox environments for experimentation, and fostering partnerships with academic institutions can equip smaller organisations with the tools they need to use AI effectively.

Given that discussions around AI are often dominated by tech giants, philanthropies should actively promote forums that amplify voices from underrepresented regions and communities to ensure that the development and deployment of AI are more equitable and inclusive. Additionally, funding research that explores the societal implications of AI and advocates for responsible use can contribute to fostering technologies that are both impactful and just.

The role of AI in monitoring, evaluation, and learning (MEL)

In our impact discussions, we not only consider creating greater impact through the inclusion of AI in our interventions but also explore how AI can help us understand our impact better. This presents an interesting opportunity, particularly for foundations that collect vast amounts of data from their grantees but often lack the time and resources to analyse it effectively. For years, we have been discussing ways to streamline this process, making it less burdensome while enabling us to learn more from existing data.

AI, particularly Natural Language Processing (NLP) and Generative AI (GenAI), has the potential to revolutionise how organisations measure and assess impact. These tools can automate data processing on a large scale, reviewing and synthesising reports, evaluations, and other qualitative data far more quickly than human analysts ever could.

For instance, NLP tools can categorise and extract themes from extensive grantee reports, allowing organisations to quickly identify trends and insights. AI can also streamline grant reviews by automatically assessing how well applications align with funding priorities. These time-saving processes enable organisations to focus on higher-level strategic decision-making.
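To make the idea of theme extraction concrete, here is a minimal sketch of how a foundation might tag grantee reports with portfolio themes. It is an illustration only: the theme names and keywords are hypothetical, and a real NLP pipeline would use topic modelling or an LLM rather than simple keyword matching.

```python
from collections import Counter

# Hypothetical theme keywords a foundation might define for its portfolio.
# A production system would learn themes from the data itself; a keyword
# lookup is enough to illustrate tagging reports with themes.
THEMES = {
    "health": {"clinic", "patients", "vaccination", "diagnosis"},
    "education": {"school", "teachers", "curriculum", "literacy"},
    "climate": {"emissions", "wildfire", "drought", "renewable"},
}

def tag_report(text: str) -> list[str]:
    """Return the themes whose keywords appear in the report, most frequent first."""
    words = [w.strip(".,;:!?()").lower() for w in text.split()]
    counts = Counter(words)
    scores = {
        theme: sum(counts[kw] for kw in keywords)
        for theme, keywords in THEMES.items()
    }
    return [t for t, s in sorted(scores.items(), key=lambda x: -x[1]) if s > 0]

report = ("The grantee expanded its rural clinic network, trained teachers "
          "in health literacy, and reached 4,000 patients this year.")
print(tag_report(report))
```

Run over hundreds of reports, even this crude approach surfaces which themes dominate a portfolio; the point of real NLP tooling is to do the same without a hand-maintained keyword list.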

However, AI-driven MEL activities come with their own set of risks. There are concerns that by automating these analyses, individuals may become disconnected from the data, losing valuable insights that only human interpretation can provide. Additionally, AI systems often lack context awareness, which can lead to misinterpretations or oversights that might be crucial for understanding the nuances of the data. Human oversight will remain indispensable in mitigating errors or inaccuracies generated by AI systems and ensuring that the analysis retains the necessary contextual understanding.

AI’s double-edged sword

No discussion of AI would be complete without addressing the risks involved. AI algorithms can perpetuate and amplify existing biases, especially if trained on biased data. These systems can also expose sensitive information, leading to privacy violations. Moreover, unequal access to AI technologies risks exacerbating the digital divide, leaving marginalised communities further behind.

The environmental impact of AI is another concern. Large-scale AI models require substantial energy resources, potentially conflicting with sustainability goals. These challenges make it clear that AI is not just a technological issue but also a societal one that requires careful governance.

A balancing act, that’s what we need

AI, with all its remarkable potential, generates a sense of wonder, as though we are living in the pages of a science fiction novel. From AI-enhanced healthcare solutions that predict diseases before symptoms arise to systems that optimise energy consumption and reduce waste, the technology is undeniably transformative. But alongside the excitement comes a responsibility—especially for philanthropies—to guide AI development in ways that are ethical, inclusive, and sustainable.

Philanthropies have a unique opportunity to influence how AI is adopted across various sectors. By educating staff, supporting partners, and advocating for responsible AI practices, they can ensure that this powerful technology benefits all of society, not just a privileged few. Balancing the awe of AI’s potential with a deep sense of responsibility will be critical in shaping a future where technology serves humanity, rather than the other way around.

Sevda Kilicalp, Head of Research and Learning, Philea
