The Genie is out of the bottle, how do we live with her?

The recently released Digital Progress and Trends Report 2023 from the World Bank[1] is a fascinating window into what the bank calls the “new era of development” underscored by digitalization and the remarkable advances in Artificial Intelligence (AI). The authors posit that embracing digitalization is now a necessity for humanity since it “holds the foundation and potential to shape a more inclusive, resilient, and sustainable world for generations to come.”[2]

The accompanying image is AI generated; it took less than a minute to produce after I typed in a few parameters. Digitalization has already reshaped our world. AI development is achieving a level of sophistication “previously unimaginable,” with large language models able to interpret natural language prompts correctly and generate completely original text, audio, image, and video content that is indistinguishable from human-made content.[3] New generative AI start-ups are entering the market at a “swift pace with content generation and generative AI gaining the most traction from investors.”[4] Equity funding in the first half of 2023 was five times that of the full year 2022. Essentially, the genie is out of the bottle and travelling at warp speed. Humanity’s task now is to figure out how to live with her.

Both a blessing and a curse

Unleashed, she is purportedly both a blessing and a curse. AI has the potential to accelerate productivity growth and deliver benefits for the global economy and society, but like other technologies, AI will affect people, firms, and countries differently. For example, AI has the potential to respond to a range of development challenges. In developing countries, deficits in farm productivity and food processing standards often undermine food security objectives and limit access to high-quality agri-food chains. Data- and AI-powered machines (e.g. self-driving tractors, robot swarms for crop inspection, and autonomous sprayers) underpin smart agriculture concepts that guide action towards safeguarding or increasing agricultural productivity and food security under challenging conditions, and towards meeting rising expectations of transparency in the agri-food chain.[5] Additionally, readily deployable AI solutions can increase the provision of modern healthcare in countries where medical skills and infrastructure are scarce and access in remote areas is non-existent. There are similarly transformative applications in education, energy, financial inclusion, climate resilience, and public sector management.

The problem is that the progress and distributional impact of digitalization have been, and continue to be, materially uneven between developed and developing countries. AI could widen that gap, since digital technologies tend to give rise to natural monopolies, creating a small set of “superstar firms”, headquartered in a few “superstar countries”, that reap all the rents associated with the development of AI.[6] In fact, a key takeaway from the World Bank report is that while AI can help developing countries tackle a range of development challenges, it could also drive a deterioration in the terms of trade, devalue the comparative advantage of developing countries, which often lies in abundant cheap labour and natural resources, and potentially reverse any convergence in the standards of living between rich and poor countries.[7]

Harnessing her power

At this point no one knows the actual effect AI will have on the world, but everyone agrees the world cannot afford to leave AI unchecked. Given AI’s potential to evolve into a superintelligence capable of replacing humanity, the world has to figure out how best to ensure the centrality of humans in a future subject to algorithmic conditioning. One could argue that the centrality of some humans is perhaps already prioritized: those behind the superstar firms in superstar countries, who own the technology, drive innovation, and therefore co-opt the right to determine the effect, subject to the overriding requirement to maximize shareholder value. So, it is for those humans who fall outside the “superstar” realm to advocate for more equitable outcomes.

This means the global community must work together to shape the impact and direction of AI innovations, coordinate the pace and scale of their applications, monitor and assess impacts, and devise and deliver guardrails to ameliorate adverse effects and protect the vulnerable. This requires a globally accepted normative framework that promotes a sense of shared responsibility and provides ethical guidance for how AI is allowed to develop; and happily, there is one.

On 25 November 2021, all UNESCO member states (193 countries) voted to adopt the first-ever global agreement on the Ethics of AI. The 28-page text[8] defines the common values and principles needed to ensure the “healthy” development of AI and pays specific attention to the broader ethical implications of AI for the central themes of UNESCO, i.e. education, science, culture, and ICTs. The text highlights the advantages of AI but sets out to reduce the risks implied in its development and to ensure that digital transformations promote human rights and the achievement of the SDGs. The text speaks primarily to ten key principles, including fairness and non-discrimination, proportionality and do no harm, sustainability, safety and security, transparency and accountability, and awareness and literacy, and is supported by substantive policy proposals on how to achieve them. The framework is intended to guide signatories in the formulation of their domestic legislation, policies, and other instruments regarding the development of AI consistent with international law. Notably, the United States only recently returned to UNESCO in 2023, after leaving altogether in 2018.[9] This means all the superstar states, especially the US[10] and China, are subject to the agreement on the Ethics of AI.

On the domestic front, the World Bank reports that countries are currently adopting divergent approaches and priorities in AI governance.[11] The approaches are characterized using the concepts of hard law (specific and binding regulation and rules that establish concrete obligations and consequences for AI development), soft law (non-binding guidelines, principles, and recommendations that lack legal enforceability), and self-regulation, where industry stakeholders voluntarily set their own rules and standards for AI development and deployment. Ethics and accountability in AI development are approached in three different ways, according to the World Bank: risk-based, technology-specific regulatory, and responsible-use. The risk-based approach involves categorizing AI applications based on their potential risks and impacts on individuals and society, such as that applied by the EU in its AI Act. The technology-specific regulatory approach involves tailoring regulations to address the unique characteristics and challenges of specific AI technologies, rather than applying general laws. The responsible-use approach involves interpreting existing laws while complementing them with voluntary agreements and public-private partnerships to ensure a responsible and value-aligned implementation of AI (the UK approach).[12]

The US has adopted what the Bank calls a more diverse and flexible approach, using a combination of soft law, self-regulation, responsible use, and legislation at various levels within different domains. Federal initiatives include the AI Bill of Rights and the AI Risk Management Framework from the National Institute of Standards and Technology, and relevant federal agencies are formulating roadmaps and best practices for AI within their domains, addressing potential discrimination and other issues arising from AI systems. States are proposing numerous laws related to AI use and protections, including a California act that would allow citizens to opt out of AI systems, and New York City’s requirements for transparency in the use of AI during hiring processes and annual bias assessments. The UK recently (February 2024) unveiled its response to AI, adopting a principles-based, cross-sector framework underpinned by core principles[13], but not yet codifying anything into law. Regulators will implement the framework in their domains by applying existing laws and issuing supplementary regulatory guidance. Voluntary safety and transparency measures for developers of highly capable AI models and systems will also supplement the framework and the activities of regulators.[14] The aim is to balance innovation and safety by applying the existing technology-neutral regulatory framework to AI.[15] Notably, this approach contrasts with that of the EU, and perhaps even the US, which have embraced more prescriptive legislative measures. China’s framework combines hard and soft law with technology-specific AI regulations, encompassing draft rules for generative AI, guidelines for AI in various applications, and specific provisions regarding automated decision-making.[16]

Given their dominance, how the superstar countries choose to govern AI is instructive for developing countries, but not prescriptive. Developing countries will have to do their own work to develop regulatory structures consistent with international law but attentive to their development priorities. The World Bank report discusses what some developing countries are doing to accelerate safe and inclusive AI adoption. The key takeaway for developing countries? It is urgent that they invest in digital infrastructure and prepare the workforce for the disruptions that AI may bring.[17]

Benefiting from her powers in the Caribbean

As governments across the world consider and adapt to the new possibilities offered by AI, I wondered whether the Caribbean had a response. I stumbled across the UNDP’s Digital 4 Development LAC Hub and its 4 March 2024 blog,[18] titled “The AI Revolution is here: How Will Latin America and the Caribbean Respond?” The UNDP notes that the region is already lagging behind global adoption rates. The lower uptake reflects limited public investment in science and technology, insufficient skill levels necessary to embrace AI, and a highly informal economy dominated by small enterprises. Yet, despite low economic growth, the LAC is a highly entrepreneurial region, standing out among other developing regions in the number of tech startups valued at more than US$1B (unicorns). UNDP warns that if the LAC is to realize gains from the AI revolution in a way that is inclusive, ethical, and sustainable, it needs deliberate and urgent action toward a regional commitment to build a robust AI ecosystem. Regional cooperation, it advises, will ensure a system of support and an active sharing of infrastructure, experiences, and best practices, and bring a unified voice from the region to the global debate.

Working toward a regional commitment is good advice. As the Caribbean knows well, size constraints are effectively offset by cooperation. The fact is, no single small island Caribbean developing state can have as much impact and bargaining power in the global AI debate as all small island Caribbean developing states together, and the Caribbean cannot afford to be on the periphery of this debate. I can imagine a region where a common legal framework ensures that AI systems are safe, respect fundamental human rights, and reflect the region’s position at the sharp end of conservation, sustainability, and climate change. Where private sector actors can look to each other to leverage AI in the agri-food supply chain to produce better outcomes for food security, to feed the tourists coming to the region, and to integrate into globalized food value chains. Where AI is used to promote sound public sector governance and build resilient institutions at national and regional levels, and where a well-trained, resilient workforce is prepared to face the disruptions and leverage the opportunities.

At a country level we (yes, I am invested) must act urgently to understand, in granular fashion, how digitalization and AI can enhance comparative advantages and advance development priorities, and what guardrails and protections are needed. Countries will have to decide on the mix of AI governance approaches best suited to achieving optimal outcomes and make way for the values and principles agreed upon at a global level to ensure respect for the centrality of humans. At the same time, all countries must engage in deliberate and urgent action toward an integrated, robust regional AI ecosystem built around common advantages, priorities, and desired outcomes.

I am optimistic that the region understands what it needs to do, and I know that it has the support it needs to do it. The most recent and compelling expression of support came from the IDB just about a week ago. On 8 March 2024, the Caribbean Governors of the IDB endorsed the “One Caribbean” program, a comprehensive framework designed to support enhanced living standards across the region by focusing on common challenges such as climate adaptation, disaster risk management and resilience, citizen security, private sector engagement, and food security, as well as institutional strengthening and, here it is . . . digital transformation.[19]

Yet, the region is not known for acting with a sense of urgency and, although it recognizes the value of working together, it lags behind other well-integrated regions like the ECCU and the EU.[20] So, can it urgently deliver the sort of robust regional AI ecosystem that both protects and benefits its people? I think collectively we have the skill, but I am not sure we have the will.

I’d like to hear what you think; get in touch.

[1] https://openknowledge.worldbank.org/server/api/core/bitstreams/95fe55e9-f110-4ba8-933f-e65572e05395/content, downloaded 3-2-24.

[2] Foreword pg. xi.

[3] Ibid., pg. xx.

[4] Ibid.

[5] The EU’s study on artificial intelligence in the agri-food sector (March 2023) is an interesting read if you want to learn more; see https://www.europarl.europa.eu/RegData/etudes/STUD/2023/734711/EPRS_STU(2023)734711_EN.pdf

[6] Ibid.

[7] See key messages, pg. 85.

[8] https://unesdoc.unesco.org/ark:/48223/pf0000377897.

[9] The US pulled out of UNESCO after the organization decided to admit Palestine in 2011. The Obama administration was forced to stop funding the agency because US law forbade funding to UN bodies that admitted Palestine as a full member. President Trump later decided to leave altogether, accusing the organization of anti-Israel bias and leaving space for China to exert its influence. See https://news.un.org/en/story/2023/06/1137577.

[10] Secretary of State Antony Blinken, in acknowledging the US’s interest in rejoining, cited the first global standard on the ethics of AI as one of the more urgent reasons. See https://news.un.org/en/story/2023/06/1137577.

[11] https://openknowledge.worldbank.org/server/api/core/bitstreams/95fe55e9-f110-4ba8-933f-e65572e05395/content, downloaded 3-2-24, pg. 85

[12] Ibid., pgs. 97-98.

[13] i.e. safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress.

[14] Readers of my blogs will note the similar approach to electronic documents and paperless trade.

[15] https://www2.deloitte.com/uk/en/blog/emea-centre-for-regulatory-strategy/2024/the-uks-framework-for-ai-regulation.html

[16] Ibid., pg. 97.

[17] Ibid., pg. 85.

[18] https://www.undp.org/latin-america/digitalhub4/blog/ai-revolution-here-how-will-latin-america-and-caribbean-respond

[19] https://www.iadb.org/en/news/idb-group-caribbean-governors-endorse-regional-program-one-caribbean

[20] https://www.imf.org/en/News/Articles/2020/02/04/NA020420-Strengthening-Caribbean-Regional-Integration, 2020.
