David Scott Krueger
Assistant professor in robust, reasoning and responsible AI
University of Montreal
When discussing who’s accountable for AI, hardly anyone mentions the investors financing the technology. Yet AI systems are shaped by capital as much as by engineers and regulators. Investors determine which applications are prioritised, which systems scale, which innovations reach the market, and which are shut out.
Too few market-moving investors have taken a stance on AI governance. One consequence is that Big Tech has become synonymous with military tech, and remains largely unregulated. Companies now readily supply AI solutions for defence. This shift has occurred so quickly that conventional investor due diligence is struggling to keep pace. In practice, investing in parts of the technology sector is becoming indistinguishable from investing in war.
The industry’s accelerated timelines leave little room to integrate international law or the safety considerations that high-risk technology requires. But market incentives, like war itself, are human-made. Even light-touch safeguards can help prevent dangerous systems from harming people and the planet: human-rights due diligence, transparency about military applications, and clear red lines around high-risk defence-related AI investments.