Investors in AI-first technology companies serving the defense industry, such as Palantir, Primer and Anduril, are doing well. Anduril, for one, reached a valuation of over $4 billion in less than four years. Many other companies that build general-purpose, AI-first technologies — such as image labeling — receive large (undisclosed) portions of their revenue from the defense industry.
Investors in AI-first technology companies that aren’t even intended to serve the defense industry often find that these firms eventually (and sometimes inadvertently) help other powerful institutions, such as police forces, municipal agencies and media companies, carry out their duties.
Most do a lot of good work, such as DataRobot helping agencies understand the spread of COVID, HASH running simulations of vaccine distribution or Lilt making school communications available to immigrant parents in a U.S. school district.
However, there are also some less positive examples — technology made by Israeli cyber-intelligence firm NSO was used to hack 37 smartphones belonging to journalists, human-rights activists, business executives and the fiancée of murdered Saudi journalist Jamal Khashoggi, according to a report by The Washington Post and 16 media partners. The report claims the phones appeared on a list of more than 50,000 numbers concentrated in countries known to surveil their citizens and to have hired the Israeli firm’s services.
Investors in these companies may now be asked challenging questions by other founders, limited partners and governments about whether the technology is too powerful, enables too much or is applied too broadly. These are questions of degree, but they are sometimes not asked at all before an investment is made.
I’ve had the privilege of talking to a lot of people with lots of perspectives — CEOs of big companies, founders of (currently!) small companies and politicians — since publishing “The AI-First Company” and investing in such firms for the better part of a decade. I’ve been getting one important question over and over again: How do investors ensure that the startups in which they invest apply AI responsibly?
Let’s be frank: It’s easy for startup investors to hand-wave away such an important question by saying something like, “It’s so hard to tell when we invest.” Startups are nascent forms of something to come. However, AI-first startups are working with something powerful from day one: tools that allow leverage far beyond our physical, intellectual and temporal reach.
AI not only gives people the ability to put their hands around heavier objects (robots) or get their heads around more data (analytics), it also gives them the ability to bend their minds around time (predictions). When people can make predictions and learn as they play out, they can learn fast. When people can learn fast, they can act fast.
Like any tool, these can be used for good or for bad. You can use a rock to build a house or you can throw it at someone. You can use gunpowder for beautiful fireworks or for firing bullets.
Substantially similar AI-based computer vision models can be used to figure out the moves of a dance group or a terrorist group. AI-powered drones can aim a camera at us while going off ski jumps, but they can also aim a gun at us.
This article covers the basics, metrics and politics of responsibly investing in AI-first companies.
The basics

Investors in and board members of AI-first companies must take at least partial responsibility for the decisions of the companies in which they invest.
Investors influence founders, whether they intend to or not. Founders constantly ask investors about what products to build, which customers to approach and which deals to execute. They do this to learn and improve their chances of winning. They also do this, in part, to keep investors engaged and informed because they may be a valuable source of capital.