December 20th, 2008
“I’d pretty much pissed away most of my Whuffie — all the savings from the symphonies and the first three theses — drinking myself stupid at the Gazoo, hogging library terminals, pestering profs, until I’d expended all the respect anyone had ever afforded me.” — Cory Doctorow, Down and Out in the Magic Kingdom, 2003
“There are perennial discussions of trust metrics for things like automatic sysopping and a general “reputation management” system. It is rightly pointed out (by me and many others!) that such systems are difficult to design properly and often easy to “game”. At the same time, the hope is that a well-designed system would be scalable and informative, while not oppressive or empowering of tyrants.” — Jimmy Wales, 2004
In the not too distant future, trust and transparency will become incredibly important issues for the Web. In a world where increasingly powerful virtual online agents begin to act as proxies for decisions that we humans currently make ourselves, there will be myriad opportunities for disreputable firms to compromise these agents, influencing the decisions they take against the will of the person on whose behalf they are acting.
You might be surprised by how many purchasing decisions robots already make. Black-box trading systems on Wall Street and across the financial markets accounted for over a third of all stock trades in 2006, and will push 50% of volume in 2010 according to the consulting firm Aite Group. Electronic Data Interchange (EDI) systems and other standards-based protocols in manufacturing, logistics, and procurement frequently execute purchasing decisions with little or no human intervention. You can even train Amazon.com to automatically send you items on a schedule you teach it.
What all three of these systems have in common is automated purchasing with no human in the middle of the decision-making process. While Amazon fulfillment might be a simple algorithm, the underlying models for some algorithmic trading systems are as complex as any logic you might apply yourself in deciding which stocks to buy and when.
In the next few years, we will see web-based services emerge that want to offer that level of “decision-making by proxy” for many of the tasks that you perform manually online today. Things like booking airline flights, making dinner reservations, scheduling appointments and buying goods and services are being targeted for automation by this new class of intelligent agent technology (see the Do Button).
The challenge that emerges is figuring out a “trust model” that grants these agents broad latitude to execute on your desires while maintaining enough transparency to demonstrate that they are acting in your best interest, rather than being unduly influenced by third parties (perhaps through favored relationships with the agent provider).
I’m reminded of the recent dust-up around Facebook data. If you think it is wrong for a service like Facebook to turn on you and refuse to release your data, wait until your trusted execution service suddenly changes how it behaves because of a new partnership the provider has put in place.
Obviously, this kind of conflict could damage the growth potential of intelligent agent technologies. I’m worried that investors in these firms will look at all the near-term monetization strategies such a powerful tool makes possible, and make value judgments about the level of objectivity such a system might require without considering the broader issues of trust and transparency.
I suspect that this problem will not be easily solved. Modeling trust for an intelligent agent will require an understanding of a great many variables:
- Beliefs and biases of the user
- Beliefs and biases of the user’s trusted network of social relationships
- Transparent knowledge of the biases of the intelligent agent provider
- A model for trading degraded transparency for reward
- A model for adjusting all of the above over time and circumstance
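To make the shape of the problem concrete, here is a minimal sketch of how those variables might combine into a single trust score for an agent-proposed action. Everything here — the factor names, the weights, the threshold — is an illustrative assumption of mine, not a description of any real system:

```python
# Hypothetical sketch: blending the trust variables above into one score.
# All factor names, weights, and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class TrustContext:
    user_bias_alignment: float    # 0..1: fit with the user's own beliefs and biases
    network_endorsement: float    # 0..1: support from the user's trusted social network
    provider_transparency: float  # 0..1: how openly the provider discloses its biases
    disclosed_incentive: float    # 0..1: transparency the user has traded away for reward

def trust_score(ctx: TrustContext,
                weights=(0.35, 0.25, 0.30, 0.10)) -> float:
    """Weighted blend of the factors; in a real system the weights would
    themselves adjust over time and circumstance."""
    w1, w2, w3, w4 = weights
    score = (w1 * ctx.user_bias_alignment
             + w2 * ctx.network_endorsement
             + w3 * ctx.provider_transparency
             - w4 * ctx.disclosed_incentive)  # rewards taken reduce trust
    return max(0.0, min(1.0, score))

# An agent might only act autonomously above some threshold,
# and fall back to asking the user below it:
ctx = TrustContext(0.9, 0.8, 0.7, 0.2)
print(trust_score(ctx) > 0.6)  # True → proceed without interrupting the user
```

Even this toy version surfaces the hard questions: who sets the weights, who audits the provider-transparency input, and how the user discovers that the threshold quietly moved.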
If I am right about how large an impact the intelligent agent industry will have on society in the decade ahead, it is incumbent upon interested parties to begin addressing these issues today in an open and collaborative forum.