In 2003, the philosopher Nick Bostrom proposed a thought experiment called the Paperclip Maximizer, a provocation designed to illustrate the risks of AI that is not aligned with human values. An AI is programmed to maximize the production of paperclips. It produces as many as possible, optimizing the procurement of every raw material needed to achieve its goal, over time converting the entire planet into paperclip manufacturing facilities, before identifying and eliminating the final obstacle standing in its way: humans.

Can AI be engineered to integrate fundamental human values into its decision-making, giving weight to our sense of fairness, of right and wrong? But what are these values, and who defines them for the world? Judeo-Christian or Islamic values? Eastern or Western? Those of G20 nations or of developing countries? The values of one generation or another? As AI optimizes trade-offs to achieve its task, how will it know which decisions are best for society? I believe marketers will soon begin