This second post discusses two issues raised by the audience during the Q&A with the excellent NACD panel on Board best practices for the digital disruption that the new economy will inexorably require for company survival and profitability, goals central to the mission of directors.
This is an age in which businesses are held accountable for their impact on society; recent examples include Facebook, Google, energy companies, and opiate manufacturers. The panel's task was to educate directors on their duty to engage with and advance digital disruption; otherwise, their companies will suffer greatly. Against this backdrop, two questions rose from the floor.
The first question was mine: in this context, should Boards today consider the proposition advanced by Professor Yuval Harari in his current best-selling book 21 Lessons for the 21st Century, which suggests that the convergence of AI, big data, and the digital revolution is anti-democratic and contributes to a growing wealth gap, as those who control AI and data pull ahead and form a super-wealthy, controlling elite?
The second question, from a member of the Bentley faculty (I believe) working in corporate governance, asked whether Boards today should be weighing personal privacy and other ethical considerations when undertaking strategic planning.
One director answered the first question, albeit only partially: she would not want to be on a Board where the company did not consider the impact of technology on the jobs of its employees.
One director answered the second question as well, to the effect that Boards have a duty to evaluate risk in their strategies (presumably suggesting that violations of privacy or other rights create company risk).
Now let me speculate, and I admit I come at the following from an arguably biased starting point (and indeed was one of the two people raising this type of issue).
First, I found it telling that, although there was much panel interchange on almost every other question, each of these two questions was answered, after a pause by the panel, by one person offering an answer to fill the void. No one else wanted to touch these issues, it seemed.
Second, after the second question, the next speaker opened his own question (admittedly more in tune with the prior discussion) with a phrase something like "getting back down to earth…." I got the sense that he was put off by broad speculation about the societal and ethical ramifications of the technological revolution the panel was explaining how to implement effectively; he needed nuts-and-bolts information to get on with the task.
If I am correct, and neither paranoid nor argumentative, this wonderful panel (and it was indeed great) embodies the very problem raised. We move forward with progress: driving business efficiency, better service to customers and consumers, higher productivity, and greater profits; I do not argue with these goals. But consideration of long-term societal and ethical costs is treated as the business of philosophers and social futurists, not of today's Boards.
I hope I am wrong. But if companies do not consider these issues now, strategically and to some degree in concert as part of a general awareness, then by the time technology, AI, big data, and the integration of society into the new technological order are complete, the insights of the philosophers and social futurists will be, by definition, too late.