Humanizing technology: How to apply a code of ethics

Data privacy conversations have finally moved out of corporate boardrooms and onto family dinner tables, following investigations into Facebook and Cambridge Analytica’s role in influencing the 2016 US presidential election as well as the long-anticipated implementation of the European Union’s General Data Protection Regulation (GDPR) in late May.

Ahead of this month’s launch of Boardroom Events’ Regional Roundtable Series, security leaders in the midmarket space highlighted the need for companies to hold critical ethical discussions about their data management and security practices.

While we might like to believe that technology is agnostic to gender, race, and sexuality, former legal CIO Chris Romano argues that the stereotypically white, male profile of the large majority of tech workers will have a profound effect on the development of artificial intelligence.

“There are voices in technology not only wondering aloud where the human factor is in AI, but cautioning against losing it,” Romano noted, pointing to a recent study in which researchers collected 59.2 million tweets with a high probability of containing African American slang, only to find that one language-identification tool classified the posts as Danish.

“Technology is not a panacea for discrimination, sexism, and racism, especially if, as managers, we’ve failed on the front end and only hired people who looked, and thought, and acted like us.”

Litha Ramirez is Director of Experience Design and Strategy at SPR, a digital consulting firm based in Chicago. Because algorithms are built on models of generalized representations of data, Ramirez said, concerns arise when those models inadvertently make assumptions that perpetuate stereotypes and limit access to opportunities.

“For instance, if you create a model to predict who is likely to be a successful CEO, you would probably base it on data points about past successful CEOs, which even today are predominantly male,” she explained. “Gender can be encoded into the algorithm and used to rate a potential candidate’s chance of success. This can become a pernicious feedback loop, where the predictive model favors male candidates over females, and impedes a female’s ability to rise to the CEO ranks.”
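
A minimal sketch of the dynamic Ramirez describes, assuming a toy dataset and scikit-learn; the features, coefficients, and data below are entirely hypothetical and only meant to show how a historical gender imbalance can be encoded into a predictive model and then reinforced by it.

```python
# Hypothetical illustration of the feedback loop Ramirez describes.
# Historical "successful CEO" data is mostly male, so a naive model
# learns gender as a predictor of success. All numbers are made up.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# Features: years of experience, a performance score, and gender (1 = male).
experience = rng.normal(15, 5, n)
performance = rng.normal(0, 1, n)
gender = rng.binomial(1, 0.5, n)

# Historical labels reflect past bias: being male raised the odds of
# reaching the CEO ranks, independent of performance.
logits = 0.3 * performance + 0.1 * (experience - 15) + 1.5 * gender - 2.0
became_ceo = rng.binomial(1, 1 / (1 + np.exp(-logits)))

X = np.column_stack([experience, performance, gender])
model = LogisticRegression().fit(X, became_ceo)

# Two candidates identical except for gender: the model rates the male
# candidate as far more likely to succeed, perpetuating the pattern.
female_candidate = [[15, 1.0, 0]]
male_candidate = [[15, 1.0, 1]]
print("female candidate:", model.predict_proba(female_candidate)[0, 1])
print("male candidate:  ", model.predict_proba(male_candidate)[0, 1])
```

If the model’s picks then feed the next generation of “successful CEO” data, the imbalance compounds, which is exactly the feedback loop she warns about.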

She suggested four ways companies can expand upon a code of ethics and help humanize rapidly advancing technology.

  1. Data Transparency
    First and foremost, there needs to be transparency about what data is being collected, how it is used, and the impact of that use down to the individual level. Systems should offer explicit opt-in choices, with terms and conditions that are easy to read and understand, written in plain English (or the user’s language of choice). Reject the convention of confounding users with pages of legalese. Individuals need to be able to easily and expeditiously appeal the outcomes of the data models.
  2. Test, Test and Test Again
    Companies should run simulations of their models to understand both the impact and its relative severity. For instance, with predictive policing algorithms, does a false positive send someone to jail or queue them for mental health services? The potential negative impacts can be serious (a sketch of this kind of check appears after the list).
  3. Regulations and Oversight
    The government has a role to play in setting up regulatory and auditing standards to ensure that everyone is playing by the same rules and that the common good is protected regardless of profit motives.
  4. Routine Audits
    Companies should routinely audit their models to ensure that outcomes are as expected, that they understand the impact correlations and correct for pernicious feedback loops, and that the models are adjusted to remain legal, fair, and grounded in fact. In the case of machine learning and artificial intelligence, they need to be able to trace the evolution of the system’s decision making, modify or halt it, and reverse its outcomes (the sketch below illustrates one such audit).
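
The kind of check items 2 and 4 call for might look something like the sketch below, assuming a simple binary classifier and made-up data; the group labels, tolerance threshold, and helper names are illustrative assumptions rather than any established auditing standard.

```python
# A hypothetical audit along the lines of suggestions 2 and 4: simulate a
# model's decisions and compare false positive rates across demographic
# groups. Data, groups, and the tolerance value are all made up.
import numpy as np

def false_positive_rate(y_true, y_pred):
    """Share of truly negative cases the model flagged as positive."""
    negatives = (y_true == 0)
    return np.mean(y_pred[negatives]) if negatives.any() else 0.0

def audit_by_group(y_true, y_pred, group, tolerance=0.05):
    """Flag groups whose false positive rate exceeds the overall rate
    by more than `tolerance` (an assumed threshold)."""
    overall = false_positive_rate(y_true, y_pred)
    report = {}
    for g in np.unique(group):
        mask = (group == g)
        fpr = false_positive_rate(y_true[mask], y_pred[mask])
        report[g] = {"fpr": fpr, "flagged": fpr - overall > tolerance}
    return overall, report

# Simulated outcomes and predictions for two groups.
rng = np.random.default_rng(1)
y_true = rng.binomial(1, 0.2, 10_000)
group = rng.choice(["A", "B"], 10_000)
# A biased model: more false positives for group B.
noise = np.where(group == "B", 0.25, 0.10)
y_pred = np.where(y_true == 1, 1, rng.binomial(1, noise))

overall, report = audit_by_group(y_true, y_pred, group)
print("overall FPR:", round(overall, 3))
for g, stats in report.items():
    print(g, round(stats["fpr"], 3), "flagged" if stats["flagged"] else "ok")
```

Run routinely, a report like this makes disparities visible early, so the model can be corrected before a feedback loop takes hold.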

She added that the recent controversy around targeted digital advertising, and the subsequent blocking of ads for predatory payday loans and for-profit educational institutions that target impoverished individuals, is unsettling.

“It is pretty simple,” she advised. “Don’t use algorithms to exploit the vulnerable or create pernicious feedback loops that keep those individuals trapped in a self-fulfilling cycle. And don’t use user experience patterns to trick or nudge people into choices that go against their own or a group’s best interest.”

