The latest examination of the problems and pitfalls of artificial intelligence.

Computer scientist Christian begins this technically rich but accessible discussion of AI with a very real problem: When programming an algorithm to teach a machine analogies and substitutions, researchers discovered that the phrase “man – doctor + woman” came back with the answer “nurse,” while “shopkeeper – man + woman” came back with “housewife.” An algorithm designed to examine and label photographs returned the caption “gorillas” when it depicted two African Americans. It happened that one of those men was a programmer himself, and he said, “It’s not even the algorithm at fault. It did exactly what it was designed to do.” In other words, the algorithm is returning human biases, just as algorithms do when they examine criminal records and produce machine-assisted sentencing recommendations that overwhelmingly give Whites lighter punishments than Blacks and Latinos, or when color calibration programs for TVs and movie screens are indexed to white skin. So how to teach machines to be reliable and bias-free? Christian considers models of human learning, such as those developed by Jean Piaget, whom Christian finds off on a couple of key assumptions but still a useful guide.
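The analogy queries quoted above come from vector arithmetic on word embeddings: each word is a point in a vector space, and an analogy is answered by adding and subtracting those points, then finding the nearest remaining word. The sketch below illustrates the mechanism with a tiny hand-built vocabulary; the vectors and the `analogy` helper are illustrative assumptions, not a trained embedding such as word2vec, which is where the biased associations the review describes actually come from.

```python
import numpy as np

# Toy embeddings (assumed values for illustration only). In a real system
# these vectors are learned from large text corpora, which is how human
# biases in the training data end up encoded in the geometry.
embeddings = {
    "man":    np.array([1.0, 0.0, 0.2]),
    "woman":  np.array([0.0, 1.0, 0.2]),
    "doctor": np.array([1.0, 0.0, 0.9]),
    "nurse":  np.array([0.0, 1.0, 0.9]),
}

def analogy(a, minus, plus, vocab=embeddings):
    """Return the word whose vector is closest (by cosine similarity)
    to vocab[a] - vocab[minus] + vocab[plus], excluding the query words."""
    target = vocab[a] - vocab[minus] + vocab[plus]
    best, best_sim = None, -np.inf
    for word, vec in vocab.items():
        if word in (a, minus, plus):
            continue  # standard practice: never answer with a query word
        sim = vec @ target / (np.linalg.norm(vec) * np.linalg.norm(target))
        if sim > best_sim:
            best, best_sim = word, sim
    return best

# "doctor - man + woman" lands nearest to "nurse" in this toy space,
# mirroring the biased completion the review reports.
print(analogy("doctor", "man", "woman"))
```

The point of the exercise is that nothing in the arithmetic is malicious: the nearest-neighbor step faithfully reports whatever associations the vectors contain, which is exactly the programmer's remark that the algorithm "did exactly what it was designed to do."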