How To Design A Machine That's Smarter Than You

Published at 2016-09-02 15:00:00

Researchers who study artificial intelligence say that design will play an outsized role—and we're not prepared.
The biggest challenge with AI may be designing it.
That's the implication of a study designed to last until 2116, called the "One Hundred Year Study on Artificial Intelligence." The Stanford-led project aims to report on the state of AI in our world every five years for the next century, as reported by a panel of two dozen experts, currently ranging from Julia Hirschberg, a pioneer of natural language processing, to Astro Teller, leader of Google's "moonshot" division.
The first report, published online yesterday, reads a bit like a half-drawn map: a mix of observations, questions, and even warnings. The consensus? First of all, "the panel found no cause for concern that AI is an imminent threat to humankind," which, phew. Second of all, the decisions we make over the next 15 years will shape our relationship with AI for centuries. And many of those decisions will concern the design of the interfaces and interactions that will establish our trust (or mistrust) in AI. Here are three of the biggest takeaways for designers who work with AI.

All of us will watch AI fail, over and over, in the next few decades.



Machines Need To Be Able To Explain Their Screwups

Here is a truth: All of us will watch AI fail, over and over, in the next few decades. Some of those failures will be small, and others may be very large. The Stanford panel points out that it's up to designers to create interfaces that explain to users why a product or machine screwed up.
We're already seeing this warning play out through self-driving cars. For example, this summer Tesla faced skepticism about its Autopilot feature, which allowed drivers to cede control of their vehicles to the software. After a crash involving the feature killed one driver, some critics argued that the feature had been introduced too quickly, and without enough information, resulting in many users who didn't understand why the system didn't work the way they expected it to. As Cliff Kuang recently wrote on Co.Design, "the Silicon Valley mind-set of just dropping beta tests upon an unsuspecting populace might be not only naive, but also counterproductive. After all, our first impressions always color our willingness to try again." Whether users are frustrated by an app or device that draws on AI, the report concludes, they'll be less likely to use it again. "Design strategies that enhance the ability of humans to understand AI systems and decisions (such as explicitly explaining those decisions), and to participate in their use, may help build trust and prevent drastic failures," the panel writes. So it's critical that engineers and designers create systems that communicate freely about how they work.
Machines Need To Be Friendly, But Not Nosey

At the same time, machines that are too friendly represent a hazard for humans. As the report observes, anthropomorphism is everywhere in technology these days. Chatbots. Devices that respond to you conversationally. Even robots that have "human" faces and expressions.
Human features have an amazing amount of power over us as users. This month, a study from roboticists at University College London compared how people reacted to two different robots. One was unemotional but competent. The other was extremely expressive, with an emotional face and voice, but was terrible at its job. It turned out that people interacting with both 'bots were extremely forgiving to the more emotional bot, which hung its head and apologized for its mistakes. They even lied to it about its performance, saying they didn't want to "hurt its feelings." When our devices increasingly sound and act like our peers, we're more likely to trust them with personal information, too. So, as the panel points out, it will be up to designers to modulate that relationship, deciding what constitutes a manipulative or over-eager interface versus a simply friendly one. "At a basic level lies the question," they write. "Will humans continue to relish the prospect of solitude in a world permeated by apparently social agents 'living' in our houses, cars, offices, hospital rooms, and phones?"
Machines Will Inherit Our Biases, Unless We Check Them

The report's most troubling warning is about our own human flaws: AI can be incredibly biased, often in ways its creators don't even understand. "This threatens to deepen existing social biases, and concentrate AI's benefits unequally among different subgroups of society," the report warns. That could range from voice recognition systems that can't understand people with accents, to credit approval software that's biased against certain neighborhoods or races.
One recent example of bias in AI, pointed out by Kate Crawford in an op-ed titled "Artificial Intelligence's White Guy Problem," is an AI system for predicting recidivism in prisoners. It was "twice as likely to mistakenly flag black defendants as being at a higher risk of committing future crimes. It was also twice as likely to incorrectly flag white defendants as low risk," Crawford writes.
AI could easily inherit the systematic racism, sexism, and ageism that plague our society today, and it'll be up to the creators of these AI systems to engineer out these biases. The best way to do that, ironically, is the same as developing any other product: "with careful design, testing, and deployment."

Source: fastcompany.com
