How to Stay Smart in a Smart World: Why Human Intelligence Still Beats Algorithms

How to Stay Smart in a Smart World: Why Human Intelligence Still Beats Algorithms

by Gerd Gigerenzer

Narrated by Joel Richards

Unabridged — 10 hours, 9 minutes

Audiobook (Digital)

$24.99

Overview

Doomsday prophets of technology predict that robots will take over the world, leaving humans behind in the dust. Tech industry boosters think replacing people with software might make the world a better place, while tech industry critics warn darkly about surveillance capitalism. Despite their differing views of the future, they all agree: machines will soon do everything better than humans. How to Stay Smart in a Smart World shows why that's not true, and tells us how we can stay in charge in a world populated by algorithms.



Machines powered by artificial intelligence are good at some things (playing chess), but not others (life-and-death decisions, or anything involving uncertainty). Gerd Gigerenzer explains why algorithms often fail at finding us romantic partners (love is not chess), why self-driving cars fall prey to the Russian Tank Fallacy, and how judges and police rely increasingly on nontransparent "black box" algorithms to predict whether a criminal defendant will reoffend or show up in court. He invokes Black Mirror, considers the privacy paradox (people want privacy, but give their data away), and explains that social media get us hooked by programming intermittent reinforcement in the form of the "like" button. We shouldn't trust smart technology unconditionally, Gigerenzer tells us, but we shouldn't fear it unthinkingly, either.

Editorial Reviews

Publishers Weekly

05/30/2022

Gigerenzer (Risk Savvy), director emeritus at the Max Planck Institute for Human Development, offers plausible reassurance for those who fear artificial intelligence is poised to take over human decision-making. Things that AI can do well, Gigerenzer explains, such as playing chess, occur in strict rules-based environments where there’s little or no chance of something unpredictable happening. The AI Watson’s vaunted Jeopardy! victory over human champions Ken Jennings and Brad Rutter, for example, was less impressive than it appears, Gigerenzer writes, as it was the result of an altered game in which certain kinds of questions were excluded because it was anticipated that the AI wouldn’t be able to answer them accurately. Gigerenzer also covers more pressing issues, among them self-driving cars that are unable to accurately assess dangers to pedestrians, tech and ads that are designed to demand attention and distract users, and the large-scale voluntary abandonment of privacy. It amounts to a solid case against “unconditional trust in complex algorithms,” arguing that “more computing power and bigger data” won’t bridge the gap between machine and mind, because most problems humans face involve “situations in which uncertainty abounds.” Anyone worried about the age of AI will sleep better after reading this intelligent account. (Aug.)

From the Publisher

"Anyone worried about the age of AI will sleep better after reading this intelligent account."
Publishers Weekly

“A seriously compelling, eye-opening, and well-researched investigation.”
Library Journal

“Persuasive.”
The Times UK 

“Gigerenzer deftly explains the limits and dangers of technology and AI.”
New Scientist

"Essential reading for anyone exposed to technology that shapes our behavior rather than meeting our needs. In other words, it is essential reading for all of us.”
Morning Star

Library Journal

07/01/2022

According to psychologist Gigerenzer (Calculated Risks: How To Know When Numbers Deceive You), with our technologically centered world increasingly driven by artificial intelligence (AI), it's important to understand how algorithms work within AI. Grasping what algorithms can do well and understanding their limitations is the key to staying in charge of our lives. Gigerenzer reminds readers that AI works best in a stable world situation (with little unpredictable human behavior). AI is good at playing chess, analyzing health data, and assisting the field of astronomy, but comes up short with dating apps, predictive policing software, and fully self-driving cars. After providing readers with numerous examples of myriad uses of algorithms in our daily life, the author turns his attention to exploring other tech minefields, such as our willingness to hand over personal data to companies like Google and Facebook, resulting in a now ubiquitous form of economy known as surveillance capitalism. VERDICT Gigerenzer explains why technology is so addictive and offers tips for fostering digital self-control. A seriously compelling, eye-opening, and well-researched investigation.—Ragan O'Malley

Product Details

BN ID: 2940175632454
Publisher: Tantor Audio
Publication date: 08/02/2022
Edition description: Unabridged