Humans, it turns out, do love to play games. At least, that's what Stanford University graduate student Zachary Pytel found in his tests. But while psychologists say this is a good thing, mechanical engineers argue that these experiments actually make machines ugly and breakable.

Pytel and other researchers compared the behavior of monkeys trained to respond automatically to triggers (completed tasks, said to elicit "loving smiles") with behaviors created by manual intervention. The monkeys responded better to auto-responses that were predictable, because they were less likely to feel the "social stress" that comes with reading another person's brain waves, Pytel and colleagues found. The same should hold in any computer simulation of a social system, the team argues, because research on human social interactions suggests that machines can't please their users in a meaningful way.

It's all in how you manipulate the data, Pytel's co-authors suggested in a paper they submitted to a journal last year.

Look, We Made the Robot Feel Like God

Pytel, along with doctoral student Atiq Malik, found that when robots behaved the same way humans do, they were no more effective or happy than robots humans had built to respond in an even more mechanized way.

What's wrong with that, people might ask? In their review of a 2015 study of dolls made to resemble a person who has died, graduate students Arie Teitelbaum and Michele Harran found that failure actually offered a pleasant experience for the doll, the doll's bereaved owners, and the study's participants.

As a follow-up to Pytel's paper, Teitelbaum and Harran expected to find that less successful robots would self-destruct as quickly as possible, just to throw off more carefully monitored users. Instead, they found that the most loved dolls kept on ticking. When owners didn't love them back, the outcome was a more satisfied computer but a sad human.

Cheating Is Really Cheating

In their 2016 book, The Boy Who Saved the World, authors William Gibson and Timothy Ferris write that an election is "fiscally and competitively costly and time-consuming, and our data suggest that Americans are not demonstrably smarter than they were in 1900." They point to digital, machine-designed campaign finance monitoring systems, software that makes outcome-predicting polls more efficient, and online advertising that encourages voters to engage with election content.

These voters are certainly not stupid, of course, and may even have kept up with the latest political developments. But Gibson and Ferris wonder whether the cost-saving, time-saving ways of robots and smart machines could be the antidote to the declining population of Smart Human Islanders in American politics.