This case study is based on: Ira Basen, “The Algorithm Method: Programming Our Lives Away” (The Globe and Mail, Toronto, Nov. 26, 2010).
I. Algorithms and Utility
These questions are based on the article and on utilitarian theory.
- What is an algorithm?
- What is the purpose of adopting one?
- How might algorithms be viewed as utilitarian?
- How might Mill’s discussion of the “doctrine of swine” objection be seen as presenting an algorithm of this sort?
- A “second-order decision strategy” is a plan for how to make decisions. For example, the decision to “flip a coin” about where to go to dinner is such a strategy. The decision to delegate future decisions to another agent (e.g., giving your car keys to someone else before you start drinking) is another. So is the decision to follow an algorithm. Why might utilitarianism recommend some sort of second-order strategy?
- Why does the rise of technologically sophisticated algorithms as described by Basen present a particularly tempting strategy?
- Are there any utilitarian disadvantages to the heavy use of algorithms?
- What is Clay Shirky’s concern (as noted in the article), and how might one address it?
II. Modeling Desire
Basen quotes Christopher Anderson of CUNY:
“I think it’s dangerous when you boil down what people want to a simple mathematical formula,” Mr. Anderson says. “The real danger of these algorithms is that they’re reducing the scope of what ‘want’ means. Want is complicated, and it’s more complicated than clicking on a link.”
Utilitarianism has to have some way to represent human desires.
- What does it mean to have a desire satisfied?
- Google’s algorithm (at the beginning, anyway) rated websites by how often they were referenced by other websites, so a site would rise in the rankings if it was frequently linked to. What value can this be said to serve?
- What disadvantage might Google’s strategy have?
- How do you model human desires?
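The link-counting idea described above can be sketched in a few lines. This is a toy illustration, not Google’s actual algorithm: the page names and links are invented, and it uses simple inbound-link counts rather than PageRank’s weighted, recursive scoring. It shows only the value being measured, namely, how often others reference a page.

```python
# Toy sketch of ranking pages by inbound links (not Google's real algorithm).
from collections import Counter

# Hypothetical web: each page maps to the pages it links to.
web = {
    "a.example": ["b.example", "c.example"],
    "b.example": ["c.example"],
    "c.example": ["a.example"],
    "d.example": ["c.example", "b.example"],
}

def rank_by_inbound_links(web):
    """Score each page by how many pages link to it, highest first."""
    counts = Counter(target for targets in web.values() for target in targets)
    # Pages nobody links to still appear in the ranking, with score 0.
    for page in web:
        counts.setdefault(page, 0)
    return sorted(counts.items(), key=lambda kv: kv[1], reverse=True)

print(rank_by_inbound_links(web))
# → [('c.example', 3), ('b.example', 2), ('a.example', 1), ('d.example', 0)]
```

Note what the metric rewards: being referenced, not being wanted. A page like "d.example" may satisfy someone’s desire perfectly yet rank last, which is one way to frame Anderson’s worry about reducing “want” to a formula.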
III. The Quantified Self
Watch the following short clip:
Anand Giridharadas, “A Scale that Tweets Your Weight? Meet the Quantified Self”
- What are two very good reasons to quantify yourself?
- What values does this effort at self-quantification damage?
Scott Peppett writes (citing the Giridharadas video):
“Your choice to quantify your self (for personal preference or profit) thus has deep implications if it necessitates my “choice” to quantify my self under the pressure of unraveling. What if I just wasn’t the sort of person who wanted to know all of this real-time data about myself, but we evolve an economy that requires such measurement? What if quantification is anathema to my aesthetic or psychological makeup; what if it conflicts with the internal architecture around which I have constructed my identity and way of knowing? Is “knowing thyself” at this level, and in this way (through these modalities), autonomy-enhancing or destroying, and for whom? What sorts of people — artists? academics? writers? — will be most denuded or excluded by such a metric-based world?”
Discuss.
IV. References
- Peppett, Scott. “The Quantified Self: Personal Choice and Privacy Problem?” Concurring Opinions (Nov. 16, 2010), at: http://www.concurringopinions.com/archives/2010/11/the-quantified-self-personal-choice-and-privacy-problem.html
- For second-order strategies, see Sunstein, Cass R. and Edna Ullmann-Margalit. “Second-Order Decisions.” Ethics 110 (Oct. 1999), 5-31.