Monday, March 26, 2018

A philosopher you recently discovered...

This week, faculty members write about a philosopher they recently discovered, and what they like about them.

One author who has come to my attention over and over again recently is Deborah Rhode, Ernest W. McFarland Professor of Law at Stanford University. She has written extensively on legal ethics, gender and the law, and other areas of law and ethics. I cited her articles critiquing the teaching of ethics in law schools in my article on service learning. I required students to read a chapter of her anthology, Ethics in Practice: Lawyers’ Roles, Responsibilities, and Regulation, in my professional ethics course. She co-edited, along with one of my former advisors, the hornbook on legal ethics (i.e., the textbook used to teach ethics in law schools). Most recently, I came across her book on discrimination on the basis of appearance, The Beauty Bias, as we were looking for guest speakers for this year's Nammour Symposium (we asked; she could not make it, but wished us all the best on this important topic). She is a lawyer by training, but students who are interested in legal ethics would do well to become familiar with her work (start with her article, Ethics in Practice, in the anthology mentioned above). 
Chong Choe-Smith

    I wanted to draw attention to a non-philosopher, the economist Peter Leeson, who has written what amounts to a fun introduction to rational choice theory, decision theory, and game theory. His recent book, WTF?!, is, as its subtitle suggests, an economic tour of the weird: some of the most outlandish practices in history were solutions people hit upon for pressing social problems of their time and place.
    Leeson pushes the rational actor model to its limits, showing how seemingly senseless or barbaric practices, like burning witches, selling wives, holding trials by combat or ordeal, and putting animals on trial, were (and in some cases, are) sensibly grounded in expected benefits, given the prevailing beliefs, values, and other constraints of the day.
    But are old-timey superstitions relevant to us, the enlightened, today? Read this short piece by Leeson on polygraph tests and think about how trial by the ordeal of walking on red-hot ploughshares could have provided a similar sorting mechanism.
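The sorting logic behind ordeals can be put in toy expected-payoff terms. The sketch below uses made-up numbers and is not Leeson's own model: if defendants believe the ordeal reveals guilt, the guilty expect to fail it and prefer to confess, while the innocent expect to pass and prefer the ordeal, so the bare choice of whether to undergo the ordeal sorts the two types.

```python
# Toy model of trial by ordeal as a sorting mechanism.
# The payoffs below are hypothetical numbers chosen for illustration.
PAYOFF_CONFESS = -10        # confessing / settling: known moderate punishment
PAYOFF_ORDEAL_FAIL = -100   # the ordeal finds you guilty: severe punishment
PAYOFF_ORDEAL_PASS = 0      # the ordeal clears you: go free

def expected_ordeal_payoff(prob_pass):
    """Expected payoff of undergoing the ordeal, given the defendant's
    believed probability of being cleared by it."""
    return prob_pass * PAYOFF_ORDEAL_PASS + (1 - prob_pass) * PAYOFF_ORDEAL_FAIL

def chooses_ordeal(prob_pass):
    """A defendant undergoes the ordeal only if its expected payoff
    beats the sure payoff of confessing."""
    return expected_ordeal_payoff(prob_pass) > PAYOFF_CONFESS

# A believer who is innocent expects to pass (probability near 1);
# a believer who is guilty expects to fail (probability near 0).
print(chooses_ordeal(0.95))  # True: the innocent believer opts for the ordeal
print(chooses_ordeal(0.05))  # False: the guilty believer confesses instead
```

On these assumptions the defendant's choice reveals the relevant information before the iron is ever heated, which is the separating logic the passage gestures at.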
Kyle Swan

    Anna Marmodoro is an Italian metaphysician teaching at Oxford. I first discovered her while researching Anaxagoras’ theory of ‘homeomerous seeds,’ which seemed to me a gunk theory very advanced for its time (460 BC). Marmodoro’s book on Anaxagoras, Everything in Everything, confirmed this for me. This led me to other work of hers, in which her powers account of causality leads her to a neo-Aristotelian ‘hylomorphic’ theory of objects.
    Since I had come to the conclusion that Galilean elementalism as an explanatory strategy, having produced the Scientific Revolution, has now run its course, much as the medieval Aristotelian synthesis ran its course in the 16th century, I was interested.
    Marmodoro’s hylomorphism is ‘holistic’ in that matter and form are not parts or constituents of the substance. If the substance is composite, such as an organism, its organs cease to have an independent existence and are instead ontologically subsumed by the whole. So substances like organisms are the fundamental reality, not the elements composing them.
    This constitutes an attractive solution to the metaphysical ‘Problem of Composition’: How are the elements in the Cap’n Crunch my grandson Matthew eats for breakfast related to Matthew? Are grandchildren entities emergent out of what they eat, thus rendering them either epiphenomenal or redundant? Or is ‘Matthew’ merely a heuristic concept, a way of thinking about stuff temporarily arranged a certain way? Either way, Matthew becomes a dubious or derivative entity.
    I’m a realist about grandchildren; Marmodoro’s theory is one way of making sense of that.

Everything in Everything (OUP, 2017)
“Aristotle’s Hylomorphism, Without Reconditioning,” Philosophical Inquiry 36: 5-22
Thomas Pyne

    David Hilbert claimed that mathematics was sloppy and should be made more rigorous by formalizing its theories in, say, predicate logic, and then showing both that a theory's truths are provable and that anything provable is true. Gödel's Incompleteness Theorem of 1931 upset Hilbert's program because it established that any consistent formal arithmetic of sufficient strength must have unprovable truths. If we were to add these unprovable truths as new axioms, the new arithmetic would have still other unprovable truths. In 1950, the Berkeley mathematician Raphael Robinson discovered the absolute minimum set of assumptions needed to carry out Gödel's proof. The elegant result is now called Robinson Arithmetic, which I am teaching this semester in Phil. 160.
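For reference, Robinson Arithmetic (usually called Q) is standardly presented with just seven axioms over the signature {0, S, +, ×}; notably, it omits the induction schema of Peano Arithmetic:

```latex
\begin{align*}
&\text{(Q1)} \quad Sx \neq 0\\
&\text{(Q2)} \quad Sx = Sy \rightarrow x = y\\
&\text{(Q3)} \quad x \neq 0 \rightarrow \exists y\, (x = Sy)\\
&\text{(Q4)} \quad x + 0 = x\\
&\text{(Q5)} \quad x + Sy = S(x + y)\\
&\text{(Q6)} \quad x \cdot 0 = 0\\
&\text{(Q7)} \quad x \cdot Sy = (x \cdot y) + x
\end{align*}
```

Despite lacking induction, Q is strong enough to represent every computable function, which is all that Gödel's proof requires.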
    Abraham Robinson, the Yale philosopher of mathematics who created nonstandard analysis, showed that any formal first-order theory of the real numbers has models with two unintended features: (1) infinite numbers bigger than any real number, and (2) Leibniz-like infinitesimal numbers. This is a surprising limitation on the ability to formalize any science that uses math.
    Abraham Robinson then built a calculus true to the spirit of Leibniz's idea that speed is an infinitesimal change in distance divided by an infinitesimal change in time. This calculus is easier to learn than the epsilon-delta kind taught at Sac State. But much as the metric system has been unable to displace the entrenched English system of units in the U.S., American universities resist wholesale change to Robinson's nonstandard calculus even while admitting it is the more intuitive calculus.
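A taste of the Leibniz-style computation (a standard textbook example of the nonstandard approach): to differentiate f(x) = x², let dx be a nonzero infinitesimal and take the standard part (st) of the difference quotient, with no epsilons or deltas in sight:

```latex
\frac{f(x+dx) - f(x)}{dx}
  = \frac{(x+dx)^2 - x^2}{dx}
  = \frac{2x\,dx + dx^2}{dx}
  = 2x + dx,
\qquad
f'(x) = \mathrm{st}(2x + dx) = 2x.
```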
Brad Dowden

    I’ve become acquainted with Tamar Gendler’s work fairly recently and I’m attracted to her notion of alief, which she introduces for the purpose of explaining belief-discordant behavior.
    Briefly, the question here is whether people can really believe something when their behavior clearly suggests otherwise. Can I really believe that airplanes are safer than cars when I am terrified of flying but not of driving? Can I really believe that men and women are equally intelligent when I routinely defer to the opinions of men? Can I really believe that turd-shaped chocolates are perfectly tasty when I am disgusted by the idea of eating one?
    Behaviorists say no. To believe something just is to act as if it is true. Behaviorists happily allow that people are often poor judges of what they believe. In cases like these we unwittingly report what we think we ought to believe rather than what we really do believe.
    But Gendler subscribes to a more traditional intellectualist view according to which what we believe is what we sincerely and reflectively endorse: what we will tell other people when we want them to know what is true.
    Gendler argues that if this is the case then we need another notion to explain belief-discordant behavior. What we believe is one thing; what we alieve is another. Alief is a concept that fits nicely within Kahneman’s concept of System 1 thinking: intuitive, associative, rapid, and effortless forms of inference that lie largely beyond our voluntary control.

Randolph Mayes

    Dallas Willard (1935-2013), a philosopher who taught at the University of Southern California, is someone I met only once, but read and listen to quite a bit. An idea he had, and I like, is that one’s personal philosophy will inevitably be embodied in one’s actual life. So one way for each of us to evaluate a personal philosophy is to see how it seems to work out for individuals who believe it and embody it. Willard’s own life is one example of this, a nice glimpse of which can be seen from a page his family and friends have developed to honor him, especially its “about” section and a “tribute” by his USC philosophy colleague Scott Soames.
Russell DiSilvestro

    I was recently re-introduced to philosopher Eleonore Stump through this talk. I found it so interesting I decided to purchase her book, Wandering in Darkness: Narrative and the Problem of Suffering. I came across Stump some 20 years ago when she presented at a lecture series on faith and the problem of evil. These lectures, interestingly, were the first of her attempts to articulate what was later to become the book I am reading now.
    I am only on page 27, and there is no guarantee that I will finish it. But I have really enjoyed what I have read so far. Stump writes that she prefers to frame the problem of evil as a problem of suffering, as it is suffering rather than evil or pain that can undermine the desires of our hearts.
Also, I really like this quote:

At its best, the style of philosophy practiced by analytic philosophy can be very good even at large and important problems… But left to itself, because it values intricate technically expert argument, the analytic approach has a tendency to focus more and more on less and less; and so, at its worst, it can become plodding, pedestrian, sterile, and inadequate in its task… (pp. 24-25)
    I hope I make it to the end of the book. I think I can learn a lot by reading it. In the meantime, I’ve got to get through these midterm papers, some of which seem to be undermining the desires of my heart.
Dorcas Chung

I would recommend Porphyry of Tyre who, like Pythagoras, was an advocate of vegetarianism on spiritual and ethical grounds; the two are perhaps the most famous vegetarians of classical antiquity. Porphyry wrote On Abstinence from Animal Food (Περὶ ἀποχῆς ἐμψύχων; De Abstinentia ab Esu Animalium), advocating against the consumption of animals, and he is cited with approval in vegetarian literature up to the present day.
Clovis Karam

Cambridge was home for some of the most towering geniuses of the 20th century: Ludwig Wittgenstein, Bertrand Russell, John Maynard Keynes, G.E. Moore. But one name gets far less attention than it deserves: Frank Ramsey.
    A genuine polymath, Ramsey accomplished more by his death at the tragically young age of 26 than most intellectuals do in lives three times as long. He began learning German at 18, and by 19 produced the first English translation of Wittgenstein's Tractatus. Shortly after turning 20 he traveled to Austria and in the space of two weeks convinced Wittgenstein that there were fundamental flaws in his argument, prompting Wittgenstein to later return to Cambridge to set things right.
    The year after that he produced a new branch of mathematics (today called "Ramsey Theory"), which concerns the conditions under which order must appear in large mathematical structures. In the next two years he wrote two papers in economics that transformed thinking on taxation and savings, founding a new branch of the discipline now known as 'optimal accumulation.'
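The flavor of Ramsey Theory comes through in its simplest instance, the "party problem": among any six people there must be either three mutual acquaintances or three mutual strangers, while five people are not enough to force this. A brute-force check over all two-colorings of the complete graphs K5 and K6 (a sketch for illustration, nothing like Ramsey's own general proof) confirms it:

```python
from itertools import combinations, product

def has_mono_triangle(n, coloring):
    """True if some triangle of K_n has all three edges the same color."""
    return any(
        coloring[(a, b)] == coloring[(a, c)] == coloring[(b, c)]
        for a, b, c in combinations(range(n), 3)
    )

def every_coloring_forced(n):
    """True iff every red/blue edge-coloring of K_n contains a
    monochromatic triangle."""
    edges = list(combinations(range(n), 2))
    return all(
        has_mono_triangle(n, dict(zip(edges, colors)))
        for colors in product((0, 1), repeat=len(edges))
    )

print(every_coloring_forced(5))  # False: K5 admits a coloring with no such triangle
print(every_coloring_forced(6))  # True: K6 does not, so R(3,3) = 6
```

The order is unavoidable: however the fifteen edges among six vertices are colored, a one-color triangle appears.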
    His philosophical work was no less influential. His theory of truth (known as the 'Redundancy Theory') dissolved several problems that had haunted philosophers since Plato. His analysis of theoretical vs. observable terms in philosophy of science inspired both Rudolf Carnap and David Lewis. His work on subjective probability laid the foundation for modern expected utility theory and decision theory.
    Try to imagine how all of these disciplines would have been transformed if Ramsey had not died shortly before his 27th birthday.
Garret Merriam

    I recently got to know the work of Diane Proudfoot. She is a professor of Philosophy at the University of Canterbury, in New Zealand. She works in several areas, including the history and philosophy of computer science, Turing, and philosophy of religion. I was looking for works in the philosophy of Artificial Intelligence and found her chapter “Software Immortals: Science or Faith?”, which is part of the book Singularity Hypothesis: A Scientific and Philosophical Assessment. We are witnessing a growing body of researchers talking about unsavory futures brought about by our accelerating technological progress. Some, like Oxford University philosopher Nick Bostrom, warn that human extinction is one of them. Futurist Ray Kurzweil predicts that digital enhancements will replace the messy mortal flesh in a few years. Since the robot-takes-control scenario has long been a favorite of science fiction, it is difficult to separate speculation from scientifically justified hypotheses. Proudfoot’s chapter was a refreshing and surprising read for me, a newcomer to the topic of digital minds and technological promises. She compares the promises of AI with those of religion. Both, she argues, are supernaturalist proposals based on faith. And techno-supernaturalism, as she calls it, does no better than the old religions, especially as a way of dealing with humans’ fear of death. Techno-supernaturalism, she writes, “can be seen as a new-and-improved therapy for death anxiety, based on AI and neuroscience rather than on revelation.”
Saray Ayala-López
