Practical Ethics Anecdote

I was told a story once by a philosopher I know, about a friend of theirs who was taking business ethics.

After the first day of the class, this business ethics student told their philosopher friend “today in class we learned the difference between right and wrong!”

“Oh, really? What is that?”

“What’s wrong,” they replied, “is what’s in your short-term interest, and what’s right is what’s in your long-term interest.”

I think about that story a lot.

Believing What You Publish

Alexandra Plakias has an interesting piece, titled “Publishing Without Belief”, that I think I agree with in spirit, but not in the details.

Plakias examines some cases where a paper is published that the author doesn’t believe, and defends against charges of impermissibility.  Notably, she says:

In looking for the norm that PWB violates, we might begin with the thought that publication is a subspecies of assertion, and is therefore subject to the norms of assertion. Williamson (1996, 2000) defends an account of knowledge as the norm of assertion; others have offered truth (Weiner 2005), justified belief and (the weakest of these) belief. But all of these are too strong to serve as norms of philosophical publishing.


Furthermore if it is ‘platitudinous’ that ‘asserting something is … claiming that it is true’ (Wright 1992: 23), the link between assertion and truth is stronger than the link between philosophical publishing and truth ought to be: it should not be a platitude that publication is a claim to truth.

The line of thinking Plakias rejects here is one that I am sympathetic to.  I think that we largely write papers in an assertoric mode, when we publish them we are taking steps to widely disseminate a set of our assertions, and I think that there is no difference in this sort of case between asserting “parthood is antisymmetric” and “it is true that parthood is antisymmetric”.  There are historical modes of publication that are not essentially assertoric (such as dialogues, meditations, etc.) but we are far less diverse in our genres and forms of writing than we once were, and so, most papers are best understood as a series of claims made by the author.

At this point it probably sounds like I am in strong disagreement with Plakias, but I don’t think that I am.  I think that having first order philosophical views is overrated, and I think people without such views can and should make contributions to the ongoing discussions in the field.  My position is that one should be asserting the “higher order” position to which they are actually committed.  For example, if one writes a paper defending expressivism about moral terms from the Frege-Geach problem, the paper should, I think, include sentences asserting things like “this response to the Frege-Geach problem overcomes the worry”.  I don’t think the person writing this paper should say “moral terms express non-cognitive attitudes, rather than beliefs” if that isn’t something they wish to assert.

Is this quibbling? I don’t think so.  We have a clear, dominant genre of philosophical writing: first-person prose exposition in an assertoric mode.  If someone writes a paper in which the sentence “externalists about epistemology cannot answer the generality problem” appears, and this isn’t part of a quote being attributed to someone else, or otherwise couched as not being asserted, I should be licensed to say “that author said that externalists about epistemology cannot answer the generality problem”.

So my stance is that you should believe what you publish, but that it is fine to publish “higher order” conclusions like x is worth investigating or y is not very promising as a response to z.

Graduate Dissertation Seminar / Structure and Reflections

This term, I have been teaching the Graduate Dissertation Seminar at UB for the first time.  It is a course designed by my colleague Neil Williams which aims to give students who are nearing the end of their coursework or who are at the ABD stage experience presenting their work, give them some feedback on the substance of their work, and—perhaps most importantly—give them feedback and instruction on their presentation skills.  While I tweaked some aspects of Neil’s course, most of the course structure is due directly to his design (my main tweak was to have APA-style presentations with commenters in the mix).  I am going to talk a bit about the structure of the class first, and then offer some reflections from this midway point of teaching it.

Course Structure:

In the class, we talk about a variety of issues that fall broadly under the heading of “professionalization”, and since a big part of being in the class is giving feedback to each other, I’ve been integrating discussions of our service roles in the profession as well.  Each student in the class has to do each of the following discrete tasks during the term, in addition to attending the seminar and being an active participant in the discussion sessions after other students’ presentations:

  • One 45 minute paper presentation (ideally on material related to their planned dissertation topic)
  • One 25 minute APA-style presentation
  • Serve as a commenter on another student’s presentation
  • Two times during the term: write a referee report on another student’s longer paper

The initial scheduling of all the presentations, commenting, and deadlines was a bit involved to sort out, but not all that painful.  The main things I had to ensure were that people’s various tasks were spaced out reasonably and that no student had all their tasks coming due at the same time.

For the timetables, the short paper must be sent to the commenter two weeks in advance of the week it will be presented, and the comments must be sent back one week in advance of the presentation.  This is more compressed than the timetables we usually face for conferences, but not unreasonable for this exercise in our class.  Students presenting long papers have to distribute them to the entire class the day before their presentation (in part to ensure that they are presenting a paper that exists in draft form, and not “winging it”), and referee reports are due to me one week after the paper has been distributed.  I don’t monitor the reports for accuracy of feedback or insightfulness of criticism. I review them to ensure that students have followed the correct form for a referee report and made a good-faith effort to write a report on the paper, and I monitor for (presumably unintentional) problems with tone.  I pass those reports along, quasi-anonymized, to the author of the paper, with instructions not only to take the feedback into account, but also to think about what the reports do that is more or less helpful to them as an author, so that when they are giving people feedback on work, they can craft their feedback in a way that is easier to take advantage of.

Each class meeting—after we address any logistics issues and make sure everyone is on the same page about upcoming deadlines—begins with the APA-style presentation, comments, and reply, followed by Q&A, and then feedback on the presentation. Then, we take a short break, and when we return, we have the longer “job talk” style presentation, with Q&A, and then feedback.  Students must use a handout (limited to one side of one page for the APA-style presentation, with an optional second side for listing cases, diagrams, or quotes, and limited to two sides of one page for the longer presentation).  The time limits are strictly enforced (students are cut off, possibly mid-sentence, when the allotted time is reached). All students are generally expected to participate in Q&A, whether or not the topic of the paper is one they work on, or have antecedent interest in or familiarity with. This doesn’t mean they have to have a question every time for every paper, but, in general, everyone needs to do their part to make sure that no presentation encounters crickets during the Q&A session. Typically no follow-up questions are allowed.

After we conclude the presentation and discussion, we transition into feedback (on the handout, the structure of the presentation, tone, pacing, etc.). I offer my feedback first, before opening things up to their peers to provide feedback.  The watchword for this feedback is to “try to be Hufflepuff”.  Which is to say, keep in mind that getting feedback on a presentation one has just given in front of one’s peers is daunting and nerve-wracking, and so, when you are criticizing someone in that situation, make sure you are speaking from a place of kindness and empathy.

Reflections on the Course So Far:

I am really happy with how the course has been going so far.  First off, I am learning a lot about what the students are working on, which is interesting in and of itself.  But more to the point of the course, I think getting students to think explicitly about some of these issues is really helpful for them.  I am often frustrated that the structure of graduate curricula is not simply reverse engineered from an enumeration of the skillsets our students will need to be deploying in their future employment.  For instance, if you built the graduate curriculum around the skills called for in the jobs we are preparing our students for, you would expect to see a much larger explicit focus on pedagogical training in graduate programs than we do in fact see on the whole.  You would expect to see at least some training in the skills required for doing a good job at the main sorts of service work we are expected to do (e.g. some explicit training in how to write a helpful referee report).  And you might expect to see some training in things like how to be effective at presenting your papers: what makes a handout a useful complement to your talk rather than a distraction from it, and so on.  So one thing I really like about this course, and am really happy with my department for adding it to our curriculum and with Neil for developing it the way he did, is that it just seems like the sort of thing that we ought to be doing for our students as part of their graduate educations.

Some more pointed thoughts:

  • The fact that we have nine students working on pretty disparate topics was prima facie worrisome, but turns out to be a boon.  It is really, really helpful to have an audience that includes people without much background on a topic, when you want to be giving someone feedback on how clearly they were able to cover the exposition of a debate or position. And while some presentation contexts will consist of mostly specialists (APA colloquia sessions and topical conferences tend to have audiences who are mostly already interested in the topic of the talk), lots of other contexts (job talks, regional conferences, grad conferences, etc.) often have pretty diverse audiences in terms of background and interests, and being able to pitch your talk to smart people who just aren’t familiar with the background of your talk is a really important skill.
  • Different sub-areas each seem to have their own set of hazards that people working in those areas need to be aware of.  For some areas, the material is very technical, and the risk is losing any non-technicians in the audience if you can’t ground your talk in something concrete pretty early on. For others, it is a distinctive set of jargon or vocabulary that needs to be unpacked for audiences unfamiliar with it. For some, it is alternative methodologies that need to be repackaged for audiences that don’t know how to approach them.  As noted, this class really helps draw out these things, since logic specialists are presenting to a room with historians, continental philosophers, aestheticians, ethicists, philosophers of science, etc., and vice versa.  The dissertation phase is often one in which students principally interact only with other specialists in their area, and they can easily lose track of how to present their ideas to people outside that sub-discipline.
  • Talking to students about tone seems like something that is really important, and not done frequently enough.  People have a tendency to form snap judgments about people based on the tone in which they, say, ask a question at a conference, and more or less the same question asked one way can prompt someone to think “oh, that jerk has such a high opinion of themselves, and they think they know everything, ugh” while asked a different way would simply prompt the thought, “that was an interesting question, maybe I’ll follow up with them about it after the talk.”  So, I’ve been pointing out to students when, e.g. their referee report reads like they are writing comments for an undergraduate in their class, rather than providing feedback to a presumed peer.
  • The students have been pretty good on the whole at being Hufflepuff about their feedback to each other.  They often highlight what they thought was particularly well done in the presentations, rather than simply focusing on criticisms. I could probably stand to take some lessons from them on this.
  • One thing I haven’t been doing, but would like to do, is focusing on how to ask good questions during Q&A.  Ideally, I’d like to come up with some guidelines for what sorts of questions to ask.  I know that I don’t like when questions seem to be about “point-scoring”, but this is something I need to think more about before I have really worked-out views on it.  I do think it is important for people to appreciate the value of constructive questions (which are going to be questions that share at least some key presuppositions of the person giving the talk), as opposed to questions that simply challenge the position of the paper.
  • I am giving people a lot of feedback on these things, but it’s not like I got a lot of explicit training about this stuff. Nor do I think I am some sort of savant who just magically knows all of it.  So, do I know what I am talking about? Maybe? A bit? I mean, I have been thinking about these issues a lot, and I do encourage the students to take my feedback with a grain of salt, and balance it against the feedback they are getting from other students.  Mostly, I think getting the students to think about these issues is valuable, and I hope that the advice I am giving is useful above and beyond that, but I don’t think I have some perfect insight into all of these issues.

Lastly, here are some of the things that have come up a bunch that I am trying to instill into my students in this class:

Cardinal Sins for Presentations in Graduate Dissertation Seminar (partial list):

  1. Running over your allotted time. I think this is disrespectful to the audience, as it typically requires cutting into the Q&A, which is sort of like telling the audience that you don’t care about getting their feedback on your work, you just wanted them to listen to you talk.  No one will be upset about a good talk that ends a few minutes early, but a great talk that goes long could easily leave you with angry audience members.
  2. Reading your paper (rather than presenting your paper).  Reading a paper out loud so that it is easy to follow and engaging for the audience is a difficult skill that very few people have.  Most of us are far, far better at talking extemporaneously (even if it feels more comfortable to us to read).  While I don’t have a blanket condemnation of this practice in the wild, I do forbid it for the students’ presentations in this class, and think presenting rather than reading will serve most of them much better overall.
  3. Undermining your commenter (e.g. by omitting or correcting things that are addressed in the comments). This is disrespectful to the commenter, and contrary to the norms of presentations, but a surprising number of people in the profession don’t realize this, and think that if the point made in the comments is a good one, they should *correct* the paper when they present it.  Of course, then the commenter looks silly talking about an issue in the paper that doesn’t exist.  This is easy to avoid if you are reading your paper, because then you can just read the version you gave to the commenter. But if you present the paper rather than reading, it means you have to make a note to present in a way that ensures your commenter’s comments will still make sense.
  4. Not having a handout (this might be somewhat controversial).  If your talk is longer than five minutes, then even people with very good memories will not be able to recall the beginning of your talk in sufficient detail when your talk is over.  Even if you are using a PowerPoint, it is not helpful to the audience member who wants to compare the claims made on slide 4 and slide 15, because they can only see slide 4 again once they have already been called on to ask their question.  So, I think one should always remember that your handout is not just for following along during the talk, but for helping the audience remember how the beginning and middle of the talk proceeded when they are thinking back on it.
  5. Cramming too much on your handout.  This distracts the audience, leads you to omit addressing things on the handout, and prevents the handout from doing a good job of conveying the relative importance of the things you are covering in your talk (which is signaled in part by which things were important enough to be included on the handout).

Two Pieces of Really General Advice:

  1. Begin your talk by stating the goals/objectives of the talk.  Lots of talks go off the rails simply because of mismatched understandings of what the speaker is up to.  The audience thinks the speaker is trying to show that a given view is untenable, when the speaker just means to be showing that one argument for the view is unpersuasive (but happens to also think the view is untenable); this confusion bleeds into the talk and the Q&A, and so, instead of focusing on the speaker’s interesting challenge to the argument for that view, everything gets bogged down in a discussion of some other argument for that view.  That can be avoided if the speaker opens by explicitly telling the audience where the goalposts are being set, so that the audience knows how to determine whether the speaker’s argument has been successful.
  2. Clearly demarcate your contributions. When you are entrenched in a debate, you are so familiar with which problems/solutions/etc. are “out there” in the literature, that you don’t always think to make explicit “this is the part that I came up with”, because your advisor or another specialist would spot it in an instant. But for any talk where your audience isn’t just specialists, it is really important that you signal to them which parts of your talk are you catching them up on a debate that was already happening, and which parts are your contributions to the debate.

An Open Letter to Neil deGrasse Tyson

Dear Dr. Tyson,

Like many philosophers, I was very disheartened by some of your recent remarks about the study of philosophy.  I don’t think your views about the worth of philosophy are especially unusual, but since you are one of the world’s foremost public intellectuals, and a committed champion of inquiry, I hope I can persuade you to rethink things a bit.  Because I don’t think valuing philosophical inquiry is at odds with valuing scientific inquiry.  Firstly, the whole idea of even treating them as distinct from each other is a fairly recent shift.  As an early modernist, I study a list of philosophers that has pretty striking overlap with lists of early modern chemists, physicists, and biologists.  That’s not to say I think we were wrong to start distinguishing between the two forms of inquiry; it is just to point out that figures like Newton and Leibniz were scientists, mathematicians, and philosophers.  I would like to think that you and I should be, in some broad sense, partners against a rising tide of anti-intellectualism.

There’s a passage from John Stuart Mill that I love.  In a work laying out his picture of the scientific method, and outlining the proper approach to inquiry, he starts with a discussion of language.  And he feels the need to explain to the reader why he would begin that way.  So he defends himself by saying this:

It is so much the established practice of writers on logic to commence their treatises by a few general observations (in most cases, it is true, rather meagre) on Terms and their varieties, that it will, perhaps, scarcely be required from me in merely following the common usage, to be as particular in assigning my reasons, as it is usually expected those who deviate from it.

The practice, indeed, is recommended by considerations far too obvious to require a formal justification. Logic is a portion of the Art of Thinking: Language is evidently, and by the admission of all philosophers, one of the principal instruments or helps of thought; and any imperfection in the instrument, or in the mode of employing it, is confessedly liable, still more than in almost any other art, to confuse and impede the process, and destroy all ground of confidence in the result. For a mind not previously versed in the meaning and right use of the various kinds of words, to attempt the study of methods of philosophizing, would be as if some one should attempt to become an astronomical observer, having never learned to adjust the focal distance of his optical instruments so as to see distinctly.

It is almost as if John Stuart Mill knew that I might one day try to defend the value of focused philosophical investigation of words to an astrophysicist.  Just as an astronomer can be led into error by failing to appreciate the way their telescope works, inquirers in general, who think and reason through the medium of language, can be led into error by failing to appreciate the way that language works.  Now, I am not saying that every philosophical question about the workings of language or the meaning of ‘meaning’ is going to clarify inquiry in the same way that understanding the theory of optics helps someone know what their telescope is actually telling them, but I hope you can appreciate why philosophers would think that there are cases where it can and does help clarify inquiry.

So that was my all-too-brief sketch of how philosophy can help with scientific inquiry.  Of course, the far easier case to make for the value of philosophical inquiry is on the value side, rather than the inquiry side of the equation.  You care about raising public consciousness about science.  You believe that the developments of modern science have been a tremendous boon to humankind, and you think that the inquiry itself enriches those who engage in it.  Here are some philosophical questions prompted by all that: What determines whether some activity improves or enriches you?  What makes something a boon to humankind?  Why should you (or anyone) care about one outcome over another?  Or put more generally: which things in the world are fundamentally valuable, and worthy of pursuit?

The study of ethics or morality—inquiry into the nature of value—is a core area of philosophy, and has been since its inception.  And while scientific discoveries can reveal to us things like, how to build bridges, the methods for transplanting organs, or the psychological mechanisms of human persuasion, a practicing scientist implicitly takes stands on the normative questions of which bridges are worth building, which patients ought to get the organs that are in short supply, or which means of persuasion are morally permissible to use when trying to convince people of important truths.  I think these questions are worth asking, and I’m sure you do too.  My point isn’t that philosophers have all the answers to these questions, and so you should go ask them.  Rather, my point is that we’ve been asking these questions for a long time, and might have some insights on how you should go about trying to answer them.

As I said above, I’d love to talk more with you about the value of philosophical inquiry.


Lewis Powell



The brief examples I offered in this letter only scratch the surface of the enormous range of topics and approaches that go on in philosophy.  I didn’t mean to be providing a snapshot of the discipline, but just to point out a couple of aspects of philosophy that seemed especially relevant to Dr. Tyson’s remarks.



Dr. Tyson has responded to this letter in the comments below, directing those of us interested in his views on philosophy to view this exchange.

Young Philosophers Talk Series

I recently gave two talks for the Young Philosophers Series at SUNY Fredonia (or, more accurately, the “Philosophers who have been credentialed in the last six years or are about to be credentialed” Series).  One talk is intended to be an introductory talk presupposing no background, the other is a research oriented talk.  Here are the talks I gave:

Intro Talk: “Why Look to the Past: Historical Philosophy and the Virtue of Being Wrong”

Research Talk: “Adam Smith on Sympathy for the Deceased”

Having Views is Overrated

In this post, I am going to advocate for the position that having (first-order) philosophical views is overrated.  I am going to take for granted that philosophical inquiry involves the pursuit of knowledge and understanding.

There is a model of inquiry, which I think I remember being articulated by Robert Stalnaker, where we start with a figurative sack full of all the possibilities there are, and proceed by trying to empty the sack down to the single possibility that is actually the case.  This model is often described in terms of “locating” oneself in the space of possibilities, and progress in inquiry, on this model, is understood in terms of culling one’s options or ruling out possibilities.

Thinking about inquiry this way tends to suggest that our focus should be on the set of currently live options, and our strategy should be to seek out direct reasons to further narrow that set.  If the set includes rival possibilities A through Z, we should seek out a reason to exclude A from the set, or a reason to exclude B from the set, etc. and once we exclude A from the set, we are done with A: our focus will be on possibilities B through Z.  After all, if we were proceeding correctly in our attempts to winnow down possibilities, A is false.  Why waste our time thinking further about it?
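The “live option” focus just described can be rendered as a toy sketch (this is entirely my own illustration, not anything from Stalnaker): inquiry as set subtraction over a sack of rival possibilities.

```python
# Toy model of the "sack of possibilities" picture of inquiry.
# Progress consists in culling possibilities from the live-option set;
# once a possibility is ruled out, the live-option focus ignores it.

live_options = {"A", "B", "C", "D"}

def rule_out(options: set, possibility: str) -> set:
    """Return the live-option set with one possibility excluded."""
    return options - {possibility}

live_options = rule_out(live_options, "A")
print(sorted(live_options))  # ['B', 'C', 'D'] -- "A" is no longer attended to
```

The sketch makes the suggestion vivid: on this picture, nothing in the remaining procedure ever looks back at “A” again, which is exactly the tendency questioned below.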

I think that everything I am about to say is, strictly speaking, compatible with this model of inquiry itself.  That is, I don’t think that what I say will require us to jettison this model.  But, what I am going to say is not compatible with the “live option” focus that I just outlined as “suggested” by the model.  This is because I think the best chances for solid philosophical progress will involve rigorous focus on possibilities that are outside the live option set, as well as those within it.

One of the nicest side-effects of specializing in historical philosophy is the requirement that one spend a great deal of one’s time seeking out charitable and/or sympathetic readings of views that one would ordinarily be tempted to dismiss out of hand.  For me, this side-effect has so far been manifested most with respect to interpreting David Hume’s account of cognition and John Locke’s philosophy of language.  These are a pair of views that are routinely dismissed in contemporary discussions of those topics.  So, it is natural to ask why this would be a worthwhile endeavor.

Now, I want to be clear: it is definitely not that I harbor the secret hope that, for example, Hume’s theory of mind is actually correct.  This isn’t about thinking we removed possibilities from the sack prematurely.  Rather, it is that, more valuable than simply knowing that an option is to be culled is acquiring an understanding of why it is to be culled.  What is it that we need from a theory of X, that the culled theory can’t give us?  What features of the culled theory are preventing it from meeting that need?  What amendments or revisions to the theory would be sufficient to meet that need?

Now, David Hume’s theory of cognition is notorious for the sparsity of its resources.  The ambitions of Hume’s theory outstrip those resources to such a degree that it is entirely reasonable, prior to detailed investigation, to judge that Hume’s theory will obviously fall far short of its aims.  But, far from being a reason to dismiss Hume’s theory, this mismatch between ambitions and resources is precisely what makes Hume’s theory a promising target for inquiry. Or, at least, that is part of what I am hoping to argue in this post.

One way in which parsimony can be a theoretical virtue is this: simpler systems/theories are easier to investigate.  For example, if one proposes that all the variety of chemical interactions we observe can be explained by appeal to a single feature of the chemicals in question, which can take any of 3 values, it is far easier for us to exhaust the possibilities covered by such a theory than one which invokes 100 features, each of which can take any of 40 values.  Now antecedently, the former of those theories is far less likely to be right, but it is also far easier to learn about.  We will encounter problems with that theory far sooner, and we will be able to design experiments that could show the theory to be wrong more easily.
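To make the combinatorics in the chemical example concrete, here is a toy calculation (my own illustration) of how many distinct hypotheses each imagined theory generates, assuming each assignment of values to features counts as one hypothesis:

```python
# Toy illustration of why simpler theories are easier to investigate:
# counting the hypotheses each imagined chemical theory generates.

simple_space = 3 ** 1       # one feature, three possible values
complex_space = 40 ** 100   # 100 features, 40 possible values each

print(simple_space)             # 3 hypotheses to exhaust
print(len(str(complex_space)))  # 161 -- the count has 161 digits
```

Three hypotheses can be exhausted almost immediately; a number with 161 digits cannot, which is the sense in which the sparse theory is easier to learn from even if it is antecedently less likely to be right.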

For example, Hume’s theory (from the Treatise) treats perceptual experience and cognition as mental occurrences that are fundamentally of the same kind, differing only with respect to their degree of “force and vivacity”.  This is a serious constraint on how Hume can attempt to account for the differing features of perceptual experience and cognition.  Which means we should be able to identify potential challenges for his view more easily, and to find out which of the aspects of cognition we’d like a theory to account for Hume’s account is unable to deliver.

Learning which challenges Hume’s account can’t meet will help us identify the minimal set of resources needed to render his account adequate.  Now, I’ve been writing as though the conditions of adequacy are somehow a given, or something we can take as granted.  But the point remains even if we defer settling that question as well.  What we learn is simply conditional:  If a theory of X needs to account for Y, then it needs to have such-and-such resources. Or, put another way: we learn claims about the consistency or inconsistency of different theses, rather than first order claims about topic X.

I think that learning such things can ground our interest in investigating theories, independent of our attitude towards their truth.  Investigation of Hume’s theory is instrumentally valuable for our ultimate goal of determining the correct theory of cognition.  It can teach us about the range of theories of cognition in relation to particular tasks that one might set out for a theory of cognition.

It is, of course, incidental that Hume is a historical figure.  Contemporary theories that one regards as likely to be false can play this same role.  I have never found cognitivism about intention to be an appealing view, but it has a very nice relationship between its ambitions (to explain the norms of practical rationality) and its resources (to draw only on the norms of theoretical rationality).  To me, this means that we should expect to learn a great deal from investigating the view, irrespective of our attitude towards its truth.  In fact, the more skeptical one is about the view, the more one should expect investigations of it to be informative about what work is really being done by the postulation of intention as a distinctively practical attitude.

Perhaps a briefer way to make my point is this:  One need not have first-order views to discover/produce philosophical positions worth investigating. Neither does one need to have first-order views in order to evaluate the success of those philosophical positions relative to specific aims.  For much of what needs to be done in the course of philosophical inquiry, then, one has no need for first-order views.

There is an interesting question about whether we can do as good a job defending views that we don’t accept, but I will leave that for another post (to lay my cards on the table, though, I actually suspect that having first-order views is, if anything, a hindrance to our capacity for sympathetic interpretation of the alternative positions).

“Hello”, “Ouch”, and “Snow is white”

I like the following three sentences as illustrating the sorts of different things we ideally want a complete philosophy of language to capture:

1) “Snow is white”

2) “Ouch!”

3) “Hello”

Sentence (1) is the sort of sentence that has received the most attention from philosophers.  To get a satisfactory account of that sentence, we’ll want something that not only assigns the sentence truth conditions, but which explains why the sentence has the truth conditions that it does.  Sentences (2) and (3) don’t have truth conditions, and yet, they are unquestionably part of language.  Even though sentence (2) and sentence (1) differ in terms of whether or not they have truth conditions, they seem to share another feature: they can be used insincerely.  A complete account of language would ideally assign something we could call “sincerity conditions” to those sentences, and also explain why the sentences have the sincerity conditions they do.

What I find appealing in the family of approaches to linguistic theorizing labeled “expressivism” is that, in assigning mental states to sentences, they make something like sincerity conditions the foundation of their philosophy of language.  This is an interesting project because the resulting account is structured in a way that enables us to explain the truth-conditions of sentence (1) in terms of its sincerity conditions.  Sentence (1) expresses the belief that snow is white (and thus, to sincerely utter sentence (1), you need to believe that snow is white), and the belief in question has truth conditions.  Those truth-conditions are inherited by the sentence.  Sentence (2) expresses that one is in pain (and thus, to sincerely utter sentence (2), you need to be in pain).  Since being in pain does not have truth-conditions, there are no truth-conditions for the sentence to inherit.  We now have a nicely packaged explanation of truth-conditions in terms of sincerity conditions, and the account unifies sentences like (1) with sentences like (2) under the umbrella of a single approach to semantic theorizing.

Of course, sentence (3) does not seem amenable to this treatment, as it cannot be used insincerely.  It is true that if I say “hello” in a happy tone, I might lead you to believe I am pleased to see you, even in a case where I am not pleased to see you, and thus mislead you, but that does not make my “hello” insincere.  Why can’t I be insincere in saying “hello”?  It seems the reason is that there is no distinctive mental state expressed by “hello”.  While “ouch” is a display of pain, and “Snow is white” is a display of belief, “hello” is not a display.  Rather, in saying “hello”, I greet you.  So this throws a wrench in the view that served to unify (1) and (2) so well, because that view can’t address (3).

So, if we want to preserve the nice explanation we had of sentences (1) and (2), we would need to embed them within a further account, which captures “hello” as well as “ouch” and “snow is white”.  Such an account (in order to preserve the structure of the explanation already on the table), would need to assign something to sentences such that, from those assignments, we could reconstruct the sincerity conditions of (1) and (2).

Now, the obvious account of what goes on with (3) is that it is used to greet someone.  So a natural theory is that competent speakers know that “hello” is used to greet someone.  Can we embed the expressivist account of “ouch” into this framework?  We’d have to say that competent speakers know that “ouch” is used to display pain.  And for sentence (1), we’d need to say that competent speakers know that (1) is used to display the belief that snow is white.  But note that this is only a natural account of (1) if we are trying to preserve the account of (1)’s truth conditions we liked before.  If we were simply to ask what speakers know about (1), it is far more natural to say they know that (1) is used to claim that snow is white.  So, even though we technically can reconstruct the earlier account within this “knowledge of uses” approach, it is unclear that there is a good theoretical basis for doing so.

I am left with a sort of uncertainty about how well we can informatively unify our treatments of (1), (2), and (3).  One might say, “ok, but ‘hello’ is sort of an outlier. Why not just treat things like that as sui generis linguistic behaviors?”  This is where sentences like “Go to the store” and “Is Thomas at home?” come in.  If we extend our picture in the way necessary to account for “hello”, we don’t need to assign individual mental states to questions and commands.  And this is good because it is hard to see what mental state would be a sincerity condition for commands and questions (though it is not impossible to make the case that such sentences have sincerity conditions).*

That is about enough rambling on this topic for now.  I will probably have some follow-up posts soon.

*For what it is worth, I am inclined to think that commands cannot be insincere, and lean that way for questions, but am less confident about the latter (cases that spring to mind as insincere commands seem, to me, more accurately described as “reluctant commands”.)

Lack of Posting

Apologies for my lack of recent posting.  The end of this past semester was keeping me busy, especially as I have been trying to wrap things up in the Detroit area, in anticipation of my move to Buffalo at the end of this month.  I will get back to my Monday Mill Blogging soon (maybe even tomorrow), and will get back to posting at The Mod Squad as well.

What to Keep in Mind about Refereeing Work

This post is motivated by some of my concerns relating to suggestions about unblinding the referees (as discussed here).  While I agree that there are problems with the current state of the peer review system, I tend to disagree strongly with some proposed resolutions.

During my last year as a graduate student at USC, I was managing editor of the Pacific Philosophical Quarterly.  I’ve submitted my own work to journals, and, in addition, I’ve served as a referee for a few different journals.

I learned a number of things from being “behind the scenes” at PPQ.  For instance, I learned how much of a manuscript’s progress through the review system depends on factors extrinsic to the manuscript.  If there are two equally appropriate people to ask to referee a given paper, who differ on their threshold for “reject” vs. “revise and resubmit”, that could make the difference for what happens to your paper.  It isn’t personal. I didn’t have a scorecard for how “easy” different referees were. It was just the luck of the draw.

I also learned that some of the biggest issues for the peer review system come from the fact that it is treated like a volunteer gig.

Why are so many journals so slow about getting decisions on papers?  There are several possible bottleneck points for journals, but the two biggest, in my experience, are: 1) getting potential referees to agree to review a paper, and 2) getting referees who have agreed to review a paper to meet deadlines.

While getting bad comments is frustrating, and can often make a rejection feel particularly unfair, I honestly think that such issues would be vastly less important if all manuscripts were issued decisions promptly (sometimes I joke that it would be fine for most journals to simply provide an efficient distribution of injustice).

So, why do we get such bottlenecks in securing referees and in getting feedback from referees?  Here, I think the answer is blindingly obvious: It is hard to get referees, and hard to get them to prioritize refereeing work, because the profession does not treat such work as valuable.  Combine this with the fact that we all have more work to do than we have time to do it, and that refereeing a paper can often be more tedious than it is exciting, and it should not be a mystery why it is hard to get quality refereeing done in a timely manner.

You know how I can tell the profession doesn’t treat refereeing work as valuable?  There are no real incentives to do such work.  Neither do we entice people to do it with rewards, nor do we punish people for failure to do it.  Since one can only referee as much as one is asked to referee, I’d be inclined to favor positive incentives for people who do referee, rather than penalties for people who don’t, but the basic point is just this: If refereeing work really is an invaluable service to the profession, why aren’t we putting any professional value on its performance?

I’d be curious to know whether any university treats this sort of service to the profession as a serious factor in tenure or promotion. Where by “serious” I mean something more than listing it in the official description of tenure factors.  What I mean is: Does it genuinely count in your favor for T&P to be a good professional citizen? Does it genuinely count against you if you are not?

Maybe we can’t unilaterally change the way university administrations value things for Tenure and Promotion.  I don’t want to throw in the towel on that yet, but it isn’t the only option.

Journals (as an extension of the Publishing Houses) are also dropping the ball on this.  What does someone get for putting in the time and effort to do good review work?  Basically: pride in a job well done.  I know that some publishers compensate people who referee book manuscripts.  This seems like good practice, since refereeing book manuscripts is at least an order of magnitude more of a burden than refereeing a journal article.  But, and this is the important thing: refereeing journal articles is still a burden!

Why on earth are we running things in a way where journal editors are put in the position of sending messages that might as well say this:

Dear Very Busy Academic,

I have a bit of time-consuming work that I need someone to do, and I think you have the right knowledge and skill-set to perform it.  While I know we are all very busy with teaching, researching, mentoring students (and if we have any time or energy left over, perhaps some of it should be reserved for a life outside of work), I am hopeful that you recognize how the peer review journal process is held together with duct-tape and dreams, and will voluntarily pitch in to help out.  In thanks when you finish, I will even send you a form e-mail that says “thanks” (what more gratitude could you hope to receive?)

If you don’t have the time, can you recommend a few other people who are more easily motivated to provide valuable services for no compensation?



PS: If you do a good job, you will no doubt become the first person I think of for any similar requests in the future. Yay!


Monday Mill Blogging (#009)

I’ve missed a couple of Mondays, but today: Monday Mill Blogging is back.

Today’s post is the second that will cover book 1, chapter 2, section 5.

§ 5.  Connotative and Non-Connotative Names

Let’s just start with a quote:

Proper names are not connotative: they denote the individuals who are called by them; but they do not indicate or imply any attributes as belonging to those individuals. When we name a child by the name Paul, or a dog by the name Caesar, these names are simply marks used to enable those individuals to be made subjects of discourse. It may be said, indeed, that we must have had some reason for giving them those names, rather than any others; and this is true; but the name, once given, is independent of the reason.  A man may have been named John, because that was the name of his father; a town may have been named Dartmouth, because it is situated at the mouth of the Dart. But it is no part of the signification of the word John, that the father of the person so called bore the same name; nor even of the word Dartmouth, to be situated at the mouth of the Dart.  If sand should choke up the mouth of the river, or an earthquake change its course, and remove it to a distance from the town, the name of the town would not necessarily be changed.  That fact, therefore, can form no part of the signification of the word; for otherwise, when the fact confessedly ceased to be true, no one would any longer think of applying the name. Proper names are attached to the objects themselves, and are not dependent on the continuance of any attribute of the object. (p. 33)

Charitably understood, the structure of this passage is this: Mill asserts a view about proper names, considers a possible objection—the objection that the reason for giving one name rather than another imbues the name with that reason as additional significance beyond its denotation—and gives reasons for dismissing that objection (an uncharitable understanding would be one that requires us to reconstruct a compelling argument in favor of Mill’s view of proper names from his response to this objection).

If we distinguish between the reason for assigning a name, and the reason a name applies to an individual, we can frame the point this way:  Mill’s position is that no attribute makes its way into the application conditions for a name like “John” or “Dartmouth”.  The objection raises a worry based on the fact that there needs to be some reason behind the assignment of names, and Mill’s reply is to argue that, even granting some reason for the assignment of the name, it seems clear that the attributes which ground the assignment do not establish themselves as conditions of application.

One might be tempted to analogize this to Kripke’s distinction between reference-fixing descriptivism and meaning-giving descriptivism, but I think that might be a bit too quick.  To be sure, I can see why it might be thought a parallel, but it would be hasty to suggest that this is the best way of understanding Mill’s position.

In the next paragraph, Mill mentions the terms “God” (in the mouth of a monotheist) and “The Sun” as instances of connotative terms that might incidentally be individual, but are linguistically general.  Mill points out that we can imagine a situation in which there are many suns, and that “the majority of mankind have believed, and still believe, that there are many gods” (p. 33).  Mill wants to set these aside, as he thinks they are general names which (in some sense) merely happen to name only one entity.  This is introduced to distinguish them from “real instances of individual connotative names”.  His examples include: “The only son of John Stiles”, “the first emperor of Rome”, “the father of Socrates”, “the author of the Iliad”, and “the murderer of Henri Quatre”.  Now, for some of these, color me puzzled about why they are getting a different treatment from “The Sun” or “God”.  For others, it is much easier to see why they linguistically require the uniqueness of the entity they name (in a way above and beyond that required by “the Sun”).

Mill explains that while it is possible that multiple people jointly authored the Iliad, the presence of the word “the” renders the name individual:

For though it is conceivable that more persons than one might have participated in the authorship of the Iliad, or in the murder of Henri Quatre, the employment of the article the implies that, in fact, this was not the case.  What is here done by the word the, is done in other cases by context: thus, “Caesar’s army” is an individual name, if it appears from the context that the army meant is that which Caesar commanded in a particular battle. The still more general expressions “The Roman army,” or “the Christian army,” may be individualized in a similar manner. (p. 34)

This treatment of incomplete descriptions is especially interesting, as it illustrates a sensitivity on Mill’s part to the importance of context.  The story appears to be that there are many different armies to which the name “Roman army” applies; however, the use of the term “the”, in conjunction with contextual factors, determines which of those specific armies the phrase operates as an individual name of on a given occasion of use.

Mill next relates part of the story of Ali Baba and the Forty Thieves:

If, like the robber in the Arabian Nights, we make a mark with chalk on a house to enable us to know it again, the mark has a purpose, but it has not properly any meaning.  The chalk does not declare anything about the house; it does not mean, This is such a person’s house, or This is a house which contains booty. The object of making the mark is merely distinction. I say to myself, All these houses are so nearly alike that if I lose sight of them, I shall not again be able to distinguish that which I am now looking at, from any of the others; I must therefore contrive to make the appearance of this one house unlike that of the others, that I may hereafter know when I see the mark—not indeed any attribute of the house—but simply that it is the same house which I am now looking at.  Morgiana chalked all the other houses in a similar manner, and defeated the scheme: how? simply by obliterating the difference of appearance between that house and the others. The chalk was still there, but it no longer served the purpose of a distinctive mark.

When we impose a proper name, we perform an operation in some degree analogous to what the robber intended in chalking the house. We put a mark, not indeed upon the object itself, but, so to speak, upon the idea of the object. A proper name is but an unmeaning mark which we connect in our minds with the idea of the object, in order that whenever the mark meets our eyes or occurs to our thoughts, we may think of that individual object. Not being attached to the thing itself, it does not, like the chalk, enable us to distinguish the object when we see it: but it enables us to distinguish it when it is spoken of, either in the records of our own experience, or in the discourse of others; to know that what we find asserted in any proposition of which it is the subject, is asserted of the individual thing with which we were previously acquainted. (p. 35)

I think I want to agree with Mill that the chalk mark on the house “does not declare anything about the house.”  It is true that one could devise a language of chalk symbols, in which different chalk marks were used to indicate different qualities.  But note that in such a language, the chalk symbols would be functioning like predicates (with their physical locations determining the subject of the proposition).  I want to stress something crucial about Mill’s use of the analogy here: if Mill had not so steadfastly insisted that names signify objects rather than ideas, this doctrine of mere denotation would be harder to make sense of.  Note that Mill thinks the term is “connect[ed] in our minds with the idea of the object”.  Since our idea of the object likely includes a variety of attributes we take the object to have, the proponent of the view that terms signify ideas (e.g. Locke) would have no reason to suggest that the name lacks meaning.  It might well be that the meaning is not robustly public (as my idea of Dartmouth may not be the same as your idea of Dartmouth), but the term would signify a somewhat detailed idea.  Because Mill is committed to cashing out the relationship between the term and the object, and because no particular attribution of quality to Dartmouth is inherent in my calling Dartmouth “Dartmouth”, Mill can set aside the various qualities built into my idea of Dartmouth as linguistically irrelevant.

I might have more to say about this analogy at a later time, but for now, I am going to pause again, and return next Monday (hopefully) to continue working my way through 1.2.5.

Next time on Monday Mill Blogging: §5, “Connotative and Non-Connotative Names” (continued)