2.33 Promises and Imperatives
Category: Ethics
Keywords: motive, imperatives, motives, deontic, acts, promise, promising, actions, act, action, performed, perform, acting, promises, imperative
Number of Articles: 307
Percentage of Total: 1%
Rank: 49th
Weighted Number of Articles: 412.6
Percentage of Total: 1.3%
Rank: 24th
Mean Publication Year: 1967.1
Weighted Mean Publication Year: 1968.9
Median Publication Year: 1967
Modal Publication Year: 1966
Topic with Most Overlap: Ordinary Language (0.0706)
Topic this Overlaps Most With: Intention (0.0516)
Topic with Least Overlap: Races and DNA (0.00037)
Topic this Overlaps Least With: Vagueness (0.00045)
Comments
This, like a few other ethics topics, is a little strange. There is enough unity in ethics that the model had a somewhat hard time finding natural joints to carve at.
Looking at the graph over time, this looks like a very temporally specific topic. The peak in the early-to-mid-1960s, followed by a dramatic fall, looks like a topic that simply burned out. But look more closely, and one sees that it doesn’t fall anywhere near zero. And several of the characteristic articles are from recent years.
What’s happened here, I think, is that the model has stumbled onto a somewhat disjunctive topic. This topic makes more sense if thought of as one part deontic logic and one part promises. The deontic logic part has a really sharp peak; there isn’t much work on it either before or after that peak around 1966 and 1967. This is just about the last topic to become fashionable before the boom in the late 1960s, when lots of things became fashionable and stayed so more or less ever since. The promises part, by contrast, is not nearly as uneven, and most of the recent work is on promises. I suspect that in the long run that work will end up mattering more to the citations than the deontic logic work, but it’s hard for new articles to have huge citation counts.
Did it make sense for the model to throw together deontic logic and promises? Not really; it had to make some arbitrary choices, and it made this one. We’re about to get a run of topics where the model seems to have settled for disjunctive topics. In this one, at least, it picked two disjuncts from the same subdiscipline. We won’t always be so lucky.