Tag Archives: knowledge

Invincible ignorance

“Invincible ignorance” refers to a state of ignorance that cannot be overcome because the individual has no way of accessing or understanding the necessary information. This concept is often discussed in moral and ethical contexts, particularly in philosophy and theology.

In these contexts, invincible ignorance is the lack of knowledge that is literally impossible for a person to obtain. This could be due to various factors such as cultural, geographical, or temporal barriers. For example, someone living in a remote part of the world without access to certain information cannot be blamed for not knowing it.

In moral theology, especially within the Catholic Church, the concept of invincible ignorance plays a significant role. It is believed that if a person is invincibly ignorant of the moral wrongness of an act, then their culpability for that act is diminished or even nullified. This is because moral responsibility is often linked to the knowledge and intent behind an action.

However, it’s important to distinguish invincible ignorance from “vincible ignorance”: ignorance that could be overcome but is not, owing to the individual’s lack of effort or wilful avoidance of the truth. In moral discussions, vincible ignorance does not typically absolve an individual from responsibility in the way invincible ignorance might.


Filed under Logical fallacies

On Gettier Problems

by Tim Harding

Gettier problems or cases are named in honor of the American philosopher Edmund Gettier, who published them in 1963. They function as challenges to the philosophical tradition of defining knowledge as justified true belief. The problems are actual or possible situations in which someone has a belief that is both true and well supported by evidence, yet which fails to be knowledge (Hetherington 2017: 1).

The traditional ‘justified true belief’ (JTB) account of knowledge comprises three conditions: S knows P if and only if (i) P is true, (ii) S believes that P is true, and (iii) S is justified in believing that P is true. In his discussion of this account, Gettier (1963: 192) begins by noting two points. His first point is that it is possible for a person to be justified in believing a proposition which is in fact false (for which he later gives examples). His second point is that if a person is justified in believing any proposition P, and P entails another proposition Q, and the person accepts that Q is deduced from P, then the person is justified in believing Q.
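Schematically, the JTB analysis and Gettier’s second point (a closure principle for justification) can be written as follows. This is my own shorthand sketch, not notation that Gettier himself uses:

```latex
% JTB: S knows P iff P is true, S believes P, and S is justified in believing P
K_S(P) \;\leftrightarrow\; P \,\wedge\, B_S(P) \,\wedge\, J_S(P)

% Closure: if S is justified in believing P, P entails Q, and S accepts
% the deduction of Q from P, then S is justified in believing Q
\bigl( J_S(P) \,\wedge\, (P \rightarrow Q) \,\wedge\, B_S(P \rightarrow Q) \bigr)
\;\rightarrow\; J_S(Q)
```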

Gettier (1963: 192-193) provides two counterexamples to show that it is possible to meet these three JTB conditions and yet not know P. I think that his second counterexample demonstrates both of his opening points better than his first. The proposition (f) ‘Jones owns a Ford’ entails the disjunctive proposition (h) ‘Either Jones owns a Ford or Brown is in Barcelona’. In accordance with Gettier’s first opening point, Smith is justified in believing (f) even if it is false, because Smith did not know that Jones was lying about his ownership of the Ford. In accordance with Gettier’s second opening point, if Smith is justified in believing (f), he is justified in believing (h). So if (f) is false, (h) could still be true by chance if, unbeknown to Smith, Brown just happens to be in Barcelona. Smith was thus justified in believing (h), yet he did not know (h). Yet proposition (h) meets each of the three JTB conditions. So I think that this counterexample shows that both of Gettier’s opening points are plausible.
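The steps of this counterexample can be laid out as a short derivation (again a sketch in the shorthand above, with f = ‘Jones owns a Ford’, b = ‘Brown is in Barcelona’, and h = f ∨ b):

```latex
\begin{align*}
&J_S(f)                    && \text{Smith justifiably believes Jones owns a Ford}\\
&f \rightarrow (f \vee b)  && \text{disjunction introduction, accepted by Smith}\\
&J_S(f \vee b)             && \text{by closure of justification}\\
&\neg f,\; b               && \text{Jones lied, but Brown is in Barcelona}\\
&f \vee b                  && \text{so (h) is true, by luck}
\end{align*}
```

All three JTB conditions thereby hold for (h), and yet Smith does not know (h).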

Zagzebski (1994: 207) notes that Gettier problems arise ‘when it is only by chance that a justified true belief is true’, as in the case of Brown happening to be in Barcelona in the Gettier counterexample discussed above. She argues that ‘since justification does not guarantee truth, it is possible for there to be a break in the connection between justification and truth, but for that connection to be regained by chance’ (Zagzebski 1994: 207).  Gettier’s counterexample created a problem for ‘justified true belief’ because an accident of bad luck (Jones lying about owning a Ford) was cancelled out by an accident of good luck (Brown happening to be in Barcelona), thus preserving both the truth of the disjunction (h) ‘Either Jones owns a Ford or Brown is in Barcelona’ and Smith’s justification for believing the truth of (h).

I think this break in the connection between justification and truth is what Zagzebski (1994: 209) means when she later refers to the concept of knowledge closely connecting the justification and the truth component of a given belief, but permitting some degree of independence between them. In a later essay (1999: 101), Zagzebski explains that ‘Gettier problems arise for any definition in which knowledge is true belief plus something else that is closely connected with the truth but does not entail it’. She argues that all that is necessary is that there be a small gap, or degree of independence, between the truth and justification components of knowledge (Zagzebski 1999: 101), as shown in Gettier’s abovementioned counterexample. It follows that Gettier problems can be avoided if there is no degree of independence at all between the truth and the justification of a belief (Zagzebski 1994: 211).

Zagzebski (1994: 209-210) describes a general rule for generating Gettier cases. As long as there is the small degree of independence between truth and justification referred to above, we can construct Gettier cases by the following procedure. We start with a case of justified false belief, where the falsity of the belief is due to some element of luck (such as Jones lying about owning a Ford). We then amend the case by adding another element of luck (such as Brown happening to be in Barcelona) which makes the belief (in this case a disjunction) true after all. So the ‘belief’ that Zagzebski is referring to here is any justified false belief where the falsity is a matter of chance.

References

Gettier, E. (1963) ‘Is Justified True Belief Knowledge?’ in Sosa, E., Kim, J., Fantl, J. and McGrath, M. (eds) Epistemology: An Anthology, 2nd edition. Carlton: Blackwell, 192-193.

Hetherington, S. (2017) ‘Gettier Problems’, The Internet Encyclopedia of Philosophy, ISSN 2161-0002, http://www.iep.utm.edu/gettier/, accessed 29 October 2017.

Zagzebski, L. (1994) ‘The Inescapability of Gettier Problems’ in Sosa, E., Kim, J., Fantl, J. and McGrath, M. (eds) Epistemology: An Anthology, 2nd edition. Carlton: Blackwell, 207-212.

Zagzebski, L. (1999) ‘What is Knowledge?’ in Greco, J. and Sosa, E. (eds) The Blackwell Guide to Epistemology. Carlton: Blackwell, 92-116.


If you find the information on this blog useful, you might like to consider supporting us.



Filed under Reblogs

Book review: The Death of Expertise

The Conversation

A new book expresses concern that the ‘average American’ has base knowledge so low that it is now plummeting to ‘aggressively wrong’.
shutterstock

Rod Lamberts, Australian National University

I have to start this review with a confession: I wanted to like this book from the moment I read the title. And I did. Tom Nichols’ The Death of Expertise: The Campaign Against Established Knowledge and Why it Matters is a motivating – if at times slightly depressing – read.

In the author’s words, his goal is to examine:

… the relationship between experts and citizens in a democracy, why that relationship is collapsing, and what all of us, citizens and experts, might do about it.

This resonates strongly with what I see playing out around the world almost every day – from the appalling state of energy politics in Australia, to the frankly bizarre condition of public debate on just about anything in the US and the UK.

Nichols’ focus is on the US, but the parallels with similar nations are myriad. He expresses a deep concern that “the average American” has base knowledge so low it has crashed through the floor of “uninformed”, passed “misinformed” on the way down, and is now plummeting to “aggressively wrong”. And this is playing out against a backdrop in which people don’t just believe “dumb things”, but actively resist any new information that might threaten these beliefs.

He doesn’t claim this situation is new, per se – just that it seems to be accelerating, and proliferating, at eye-watering speed.

Intimately entwined with this, Nichols mourns the decay of our ability to have constructive, positive public debate. He reminds us that we are increasingly in a world where disagreement is seen as a personal insult. A world where argument means conflict rather than debate, and ad hominem is the rule rather than the exception.

Again, this is not necessarily a new issue – but it is certainly a growing one.

Oxford University Press

The book covers a broad and interconnected range of topics related to its key subject matter. It considers the contrast between experts and citizens, and highlights how the antagonism between these roles has been both caused and exacerbated by the exhausting and often insult-laden nature of what passes for public conversations.

Nichols also reflects on changes in the mediating influence of journalism on the relationship between experts and “citizens”. He reminds us of the ubiquity of Google and its role in reinforcing the conflation of information, knowledge and experience.

His chapter on the contribution of higher education to the ailing relationship between experts and citizens particularly appeals to me as an academic. Two of his points here exemplify academia’s complicity in diminishing this relationship.

Nichols outlines his concern about the movement to treat students as clients, and the consequent over-reliance on the efficacy and relevance of student assessment of their professors. While not against “limited assessment”, he believes:

Evaluating teachers creates a habit of mind in which the layperson becomes accustomed to judging the expert, despite being in an obvious position of having inferior knowledge of the subject material.

Nichols also asserts this student-as-customer approach to universities is accompanied by an implicit, and also explicit, nurturing of the idea that:

Emotion is an unassailable defence against expertise, a moat of anger and resentment in which reason and knowledge quickly drown. And when students learn that emotion trumps everything else, it is a lesson they will take with them for the rest of their lives.

The pervasive attacks on experts as “elitists” in US public discourse receive little sympathy in this book (nor should they). Nichols sees these assaults as rooted not so much in ignorance as in:

… unfounded arrogance, the outrage of an increasingly narcissistic culture that cannot endure even the slightest hint of inequality of any kind.

Linked to this, he sees a confusion in the minds of many between basic notions of democracy in general, and the relationship between expertise and democracy in particular.

Democracy is, Nichols reminds us, “a condition of political equality”: one person, one vote, all of us equal in the eyes of the law. But in the US at least, he feels people:

… now think of democracy as a state of actual equality, in which every opinion is as good as any other on almost any subject under the sun. Feelings are more important than facts: if people think vaccines are harmful … then it is “undemocratic” and “elitist” to contradict them.

The danger, as he puts it, is that a temptation exists in democratic societies to become caught up in “resentful insistence on equality”, which can turn into “oppressive ignorance” if left unchecked. I find it hard to argue with him.

Nichols acknowledges that his arguments expose him to the very real danger of looking like yet another pontificating academic, bemoaning the dumbing down of society. It’s a practice common among many in academia, and one that is often code for our real complaint: that people won’t just respect our authority.

There are certainly places where a superficial reader would be tempted to accuse him of this. But to them I suggest taking more time to consider more closely the contexts in which he presents his arguments.

This book does not simply point the finger at “society” or “citizens”: there is plenty of critique of, and advice for, experts. Among many suggestions, Nichols offers four explicit recommendations.

  • The first is that experts should strive to be more humble.
  • Second, be ecumenical – and by this Nichols means experts should vary their information sources, especially where politics is concerned, and not fall into the same echo chamber that many others inhabit.
  • Three, be less cynical. Here he counsels against assuming people are intentionally lying, misleading or wilfully trying to cause harm with assertions and claims that clearly go against solid evidence.
  • Finally, he cautions us all to be more discriminating – to check sources scrupulously for veracity and for political motivations.

In essence, this last point admonishes experts to mindfully counteract the potent lure of confirmation bias that plagues us all.

It would be very easy for critics to cherry-pick elements of this book and present them out of context, to see Nichols as motivated by a desire to feather his own nest and reinforce his professional standing: in short, to accuse him of being an elitist. Sadly, this would be a prime example of exactly what he is decrying.

To these people, I say: read the whole book first. If it makes you uncomfortable, or even angry, consider why.

Have a conversation about it and formulate a coherent argument to refute the positions with which you disagree. Try to resist the urge to dismiss it out of hand or attack the author himself.

I fear, though, that as is common with a treatise like this, the people who might most benefit are the least likely to read it. And if they do, they will take umbrage at the minutiae, and then dismiss or attack it.

Unfortunately we haven’t worked out how to change that. But to those so inclined, reading this book should have you nodding along, comforted at least that you are not alone in your concern that the role of expertise is in peril.

Rod Lamberts, Deputy Director, Australian National Centre for Public Awareness of Science, Australian National University

This article was originally published on The Conversation. (Reblogged by permission). Read the original article.


Filed under Reblogs

Facts are not always more important than opinions: here’s why

The Conversation

The message over the doorway to London’s Kirkaldy Testing Museum. But don’t be too quick to believe the facts and dismiss the opinions. Flickr/Kevo Thomson, CC BY-NC-ND

Peter Ellerton, The University of Queensland

Which is more important, a fact or an opinion on any given subject? It might be tempting to say the fact. But not so fast…

Lately, we find ourselves lamenting the post-truth world, in which facts seem no more important than opinions, and sometimes less so.

We also tend to see this as a recent devaluation of knowledge. But this is a phenomenon with a long history.

As the science fiction writer Isaac Asimov wrote in 1980:

Anti-intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that “my ignorance is just as good as your knowledge”.

The view that opinions can be more important than facts need not mean the same thing as the devaluing of knowledge. It’s always been the case that in certain situations opinions have been more important than facts, and this is a good thing. Let me explain.

Not all facts are true

To call something a fact is, presumably, to make a claim that it is true. This isn’t a problem for many things, although defending such a claim can be harder than you think.

What we think are facts – that is, those things we think are true – can end up being wrong despite our most honest commitment to genuine inquiry.

For example, is red wine good or bad for you? And was there a dinosaur called the brontosaurus or not? The Harvard researcher Samuel Arbesman points out these and other examples of how facts change in his book The Half-Life of Facts.

It’s not only that facts can change that is a problem. While we might be happy to consider it a fact that Earth is spherical, we would be wrong to do so because it’s actually a bit pear-shaped. Thinking it a sphere, however, is very different from thinking it to be flat.

Asimov expressed this beautifully in his essay The Relativity of Wrong. For Asimov, the person who thinks Earth is a sphere is wrong, and so is the person who thinks the Earth is flat. But the person who thinks that they are equally wrong is more wrong than both.

Geometrical hair-splitting aside, calling something a fact is therefore not a proclamation of infallibility. It is usually used to represent the best knowledge we have at any given time.

It’s also not the knockout blow we might hope for in an argument. Saying something is a fact by itself does nothing to convince someone who doesn’t agree with you. Unaccompanied by any warrant for belief, it is not a technique of persuasion. Proof by volume and repetition – repeatedly yelling “but it’s a fact!” – simply doesn’t work. Or at least it shouldn’t.

Matters of fact and opinion

Then again, calling something an opinion need not mean an escape to the fairyland of wishful thinking. This too is not a knockout attack in an argument. If we think of an opinion as one person’s view on a subject, then many opinions can be solid.

For example, it’s my opinion that science gives us a powerful narrative to help understand our place in the Universe, at least as much as any religious perspective does. It’s not an empirical fact that science does so, but it works for me.

But we can be much clearer in our meaning if we separate things into matters of fact and matters of opinion.

Matters of fact are confined to empirical claims, such as what the boiling point of a substance is, whether lead is denser than water, or whether the planet is warming.

Matters of opinion are non-empirical claims, and include questions of value and of personal preference such as whether it’s ok to eat animals, and whether vanilla ice cream is better than chocolate. Ethics is an exemplar of a system in which matters of fact cannot by themselves decide courses of action.

Matters of opinion can be informed by matters of fact (for example, finding out that animals can suffer may influence whether I choose to eat them), but ultimately they are not answered by matters of fact (why is it relevant if they can suffer?).

Backing up the facts and opinions

Opinions are not just pale shadows of facts; they are judgements and conclusions. They can be the result of careful and sophisticated deliberation in areas for which empirical investigation is inadequate or ill-suited.

While it’s nice to think of the world so neatly divided into matters of fact and matters of opinion, it’s not always so clinical in its precision. For example, it is a fact that I prefer vanilla ice cream over chocolate. In other words, it is apparently a matter of fact that I am having a subjective experience.

But we can heal that potential rift by further restricting matters of fact to those things that can be verified by others.

While it’s true that my ice cream preference could be experimentally indicated by observing my behaviour and interviewing me, it cannot be independently verified by others beyond doubt. I could be faking it.

But we can all agree in principle on whether the atmosphere contains more nitrogen or carbon dioxide because we can share the methodology of inquiry that gives us the answer. We can also agree on matters of value if the case for a particular view is rationally persuasive.

Facts and opinions need not be positioned in opposition to each other, as they have complementary functions in our decision-making. In a rational framework, they are equally useful. But that’s just my opinion – it’s not a fact.

Peter Ellerton, Lecturer in Critical Thinking, The University of Queensland

This article was originally published on The Conversation. (Reblogged by permission). Read the original article.


Filed under Reblogs

Vice Chancellor Barney Glover says universities must stand up for facts and the truth – ‘if we don’t, who will?’

The Conversation

Intellectual inquiry and expertise are under sustained attack, says Barney Glover.
Mick Tsikas/AAP

Barney Glover, Western Sydney University

This is an edited extract from a speech made by Vice-Chancellor Barney Glover at the National Press Club on 1 March 2017.


We live in challenging times. Ours is an era in which evidence, intellectual inquiry and expertise are under sustained attack.

The phrases “post truth” and “alternative facts” have slipped into common use. Agendas have displaced analysis in much of our public debate. And we are all the poorer for it.

I want to deliver a passionate defence of the value of expertise and evidence. I will mount a case for facts as they are grounded in evidence, not as fluid points of convenience employed to cover or distort a proposition.

My plea to you all is this: let’s not deride experts, nor the value of expertise. Because in an era where extremists and polemicists seek to claim more and more of the public square, our need for unbiased, well-researched information has seldom been greater.

We must remind ourselves of how human progress has ever been forged. In this, academics and journalists have common cause. For how are we to fulfill our respective roles in a democracy if we don’t defend the indispensable role of evidence in decision-making?

Hostility towards evidence and expertise

In Australia and around the world, we’ve seen the emergence of a creeping cynicism – even outright hostility – towards evidence and expertise.

We saw this sentiment in the post-Brexit declaration by British Conservative MP Michael Gove that “the people of this country have had enough of experts.”

And yet – as we strive to cure cancer; save lives from preventable disease; navigate disruption; lift living standards; overcome prejudice, and prevent catastrophic climate change – expertise has never been more important.

The turn that public debate has taken is a challenge to universities. As institutions for the public good, we exist to push the frontiers of knowledge. We enhance human understanding through methodical, collaborative, sustained and robust inquiry.

That doesn’t discount the wisdom of the layperson. And it doesn’t mean universities have all the answers. Far from it. But we are unequivocally the best places to posit the questions.

We are places structurally, intellectually, ethically and intrinsically premised on confronting society’s most complex and confounding problems. We are at the vanguard of specialist knowledge. And we are relentless in its pursuit. We have to be. Because – like the challenges we as institutions immerse ourselves in – the pace of change is unrelenting.

In universities, questioning is continuous, and answers are always provisional. The intensive specialisation, in-depth inquiry and measured analysis universities undertake are not carried out in service of some ulterior motive or finite agenda.

In the conduct of research, the finish line is very rarely, if ever, reached. There’s always more to learn, more to discover. The core objectives universities pursue can never be about any agenda other than the truth. There is no other, nor greater, reward. So let’s not disparage expertise, or the critically important role of evidence and intellectual inquiry.

Instead, let’s try to understand its value to our country and its people. And, indeed, to the world.

Universities perform an essential role in society. We must stand up for evidence. Stand up for facts. Stand up for the truth. Because if we don’t, who will?

Universities’ role in the economy

Disruption is drastically refashioning the economy. It is reshaping the way we work, and reimagining the way we engage with each other in our local communities and globally.

In this constantly transforming environment – where major structural shifts in the economy can profoundly dislocate large segments of society – our universities perform a pivotal role.

Universities help us make the very best of disruption, ensuring we are able to “ride the wave”. And they are the institutions best equipped to buffer us against the fallout. This is particularly important in regions that have relied for decades on large-scale blue-collar industries.

Think Geelong in regional Victoria and Mackay in central Queensland. Look to Elizabeth in the northern suburbs of Adelaide. Wollongong and Newcastle in New South Wales. And Launceston in Tasmania. Onetime manufacturing strongholds in carmaking, steel, timber and sugar.

These communities have been wrenched economically, socially and at the personal level by automation, offshoring and rationalisation. For places like these, universities can be a lifeline.

Internationally, the evidence is in. Former financier Antoine van Agtmael and journalist Fred Bakker examine this very scenario in their recent book, “The Smartest Places on Earth”.

They uncover a transformative pattern in more than 45 formerly struggling regional US and European economies; places they describe as “rustbelts” turned “brainbelts”.

Akron, Ohio is one of the most remarkable examples they cite. This midwestern city had four tyre companies disappear practically overnight. The then president of the University of Akron, Luis Proenza, reached out to those affected, rallying them to collaborate and encouraging them to transform.

Van Agtmael tells the story of what happened next. “What stayed in Akron”, he observes, “was the world class polymer research that has given us things like contact lenses that change colour if you have diabetes, tyres that can drive under all kinds of road conditions and hundreds more inventions.”

Akron, he continues, “now [has] 1,000 little polymer companies that have more people working for them than the four old tyre companies.”

This kind of transformation, at Akron and beyond, Van Agtmael remarks, is “university centric.”

“Each of these rustbelts becoming brain belts”, he concludes, “always have universities.” In places like those he describes, and many others around the world, universities and their graduates are leading vital processes of renewal within economies experiencing upheaval.

You may be surprised by the extent that this is happening in Australia, too.

Four-in-five startup founders are uni graduates

University graduates key to boosting startup economy. From http://www.shutterstock.com

Over the past decade, the startup economy has become part of Australia’s strategy for economic diversification and growth. Yet what has not been widely understood is the extent to which universities and their graduates are responsible for that growth.

Now, for the first time, Universities Australia and the survey group Startup Muster have taken a closer look at the data.

“Startup Smarts: universities and the startup economy”, confirms that universities and their graduates are the driving force in Australia’s startup economy.

It tells us that four-in-five startup founders in this country are university graduates. Many startups, too, have been nurtured into existence by a university incubator, accelerator, mentoring scheme or entrepreneurship course.

There are more than one hundred of these programs dispersed widely across the country, with many on regional campuses.

They provide support, physical space and direct access to the latest research. They help to grow great Australian ideas into great Australian businesses.

This report confirms just how important the constant evolution, renewal and refining of course offerings at universities is.

We need to ensure that our programs equip our students and graduates for an uncertain future.

By the time today’s kindergarten students finish high school and are considering university study, startups will have created over half-a-million new jobs across the country. And this new sector of the economy – a sector indivisible from our universities – raised $568 million in 2016; 73% more than the previous year.

By the very nature of the reach of our universities, the benefits are not confined to our cities. We play a vital role in helping regional Australians and farmers stake their claim in the startup economy too. The idea of the “silicon paddock” – using technology to take farm-based businesses to the markets of the world – is no longer a concept. It’s a reality.

Technology enables our regional entrepreneurs to stay in our regions; building and running businesses, investing locally without the need for long commutes or city relocations. And this, too, is very important; making sure nobody is left behind.

Extending knowledge beyond uni gates

Comprehending and overcoming the complex problems the world confronts, in my view, requires we defend the role of expertise and intellectual inquiry. That doesn’t mean universities are the last word on knowledge. To a large extent, it means rethinking the way knowledge is conveyed beyond university gates.

If universities don’t turn their minds to this issue, others will. And their motivations may not always be altruistic.

Take research, for instance. When the facts of a particular field of inquiry are under attack, the natural reaction among researchers might be to tighten up their retort and hone the theoretical armoury.

It is right to be rigorous and methodical in research. But in the broader communication of our research – in the public dialogue beyond “the lab” – I think universities have to guard against retreating to overly technical language that, perhaps inadvertently, sidelines all but a limited group of specialists.

I don’t suggest that research can’t benefit or even be improved via a researcher’s consciousness of a particular, often very specific audience. Yet researchers who allow this consciousness to dominate the development of their work risk undermining their ability to tread new ground and challenge existing frontiers of knowledge.

Only by crossing borders can we come to something new. How many researchers’ discoveries have arisen from a subversion of discipline, practice or establishment? Virtually all, I would suggest.

Breaking down structural boundaries

Crossing borders also means we push other structural boundaries. Within universities, distinct discipline paradigms exist for good reason. They bring focus and in-depth intellectual lineage to a particular field.

But, increasingly, the complex problems we set out to solve don’t abide by the same boundaries. These questions demand expertise from many disciplines, working together and approaching the subject matter from different angles.

That is why universities are constantly refining their research and teaching programs and, increasingly, diffusing the borders that kept many of them separate. This is good for universities. It is good for the country. And it is good for our students, many of whom find their way into public service or politics.

These graduates bring a greater understanding of all facets of the complex questions they confront throughout their working lives.

Interdisciplinarity is, I think, a powerful antidote against ideological intransigence and prejudice. Australian universities – particularly in their research – have a growing track-record in this regard.

Many of our very best research institutes are characterised by a fusion of disciplines where, for example, sociologists, political scientists, spatial geographers, and economists collaborate on a common research objective.

The work that emerges from this research is almost always compelling because it is multi-faceted. It extends itself beyond its constituent research community.

Cross-disciplinarity has also expanded at the teaching level of our universities over the past few decades. But a constrained funding environment can provoke a reduction in options.

We must, however, keep our viewfinder broad, because reductionism doesn’t match the expansionist, multi-strand trends emerging in the broader economy. It’s a disconnect.

As universities, as a society, we must be mindful of how important it is to ask questions, to follow our curiosity, to challenge boundaries and to never rest with the answers.

• Read the full speech here.

Barney Glover, Vice-Chancellor, Western Sydney University

This article was originally published on The Conversation. (Reblogged by permission). Read the original article.

3 Comments

Filed under Reblogs

Skepticism – philosophical or scientific?

by Tim Harding B.Sc., B.A.

(This essay is based on a talk presented to the Victorian Skeptics in January 2017. An edited version was published in The Skeptic magazine Vol.37, No.1, March 2017, under the title ‘I Think I Am’).

Dictionaries often draw a distinction between the modern common meaning of skepticism, and its traditional philosophical meaning, which dates from antiquity.  The usual common dictionary definition is ‘a sceptical attitude; doubt as to the truth of something’; whereas the philosophical definition is ‘the theory that some or all types of knowledge are impossible’.  These definitions are of course quite different, and reflect the fact that the meanings of philosophical terms have drifted over the millennia.  The contemporary meaning of ‘scientific skepticism’ is different again, which I shall talk about later.

I should say at the outset that whilst I have a foot in both the scientific and philosophical camps, and although I will be writing here mainly about the less familiar philosophical skepticism, I personally support scientific skepticism over philosophical skepticism, for reasons I shall later explain.


But why are these definitions of skepticism important? And why do we spell it with a ‘k’ instead of a ‘c’? As an admin of a large online skeptics group (Skeptics in Australia), I am often asked such questions, so I have done a bit of investigating.

As to the first question, one of the main definitional issues I have faced is the difference between skepticism and what I call denialism. Some skeptical newbies typically do a limited amount of googling, and what they often come up with is the common dictionary definition of skepticism, rather than the lesser-known scientific skepticism definition that we Australian skeptics use.  They tend to think that ‘scepticism’ (with a ‘c’) entails doubting or being skeptical of everything, including science, medicine, vaccination, biotechnology, moon landings, 9/11 and so on.  When we scientific skeptics express a contrary view, we are sometimes then accused of ‘not being real sceptics’.  So I think that definitions are important.

In my view, denialism is a person’s choice to deny certain particular facts.  It is an essentially irrational belief where the person substitutes his or her personal opinion for established knowledge.  Science denialism is the rejection of basic facts and concepts that are undisputed, well-supported parts of the scientific consensus on a subject, in favour of radical and controversial opinions of an unscientific nature.  Most real skeptics accept the findings of peer-reviewed science published in reputable scientific journals, at least for the time being, unless and until it is corrected by the scientific community.

Denialism can then give rise to conspiracy theories, as a way of trying to explain the discrepancy between scientific facts and personal opinions.  Here is the typical form of what I call the Scientific Conspiracy Fallacy:

Premise 1: I hold a certain belief.

Premise 2: The scientific evidence is inconsistent with my belief.

Conclusion: Therefore, the scientists are conspiring with the Big Bad Government/CIA/NASA/Big Pharma (choose whichever is convenient) to fake the evidence and undermine my belief.

It is a tall order to argue that the whole of science is genuinely mistaken. That is a debate that even the conspiracy theorists know they probably can’t win. So the most convenient explanation for the inconsistency is that scientists are engaged in a conspiracy to fake the evidence in specific cases.

Ancient Greek Skepticism

The word ‘skeptic’ originates from the early Greek skeptikos, meaning ‘inquiring, reflective’.

The Hellenistic period covers Greek and Mediterranean history between the death of Alexander the Great in 323 BCE and the Roman victory over the Greeks at the Battle of Corinth in 146 BCE.  The beginning of this period also coincides with the death of the great philosopher, logician and scientist Aristotle of Stagira (384–322 BCE).

As he had no adult heir, Alexander’s empire was divided between the families of three of his generals.  This resulted in political conflicts and civil wars, in which prominent philosophers and other intellectuals did not want to take sides, in the interests of self-preservation.  So they retreated from public life into various cloistered schools of philosophy, the main ones being the Stoics, the Epicureans, the Cynics and the Skeptics.

As I mentioned earlier, the meanings of such philosophical terms have altered over 2000 years.  These philosophical schools had different theories as to how to attain eudaimonia, which roughly translates as the highest human good, or the fulfilment of human life.  They thought that the key to eudaimonia was to live in accordance with Nature, but they had different views as to how to achieve this.

In a nutshell, the Stoics advocated the development of self-control and fortitude as a means of overcoming destructive emotions.  The Epicureans regarded absence of pain and suffering as the source of happiness (not just hedonistic pleasure).   The Cynics (which means ‘dog like’) rejected conventional desires for wealth, power, health, or fame, and lived a simple life free from possessions.  Lastly, there were the Skeptics, whom I will now discuss in more detail.

During this Hellenistic period, there were actually two philosophical varieties of skepticism – the Academic Skeptics and the Pyrrhonist Skeptics.

In 266 BCE, Arcesilaus became head of the Platonic Academy.  The Academic Skeptics did not doubt the existence of truth in itself, only our capacities for obtaining it.  They went as far as thinking that knowledge is impossible – nothing can be known at all.  A later head of the Academy, Carneades, modified this rather extreme position into thinking that ideas or notions are never true, but only probable.  He thought there are degrees of probability, hence degrees of belief, leading to degrees of justification for action.  Academic Skepticism did not really catch on, and largely died out in the first century CE, with isolated attempts at revival from time to time.


The founder of Pyrrhonist Skepticism, Pyrrho of Elis (c. 365–c. 275 BCE), was born in Elis on the west side of the Peloponnesian Peninsula (near Olympia).  Pyrrho travelled with Alexander the Great on his exploration of the East.  He encountered the Magi in Persia and even went as far as the Gymnosophists in India, who were naked ascetic gurus – not exactly a good image for modern skepticism.


Pyrrho differed from the Academic Skeptics in thinking that nothing can be known for certain.  He thought that their position ‘nothing can be known at all’ was dogmatic and self-contradictory, because it is itself a claim of certainty.  Pyrrho thought that the senses are easily fooled, and that reason follows too easily our desires.  Therefore we should withhold assent from non-evident propositions and remain in a state of perpetual inquiry about them.  This means that we are not necessarily skeptical of ‘evident propositions’, and that at least some knowledge is possible.  This position is closer to modern skepticism than Academic Skepticism.  Indeed, Pyrrhonism became a synonym for skepticism in the 17th century CE; but we are not quite there yet.

Sextus Empiricus (c. 160 – c. 210 CE) was a Greco-Roman philosopher who promoted Pyrrhonian skepticism.  It is thought that the word ‘empirical’ comes from his name; although the Greek word empeiria also means ‘experience’.  Sextus Empiricus first questioned the validity of inductive reasoning, positing that a universal rule could not be established from an incomplete set of particular instances, thus presaging David Hume’s ‘problem of induction’ about 1500 years later.

Skeptic with a ‘k’

The Romans were great inventors and engineers, but they are not renowned for science or skepticism.  On the contrary, they are better known for being superstitious; for instance, the Roman Senate sat only on ‘auspicious days’ thought to be favoured by the gods.  They had lots of pseudoscientific beliefs that we skeptics would now regard as quackery or woo.  For example, they thought that cabbage was a cure for many illnesses; and in around 78 CE, the Roman author Pliny the Elder wrote: ‘I find that a bad cold in the head clears up if the sufferer kisses a mule on the nose’.

So I cannot see any valid historical reason for us to switch from the early Greek spelling of ‘skeptic’ to the Romanised ‘sceptic’.  Yes, I know that ‘skeptic’ is the American spelling and ‘sceptic’ is the British spelling, but I don’t think that alters anything.  The most likely explanation is that the Americans adopted the spelling of the early Greeks and the British adopted that of the Romans.


Modern philosophical skepticism

Somewhat counter-intuitively, the term ‘modern philosophy’ is used to distinguish more recent philosophy from the ancient philosophy of the early Greeks and the medieval philosophy of the Christian scholastics.  Thus ‘modern philosophy’ dates from the Renaissance of the 14th to the 17th centuries, although precisely when modern philosophy started within the Renaissance period is a matter of some scholarly dispute.

The defining feature of modern philosophical skepticism is the questioning of the validity of some or all types of knowledge.  So before going any further, we need to define knowledge.

The branch of philosophy dealing with the study of knowledge is called ‘epistemology’.  The ancient philosopher Plato famously defined knowledge as ‘justified true belief’, as illustrated by the Venn diagram below.  According to this definition, it is not sufficient that a belief is true to qualify as knowledge – a belief based on faith or even just a guess could happen to be true by mere coincidence.  So we need adequate justification of the truth of the belief for it to become knowledge.  Although there are a few exceptions, known as ‘Gettier problems’, this definition of knowledge is still largely accepted by modern philosophers, and will do for our purposes here.  (Epistemology is mainly about the justification of true beliefs rather than this basic definition of knowledge).

[Venn diagram: knowledge as the intersection of truth, belief and justification]
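The ‘justified true belief’ account mentioned above can be stated schematically, as three individually necessary and jointly sufficient conditions:

```latex
S \text{ knows that } P \iff
\begin{cases}
\text{(i)} & P \text{ is true;}\\
\text{(ii)} & S \text{ believes that } P;\\
\text{(iii)} & S \text{ is justified in believing that } P.
\end{cases}
```

A lucky guess that happens to be right satisfies (i) and (ii) but fails (iii), which is why it does not count as knowledge on this account.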

There are also different types of knowledge that are relevant to this discussion.

A priori knowledge is knowledge that is known independently of experience.  For instance, we know that ‘all crows are birds’ without having to conduct an empirical survey of crows to investigate how many are birds and whether there are any crows that are not birds.  Crows are birds by definition – it is just impossible for there to be an animal that is a crow but is not a bird.

On the other hand, a posteriori knowledge is knowledge that is known by experience.  For instance, we only know that ‘all crows are black’ from empirical observations of crows.  It is not impossible that there is a crow that is not black, for example as a result of some genetic mutation.

The above distinction illustrates how not all knowledge needs to be empirical.  Indeed, one of the earliest modern philosophers and skeptics, Rene Descartes (1596-1650) was a French mathematician, scientist and philosopher.  (His name is where the mathematical word ‘Cartesian’ comes from).  These three interests of his were interrelated, in the sense that he had a mathematical and scientific approach to his philosophy.  Mathematics ‘delighted him because of its certainty and clarity’.  His fundamental aim was to attain philosophical truth by the use of reason and logical methods alone.  For him, the only kind of knowledge was that of which he could be certain.  His ideal of philosophy was to discover hitherto uncertain truths implied by more fundamental certain truths, in a similar manner to mathematical proofs.

Using this approach, Descartes engaged in a series of meditations to find a foundational truth of which he could be certain, and then to build on that foundation a body of implied knowledge of which he could also be certain.  He did this in a methodical way by first withholding assent from opinions which are not completely certain, that is, where there is at least some reason for doubt, such as those acquired from the senses.  Descartes concludes that one proposition of which he can be certain is ‘Cogito, ergo sum’ (which means ‘I think, therefore I exist’).

In contrast to Descartes, a different type of philosophical skeptic, David Hume (1711-1776), held that all human knowledge is ultimately founded solely in ‘experience’.  In what has become known as ‘Hume’s fork’, he held that statements divide into two types: statements about ideas, which are necessary and knowable a priori; and statements about the world, which are contingent and knowable a posteriori.

In modern philosophical terminology, members of the first group are known as analytic propositions and members of the latter as synthetic propositions.  Into the first class fall statements such as ‘2 + 2 = 4’, ‘all bachelors are unmarried’, and truths of mathematics and logic. Into the second class fall statements like ‘the sun rises in the morning’, and ‘the Earth has precisely one moon’.

Hume tried to prove that certainty does not exist in science. First, Hume notes that statements of the second type can never be entirely certain, due to the fallibility of our senses, the possibility of deception (for example, the modern ‘brain in a vat’ hypothesis) and other arguments made by philosophical skeptics.  It is always logically possible that any given statement about the world is false – hence the need for doubt and skepticism.

Hume formulated the ‘problem of induction’, which is the skeptical question of whether inductive reasoning leads to knowledge understood in the classic philosophical sense.  This problem focuses on the alleged lack of justification for generalising about the properties of a class of objects based on some number of observations of particular instances of that class (for example, the inference that ‘all swans we have seen are white, and therefore, all swans are white’, before the discovery of black swans in Western Australia).
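Put schematically, the inference Hume questions runs from observed instances to an unrestricted generalisation – a step that is not deductively valid:

```latex
\begin{array}{ll}
\text{Premise:} & \text{Every } F \text{ observed so far has been } G.\\
\hline
\text{Conclusion:} & \text{Therefore, all } F\text{s are } G.
\end{array}
```

With F = swan and G = white, the premise was true for European observers while the conclusion turned out to be false.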

Immanuel Kant (1724-1804) was (and still is) a major philosophical figure who tried to show the way beyond the impasse which modern philosophy had led to between rationalists such as Descartes and empiricists such as Hume.  Kant is widely held to have synthesised these two early modern philosophical traditions.  And yet he was also a skeptic, albeit of a different variety.  Kant thought that only knowledge gained from empirical science is legitimate, which is a forerunner of modern scientific skepticism.  He thought that metaphysics was illegitimate and largely speculative; and in that sense he was a philosophical skeptic.

Scientific skepticism

In 1924, the Spanish philosopher Miguel de Unamuno disputed the common dictionary definition of skepticism.  He argued that ‘skeptic does not mean him who doubts, but him who investigates or researches as opposed to him who asserts and thinks that he has found’.  Sounds familiar, doesn’t it?

Modern scientific skepticism is different from philosophical skepticism, and yet to some extent was influenced by the ideas of Pyrrho of Elis, David Hume, Immanuel Kant and Miguel de Unamuno.

Most skeptics in the English-speaking world see the 1976 formation of the Committee for the Scientific Investigation of Claims of the Paranormal (CSICOP) in the United States as the ‘birth of modern skepticism’.  (CSICOP is now called the Committee for Skeptical Inquiry – CSI).  However, CSICOP founder and philosophy professor Paul Kurtz has said that he actually modelled it after the Belgian Comité Para of 1949.  The Comité Para was partly formed as a response to a predatory industry of bogus psychics who were exploiting the grieving relatives of people who had gone missing during the Second World War.

[Photo: Paul Kurtz]

Kurtz recommended that CSICOP focus on testable paranormal and pseudoscientific claims and to leave religious aspects to others.  CSICOP popularised the usage of the terms ‘skeptic’, ‘skeptical’ and ‘skepticism’ by its magazine, Skeptical Inquirer, and directly inspired the foundation of many other skeptical organizations throughout the world, including the Australian Skeptics in 1980.

Through the public activism of groups such as CSICOP and the Australian Skeptics, the term ‘scientific skepticism’ has come to symbolise an activist movement as well as a type of applied philosophy.

There are several definitions of scientific skepticism, but the two that I think are most apt are those by the Canadian skeptic Daniel Loxton and the American skeptic Steven Novella.

Daniel Loxton’s definition is ‘the practice or project of studying paranormal and pseudoscientific claims through the lens of science and critical scholarship, and then sharing the results with the public.’

Steven Novella’s definition is ‘scientific skepticism is the application of skeptical philosophy, critical thinking skills, and knowledge of science and its methods to empirical claims, while remaining agnostic or neutral to non-empirical claims (except those that directly impact the practice of science).’  By this exception, I think he means religious beliefs that conflict with science, such as creationism or opposition to stem cell research.

In other words, scientific skeptics maintain that empirical investigation of reality leads to the truth, and that the scientific method is best suited to this purpose. Scientific skeptics attempt to evaluate claims based on verifiability and falsifiability and discourage accepting claims on faith or anecdotal evidence.  This is different to philosophical skepticism, although inspired by it.

References

Descartes, R. (1641) Meditations on First Philosophy: With Selections from the Objections and Replies, trans. and ed. John Cottingham, Cambridge: Cambridge University Press.

Hume, D. (1748) An Enquiry Concerning Human Understanding. Gutenberg Press.

Kant, I. (1787) Critique of Pure Reason, 2nd edition. Cambridge: Cambridge University Press.

Loxton, D. (2013) Why Is There a Skeptical Movement? Retrieved 12 January 2017.

Novella, S. (2013) ‘Scientific Skepticism, Rationalism, and Secularism’. Neurologica (blog), 15 February 2013. Retrieved 12 February 2017.

Russell, B. (1961) History of Western Philosophy, 2nd edition. London: George Allen & Unwin.

Unamuno, M. de (1924) Essays and Soliloquies. London: Harrap.

If you find the information on this blog useful, you might like to consider supporting us.

Make a Donation Button

2 Comments

Filed under Essays and talks

The Stoic theory of universals, as compared to Platonic and Aristotelian theories

By Tim Harding

The philosophical problem of universals has endured since ancient times, and can have metaphysical or epistemic connotations, depending upon the philosopher in question.  I intend to show in this essay that both Plato’s and the Stoics’ theories of universals were not only derived from, but were ‘in the grip’ of their epistemological and metaphysical philosophies respectively; and were thus vulnerable to methodological criticism.  I propose to first outline the three alternative theories of Plato, Aristotle and the Stoics; and then to suggest that Aristotle’s theory, whilst developed as a criticism of Plato’s theory, stands more robustly on its own merits.

According to the Oxford Companion to Philosophy, particulars are instances of universals, as a particular apple is an instance of the universal known as ‘apple’.  (An implication of a particular is that it can only be in one place at any one time, which presents a kind of paradox that will be discussed later in this essay).   Even the definition of the ‘problem of universals’ is somewhat disputed by philosophers, but the problem generally is about whether universals exist, and if so what is their nature and relationship to particulars (Honderich 1995: 646, 887).

Philosophers such as Plato and Aristotle who hold that universals exist are known as ‘realists’, although they have differences about the ontological relationships between universals and particulars, as discussed in this essay.  Those who deny the existence of universals are known as ‘nominalists’.  According to Long and Sedley (1987:181), the Stoics were a type of nominalist known as ‘conceptualists’, as I shall discuss later.

Plato’s theory of universals (although he does not actually use this term) stems from his theory of knowledge.  Indeed, it is difficult to separate Plato’s ontology from his epistemology (Copleston 1962: 142).  In his Socratic dialogue Timaeus, Plato draws a distinction between permanent knowledge gained by reason and temporary opinion gained from the senses.

That which is apprehended by intelligence and reason is always in the same state; but that which is conceived by opinion with the help of sensation and without reason, is always in a process of becoming and perishing and never really is (Plato Timaeus 28a).

According to Copleston (1962: 143-146), this argument is part of Plato’s challenge to Protagoras’ theory that knowledge is sense-perception.  Plato argues that sense-perception on its own is not knowledge.  Truth is derived from the mind’s reflection and judgement, rather than from bare sensations.  To give an example of what Plato means, we may have a bare sensation of two white surfaces, but in order to judge the similarity of the two sensations, the mind’s activity is required.

Plato argues that true knowledge must be infallible, unchanging and of what is real, rather than merely of what is perceived.  He thinks that the individual objects of sense-perception, or particulars, cannot meet the criteria for knowledge because they are always in a state of flux and indefinite in number (Copleston 1962: 149).  So what knowledge does meet Plato’s criteria?  The answer to this question leads us to the category of universals.  Copleston gives the example of the judgement ‘The Athenian Constitution is good’.  The Constitution itself is open to change, for better or worse, but what is stable in this judgement is the universal quality of goodness.  Hence, within Plato’s epistemological framework, true knowledge is knowledge of the universal rather than the particular (Copleston 1962: 150).

We now proceed from Plato’s epistemology to his ontology of universals and particulars.  In terms of his third criterion of true knowledge being what is real rather than perceived, the essence of Plato’s Forms is that each true universal concept corresponds to an objective reality (Copleston 1962: 151).  The universal is what is real, and particulars are copies or instances of the Form.  For example, particulars such as beautiful things are instances of the universal or Form of Beauty.

…nothing makes a thing beautiful but the presence and participation of beauty in whatever way or manner obtained; for as to the manner I am uncertain, but I stoutly contend that by beauty all beautiful things become beautiful (Plato Phaedo, 653).

Baltzly (2016: F5.2-6) puts the general structure of Plato’s argument this way:

1. What we understand when we understand what justice, beauty, or generally F-ness are, doesn’t ever change.

2. But the sensible F particulars that exhibit these features are always changing.

3. So there must be a non-sensible universal – the Form of F-ness – that we understand when we achieve episteme (true knowledge).

Plato’s explanation for where this knowledge of Forms comes from, if not from sense-perceptions, is our existence as unembodied souls prior to this life (Baltzly 2016: F5.2-6).  To me, this explanation sounds like a ‘retrofit’ to solve a consequential problem with Plato’s theory and is a methodological weakness of his account.

Turning now to Aristotle’s theory, whilst he shared Plato’s realism about the existence of universals, he had some fundamental differences about their ontological relationship to particulars.  In terms of Baltzly’s abovementioned description of Plato’s general argument, Plato thought that the universal, F-ness, could exist even if there were no F particulars.  In direct contrast, Aristotle held that there cannot be a universal, F-ness, unless there are some particulars that are F.  For example, Aristotle thought that the existence of the universal ‘humanity’ depends on there being actual instances of particular human beings (Baltzly 2016: F5.2-8).

As for the reality of universals, Aristotle agreed with Plato that the universal is the object of science.  For instance, the scientist is not concerned with discovering knowledge about particular pieces of gold, but with the essence or properties of gold as a universal.  It follows that if the universal is not real – if it has no objective reality – there is no scientific knowledge.  But there is scientific knowledge; so by modus tollens, if scientific knowledge is knowledge of reality, then to be consistent the universal must also be real (Copleston 1962: 301-302).  (Whilst it is outside the scope of this essay to discuss whether scientific knowledge describes reality, to deny that there is any scientific knowledge would have major implications for epistemic coherence).
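The modus tollens step can be made explicit. Writing R for ‘universals are real’ and K for ‘there is scientific knowledge’, the argument has this valid form:

```latex
\begin{array}{ll}
1. & \neg R \rightarrow \neg K \qquad \text{(if universals are not real, there is no scientific knowledge)}\\
2. & K \qquad \text{(there is scientific knowledge)}\\
\hline
\therefore & R \qquad \text{(universals are real)}
\end{array}
```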

This is not to say that universals have ‘substance’, meaning that they consist of matter and form.  Aristotle maintains that only particulars have substance, and that universals exist as properties of particulars (Russell 1961: 176).  Russell quotes Aristotle as saying:

It seems impossible that any universal term should be the name of a substance. For…the substance of each thing is that which is peculiar to it, which does not belong to anything else; but the universal is common, since that is called universal which is such as to belong to more than one thing.

In other words, Aristotle thinks that a universal cannot exist by itself, but only in particular things.  Russell attempts to illustrate Aristotle’s position using a football analogy.  The game of football (a universal) cannot exist without football players (particulars); but the football players would still exist even if they never actually played football (Russell 1961: 176).

In almost complete contrast to both Plato and Aristotle, the Stoics denied the existence of universals, regarding them as concepts or mere figments of the rational mind.  In this way, the Stoics anticipated the conceptualism of the British empirical philosophers, such as Locke (Long and Sedley 1987:181).

The Stoic position is complicated by their being on the one hand materialists, and on the other holding a belief that there are non-existent things which ‘subsist’, such as incorporeal things like time and fictional entities such as a Centaur.  Their ontological hierarchy starts with the notion of a ‘something’, which they thought of as a proper subject of thought and discourse, whether or not it exists.  ‘Somethings’ can be subdivided into material bodies or corporeals, which exist; and incorporeals and things that are neither corporeal nor incorporeal, such as fictional entities, which subsist (Long and Sedley 1987:163-164).  Long and Sedley (1987:164) provide colourful examples of the distinction between existing and subsisting by saying:

There’s such a thing as a rainbow, and such a character as Mickey Mouse, but they don’t actually exist.

A significant exclusion from the Stoic ontological hierarchy is universals.  Despite the subsistence of a fictional character like Mickey Mouse, the universal man neither exists nor subsists, which is a curious inconsistency.  Stoic universals are dubbed by the neo-Platonist philosopher Simplicius (Long and Sedley 1987:180) as ‘not somethings’:

(2) One must also take into account the usage of the Stoics about generically qualified things—how according to them cases are expressed, how in their school universals are called ‘not-somethings’ and how their ignorance of the fact that not every substance signifies a ‘this Something’ gives rise to the Not-someone sophism, which relies on the form of expression.

Long and Sedley (1987:164) surmise from this analysis that for the Stoics, to be a ‘something’ is to be a particular, whether existent or subsistent.  Stoic ontology is occupied exclusively by particulars without universals.  In this way, universals are relegated to a metaphysical limbo, as far as the Stoics are concerned.  Nevertheless, they recognise the concept of universals as being not just a linguistic convenience but as useful conceptions or ways of thinking.  For this reason, Long and Sedley (1987:181-182) classify the Stoic position on universals as ‘conceptualist’, rather than simply nominalist.  (Nominalists think of universals simply as names for things that particulars have in common).  In a separate paper, Sedley (1985: 89) makes the distinction between nominalism and conceptualism using the following example:

After all the universal man is not identical with my generic thought of man; he is what I am thinking about when I have that thought.

One of the implications of a particular is that it can only be in one place at any one time, which gives rise to what was referred to above by Simplicius as the ‘Not-someone sophism’.  Sedley (1985: 87-88) paraphrases this sophism in the following terms:

If you make the mistake of hypostatizing the universal man into a Platonic abstract individual – if, in other words, you regard him as ‘someone’ – you will be unable to resist the following evidently fallacious syllogism.  ‘If someone is in Athens, he is not in Megara.  But man is in Athens.  Therefore man is not in Megara.’  The improper step here is clearly the substitution of ‘man’ in the minor premiss for ‘someone’ in the major premiss.  But it can be remedied only by the denial that the universal man is ‘someone’.  Therefore the universal man is not-someone.
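Laid out formally, the major premise quantifies over particular ‘someones’, so the fallacy lies in substituting the universal man for the bound variable:

```latex
\begin{array}{ll}
1. & \forall x\,\bigl(\text{In}(x,\text{Athens}) \rightarrow \neg\,\text{In}(x,\text{Megara})\bigr) \qquad \text{(if someone is in Athens, he is not in Megara)}\\
2. & \text{In}(\textit{man},\text{Athens}) \qquad \text{(man is in Athens)}\\
\hline
\therefore & \neg\,\text{In}(\textit{man},\text{Megara}) \qquad \text{(man is not in Megara)}
\end{array}
```

Step 2 treats the universal man as an admissible value of x; the Stoic remedy is to deny that the universal man is ‘someone’, so the substitution is blocked.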

Baltzly (2016: F5.2-15) makes the point that the same argument would serve to show that time is a not-something, yet the Stoics inconsistently accept that time subsists as an incorporeal something.

I have attempted to show above that Plato and the Stoics are locked into their theories about universals as a result of their prior philosophical positions.  Although arguing otherwise could have made them vulnerable to criticisms of inconsistency, their theories have methodological weaknesses that place them on shakier ground than Aristotelian realism.  However, I am also of the view that, apart from these methodological issues, Aristotelian Realism is substantively a better theory than Platonic Realism or Stoic Conceptualism or Nominalism.  In coming to this view, I have relied mainly on the work of the late Australian philosophy professor David Armstrong.

Armstrong argues that there are universals which exist independently of the classifying mind.  No universal is found except as either a property of a particular or as a relation between particulars.  He thus rejects both Platonic Realism and all varieties of Nominalism (Armstrong 1978: xiii).

Armstrong describes Aristotelian Realism as allowing that particulars have properties and that two different particulars may have the very same property.  However, Aristotelian Realism rejects any transcendent account of properties, that is, an account claiming that universals exist separated from particulars (Armstrong 1975: 146).  Armstrong argues that we cannot give an account of universality in terms of particularity, as the various types of Nominalism attempt to do.  Nor can we give an account of particulars in terms of universals, as the Platonic Realists do.  He believes that ‘while universality and particularity cannot be reduced to each other, they are interdependent, so that properties are always properties of a particular, and whatever is a particular is a particular having certain properties’ (Armstrong 1975: 146).

According to Armstrong, what is a genuine property of particulars is to be decided by scientific investigation, rather than simply a linguistic or conceptual classification (Armstrong 1975: 149).  Baltzly (2016: F5.2-18) paraphrases Armstrong’s argument this way:

  1. There are causes and effects in nature.

  2. Whether one event c causes another event e is independent of the classifications we make.

  3. Whether c causes e or not depends on the properties had by the things that figure in the events.

  4. So properties are independent of the classifications that we make and if this is so, then predicate nominalism and conceptualism are false.

Baltzly (2016: F5.2-18, 19) provides an illustration of this argument based on one given by Armstrong (1978: 42-43).  The effect of throwing a brick against a window will result from the physical properties of the brick and the window, in terms of their relative weight and strength, independently of how we name or classify those properties.  So in this way, I would argue that the properties of particulars, that is universals, are ‘real’ rather than merely ‘figments of the mind’, as the Stoics would say.

As for Platonic Realism, Armstrong argues that if we reject it then we must reject the view that there are any uninstantiated properties (Armstrong 1975: 149); that is, the view that properties are transcendent beings that exist apart from their instances, such as in universals rather than particulars.  He provides an illustration of a hypothetical property of travelling faster than the speed of light.  It is a scientific fact that no such property exists, regardless of our concepts about it (Armstrong 1975: 149).  For this reason, Armstrong upholds ‘scientific realism’ over Platonic Realism, which he thinks is consistent with Aristotelian Realism – a position that I support.

In conclusion, I have attempted to show in this essay that the Aristotelian theory of universals is superior to the equivalent theories of both Plato and the Stoics.  I have argued this in terms of the relative methodologies as well as the substantive arguments.  I take the most compelling argument to be that of epistemic coherence regarding scientific knowledge: that the universal is the object of science.  It follows that if the universal is not real, if it has no objective reality, then there is no scientific knowledge.  There is scientific knowledge, and if scientific knowledge is knowledge of reality, then to be consistent the universal must also be real.

Bibliography

Armstrong, D.M. ‘Towards a Theory of Properties: Work in Progress on the Problem of Universals’ Philosophy, (1975), Vol.50 (192), pp.145-155.

Armstrong, D.M. ‘Nominalism and Realism’ Universals and Scientific Realism Volume 1, (1978) Cambridge: Cambridge University Press.

Baltzly, D. ATS3885: Stoic and Epicurean Philosophy Unit Reader (2016). Clayton: Faculty of Arts, Monash University.

Copleston, F. A History of Philosophy Volume 1: Greece and Rome (1962) New York: Doubleday.

Honderich, T. Oxford Companion to Philosophy (1995) Oxford: Oxford University Press.

Long A. A. and Sedley, D. N. The Hellenistic Philosophers, Volume 1 (1987). Cambridge: Cambridge University Press.

Plato, Phaedo in The Essential Plato trans. Benjamin Jowett, Book-of-the-Month Club (1999).

Plato, Timaeus in The Internet Classics Archive. http://classics.mit.edu//Plato/timaeus.html
Viewed 2 October 2016.

Russell, B. History of Western Philosophy. 2nd edition (1961) London: George Allen & Unwin.

Sedley, D. ‘The Stoic Theory of Universals’ The Southern Journal of Philosophy (1985) Vol. XXIII. Supplement.


Filed under Essays and talks

Socrates on ignorance

‘And is not this the most reprehensible form of ignorance, that of thinking one knows what one does not know? Perhaps, gentlemen, in this matter also I differ from other men in this way, and if I were to say that I am wiser in anything, it would be in this, that not knowing very much about the other world, I do not think I know.’


Filed under Quotations

Indulge me this: how not to read Daniel Dennett’s comments on philosophy and self-indulgence

The Conversation

Matthew Sharpe, Deakin University

Callicles, Ray Hadley, and—Daniel Dennett?

“A great deal of philosophy doesn’t really deserve much of a place in the world,” leading philosopher Daniel Dennett has recently suggested in an interview at this year’s Association for the Scientific Study of Consciousness conference in Buenos Aires.

“Philosophy in some quarters has become self-indulgent, clever play in a vacuum that’s not dealing with problems of any intrinsic interest.”

People in many other quarters of the world roll their eyes, or blink.

For this kind of accusation against philosophy is hardly new.
The character Callicles in one of Plato’s dialogues suggests that philosophy is, more or less, child’s play: fit to entertain youths, but hardly a decent pursuit for serious adults.

Radio 2GB stalwart Ray Hadley has more recently taken up something like Callicles’ strains, in what has become a periodic refrain in the tabloids lamenting continuing government funding for humanities research, including in philosophy.

What is new about Dennett’s claims, which is making people within the discipline take notice, is that he is neither a Callicles, nor a Ray Hadley. Daniel Dennett is a decorated Professor of Philosophy of some decades’ experience, and near-universal respect amongst professional scholars.

Dennett also hails from the Anglo-American or “analytic” stream of philosophy. This stream has, until recently, been the side of the “analytic-continental divide” much less open to weighing philosophy’s history, place and role in society, let alone to delivering such strident self-criticisms.

Nevertheless, the Callicleses of this world should draw breath and read again before too quickly taking Dennett’s criticism as a wholesale dismissal of philosophy, or of the reflective humanities.

We can even take Dennett’s provocative remarks as the spur he seems to have intended: a spur to undertake some philosophical reflection on philosophy’s relations to the wider world, as against its insulation from it.

He who doesn’t philosophise…

The first thing to note is that Dennett is not saying that all forms of philosophy are “idle—just games” or a “luxury”. Dennett praises forms of philosophy, like his own contributions to debates on religion and reason (and this Cogito column, gentle reader) that “engage with the world.”

He notes that it takes years for younger generations to “develop the combination of scholarly mastery and technical acumen to work on big, important issues with a long history of philosophical attention.”

But such issues, as he sees things, clearly do exist. And developing the wherewithal to deal philosophically with them is something Dennett evidently values.

When Dennett takes aim at “self-indulgent, clever play in a vacuum”, he has more particular quarry in his sights.

It is just as well. The Greeks had a saying that “he who does not philosophise, philosophises”, and philosophy—as the cradle of all the academic disciplines—has a long history of engaging with and changing the Western world, since about 600 BCE.

Socrates—responding to that other charge the Hadleys and Callicleses of the world will always make (that, far from a harmless indulgence, philosophy harmfully corrupts the youth)—insisted that its role was to assist people in taking care of their souls, and helping them live better lives.

Socrates, who brought philosophy into social affairs

Surely this sounds quaint for our wiser times. The connection between rationally questioning the norms and ideas we entertain and cultivating better lives can also seem opaque, even to Socrates’ bigger fans.

But Socrates’ fundamental idea is simple. Nearly all of the characteristics we admire in people and institutions require forms of knowledge.

The man who would show his courage, but doesn’t know for what cause, is not courageous but foolhardy. He’s unlikely to last long.

The government that would be just, without knowing who and what people and initiatives are worth supporting or censoring, will be unjust.

The person who would live happily but does not know what people truly need to be happy will end up disaffected; and so it goes.

Philosophy, on this original model, is the rational, questioning pursuit of the kinds of knowledge necessary to recognise and promote different forms of human flourishing and excellence. Far from indulgent, it has this much in common with the practical concerns of governors and managers, CEOs and parents: “leaders” of all kinds, as we might say today.

Philosophy, again, involves the attempt to think rationally about the goals of human endeavours, on the basis of the most clear and comprehensive understandings of what kinds of creatures we are, and how we fit into the larger ecology and economies of the world. Far from being indulgent, this kind of thinking seems more necessary than ever today.

For individuals and governments who do not understand the significance of their actions for this wider “whole” (“the truth is the whole”, a famous philosopher said) are bound to pursue short-sighted policies, which produce longer-term problems and “externalities”.

Philosophy, again, has long concerned itself with those difficult, ultimate questions that all people have been posed, whether we ask them or not: is there a God? Is there a soul, life after death, or transcendent meaning to life? How should we live? What is worth pursuing?

To call every person who ever asked these questions, at some point in their lives, indulgent would be to paint nearly everyone who has ever lived with the same tarring brush.

Philosophy, finally, has since Aristotle been understood by some of its most eminent votaries as the “knowledge of knowledges”.

Aristotle teaching Alexander the Great

Philosophy did not simply give birth to the other disciplines, as you might say. It was “interdisciplinary” from the start. Or at least, it has always been concerned to think through the relations between the different forms of intellectual inquiry and their place in the world. The concern is exactly to prevent particular “cottage industries” (Dennett’s term) proliferating into a cacophony of competing knowledges, without any symphonic wisdom.

Far from being indulgent, universities and governments today still face this form of philosophical issue, as they deliberate about how to manage the academies without which our societies’ historical memory and ability to reflect critically and democratically upon themselves will be sadly diminished:

For as water, whether it be the dew of heaven or the springs of the earth, doth scatter and leese itself in the ground, except it be collected into some receptacle where it may by union comfort and sustain itself […]; so this excellent liquor of knowledge, whether it descend from divine inspiration, or spring from human sense, would soon perish and vanish to oblivion, if it were not preserved in books, traditions, conferences, and places appointed […]

He who does philosophise …

Now, I don’t know whether Daniel Dennett would support everything I’ve tried to say in his defence here. Recalling the different forms of apology for philosophy (another ancient genre), I hope, can help to halt the kind of misreading of his comments as a wholesale “anti-philosophical” tirade that will inevitably abound.

What is clear is that Dennett is not a critic of philosophy per se, let alone of philosophy in the several (amongst many other) larger senses I’ve picked out here.

What Dennett is critical of is the way academic philosophy is being undertaken, in situations in which a good many of its traditional functions—including reflecting critically about its “utility” and relation to other pursuits and disciplines—are being decided externally to the discipline itself.

For if the different justifications of philosophy we’ve recalled are clear enough, the ways in which philosophy has been funded and institutionalised throughout history have been ceaselessly up for negotiation.

Dennett, very much in the Platonic vein, is especially worried about the next generations of philosophers. He sees the ever-more pressing imperatives they face in order to advance within the institutional settings in which academic studies are today undertaken.

As everyone in the tertiary sector knows, so in this one discipline, “young philosophers are under great pressure to publish”. Nearly all of the material preconditions for ever being able to teach philosophy as a career depend upon meeting this pressure.

Little matter if the budding philosopher has only had the time to develop a limited, if highly cultivated, area of specialisation. No matter if that specialisation’s relations to other parts of philosophy, knowledge and society remain unquestioned by him (or, as is less likely, her). “[S]o they find toy topics that they can knock off a clever comment/rebuttal/revival of.”

“These then build off each other and invade the journals, and philosophical discourse,” Olivia Goldhill glosses Dennett, in the article that sparked the present discussions.

Now, this is a very different object of criticism than philosophy per se. It is a form of criticism which it can be imagined has relevance beyond philosophy.

Plato in the first academy.

To criticise a certain form of some activity is not to undermine that activity, after all. It may be a call for needed reforms. Cicero defended rhetoric by saying it got its bad name from a few bad men who misused it. Francis Bacon at the dawn of the modern period echoed this kind of defence.

The prejudices of political men against the life of scholarship per se, he argued, applied only to “deficient” forms of university learning, not liberal education itself, which must be renewed.

But let me end with Plato, since I think Dennett must have had him in the back of his mind as he made his comments, and especially the sixth book of the Republic.

For this founding text of our discipline is all about Plato’s concern with how to recognise and educate good philosophers. The problem is that nearly everything speaks against the young attaining to something like that kind of “scholarly mastery and technical acumen” Dennett recognises amongst the larger goals of a humanistic education.

There are sophists, who promote name over wisdom. There is the appeal of popularity, which lures many of the best students away from their studies into political pursuits. Yet again, there is money-making, which lures many more away from scholarly pursuits into more lucrative trades.

And, saddest of all for Plato as seemingly for Dennett too, some amongst the young who have been taught clever forms of dialectical argumentation too early fall prey to cynicism or “misologia”: a scorn for the whole business of true philosophy like that of Callicles, who had a sophistic training himself.

The ConversationMatthew Sharpe, Associate Professor in Philosophy, Deakin University

This article was originally published on The Conversation. (Reblogged by permission). Read the original article.


Filed under Reblogs

Isaac Asimov on ignorance

 


Filed under Quotations