Why No One Knows Anything
By David Metcalfe
September 14, 2018
Today I would just like to briefly discuss a basic concept of limitations in human knowledge. It is something that has frustrated me for quite some time, and today, after reading some articles from post-modern thinkers, I have become all the more frustrated with the concept. I suppose this is my version of venting, but also, I hope my venting will get your brain thinking a bit, and inspire you to continue on that path.
What I will be arguing today is very simple: you don’t know anything, and neither do I.
Let’s start by addressing something that you definitely think you know: 1+1=2.
It might sound simple, but it’s not. First of all, we don’t even know what 1 is. Do we mean an abstract, theoretical number or a physical object? If we mean an abstract, theoretical number, how do we even know that this number exists? If we mean a physical object, what does being “1” entail?
The classic example is 1 apple and 1 apple being placed together to make 2 apples. But what is “1” apple? Since size is a continuous quantity, no two apples are ever exactly the same size. So, we can’t base it on their size. What we might base it on is its identity as an apple. That’s basically what Bertrand Russell and Alfred North Whitehead did in their massive three-volume work of mathematical logic, “Principia Mathematica”.
But it still doesn’t solve the problem of applying abstract identities to concrete objects. Like, why do we call it an “apple” anyway? What if we had instead called it an “orange”? Obviously, the name is arbitrary, based in nothing. But it does need to have an agreed upon identity. Like, we couldn’t call some apples “oranges” while still referring to most apples as “apples”, because our language system requires shared identities in order to make any sense at all. If we dropped identity-language adherence, we would essentially be reduced to babble.
Ok, so we agree that if a certain fruit is a descendant of the thing we have decided to call an “apple”, then it is, in fact, an apple. But what if one of the apples had a bite taken out of it? Would it still be 1 apple? What if it had 50 bites taken out of it, and only a tiny piece of the core was left? What makes 1 apple, 1 apple?
And if we can’t even figure out how 1 object can exist, good luck trying to figure out how 2 exist.
1+1=2 can get really complicated really quickly. In “Principia Mathematica”, Russell and Whitehead needed hundreds of pages of groundwork before they could prove that 1+1=2, and mathematicians still study the foundations of arithmetic to this day. By the 23rd century, they may get to working on 2+2=4 🙂
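For a sense of what a formal proof of this looks like today, here is a sketch in the Lean theorem prover (my choice of tool, not something from “Principia Mathematica”): once the natural numbers, addition, and equality have been defined, the statement itself checks almost trivially.

```lean
-- In Lean, the natural numbers are built from zero and a successor function,
-- and addition is defined by recursion on those constructors.
-- Given those definitions, 1 + 1 reduces to 2 by computation,
-- so the proof is just "rfl" (reflexivity of equality).
example : 1 + 1 = 2 := rfl

-- The hard philosophical work is hidden in the definitions:
-- what "1", "+", and "=" mean had to be pinned down first.
```

The one-line proof is a bit of a cheat, in other words: all the difficulty that Russell and Whitehead faced lives in the definitions the proof relies on.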
The point I’m trying to make here is this: 1st graders know that 1+1=2, and genius mathematicians don’t.
The Tiny Knowledge Range
So, how does a 1st grader come into the knowledge that 1+1=2? Well, it’s what they are taught. But it’s also something they would likely come to through experience. They would see a tree beside another tree in a field, and they would recognize those as two distinct things, both defined as “trees”.
But if a person were, in theory, to never come across multiple objects their entire life, and were never taught that 1+1=2, would that still be true to them?
There are two kinds of knowledge: a posteriori and a priori. We’ve already discussed a posteriori; it is the knowledge gained from experience in the world, like being taught something, or seeing two trees. But a priori is knowledge that can be gained through reason alone. For example, if A is equal to B, and B is not equal to C, then we know that A is not equal to C.
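That little inference can itself be written down and machine-checked. Here is a sketch in Lean (again my choice of formalism, not the author’s), showing that the conclusion follows by pure reason, just by substituting equals for equals:

```lean
-- From A = B and B ≠ C we can conclude A ≠ C:
-- suppose A = C; since B = A (symmetry) and A = C, we get B = C,
-- which contradicts B ≠ C.
example (A B C : Nat) (h1 : A = B) (h2 : B ≠ C) : A ≠ C :=
  fun hAC => h2 (h1.symm.trans hAC)
```

Notice that the proof never looks at what A, B, and C actually are; that is what makes it a priori.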
While it’s great that certain pieces of knowledge can be acquired through reason alone, reason needs some kind of content to work with. So yes, we can deduce from things we know to be true, but our a priori knowledge is still heavily dependent on our a posteriori knowledge. Or, essentially, we need to know at least some things from experience in order to know anything from independent reason. It’s as Immanuel Kant said in his famous work “Critique of Pure Reason”: “All our knowledge begins with the senses, proceeds then to the understanding, and ends with reason. There is nothing higher than reason.”
But here’s the problem: we know very little about anything, and the things we think we know, we don’t actually know.
Let’s say you went to the world’s best historian and asked them a few questions. I guarantee that if you asked them the birth years of 10 very prominent historical figures, they would get several wrong. Even a historian who spends their entire life studying one year in history will still not know the vast majority of things that happened in that year. The same goes for biologists, sociologists, or any other academic discipline.
Beyond that, we don’t know that the information they’re getting is completely true, and in fact, there’s no way it’s completely true. Even if they have the best research methods possible, they still have to interpret the research into a conclusion, and there’s no way to remove all bias, and no way to perfectly process information.
And these are the best conditions possible. Very few of us devote ourselves to academic pursuits at all. The majority of people don’t have the faintest clue about much of anything. But, of course, less educated people, ignorant of the vastness of possible knowledge and of their own bias, often believe that they have everything figured out. It’s as Russell is often quoted as saying: “The whole problem with the world is that fools and fanatics are always so certain of themselves, but wiser people so full of doubts.”
It’s true, in a great many senses. Of course, most people interpret the quote as being about social and political discourse, which is certainly part of it. Anyone who thinks they have solved political philosophy or economic policy is clearly not aware of their own ignorance, and there seem to be a lot of simpletons who think they have accomplished that task. But more so, think back to the math example. When we study, we become aware of how much possible knowledge there is, and realize that we have essentially none in comparison. It’s the old adage, “the more you learn, the less you know.”
Is Learning Even Worth It?
The question arises, I think, of whether one should bother to invest in the pursuit of knowledge at all. After all, one can study their whole life and gain only a fraction more knowledge than a common idiot. In fact, they might find they know less than the idiot, relatively speaking.
In Bertrand Russell’s fantastic work, “The Will To Doubt”, he says this of knowledge:
“None of our beliefs are quite true; all have at least a penumbra of vagueness and error. The methods of increasing the degree of truth in our beliefs are well known; they consist in hearing all sides, trying to ascertain all the relevant facts, controlling our own bias by discussion with people who have the opposite bias, and cultivating a readiness to discard any hypothesis which has proved inadequate. These methods are practiced in science, and have built up the body of scientific knowledge.
Every man of science whose outlook is truly scientific is ready to admit that what passes for scientific knowledge at the moment is sure to require correction with the progress of discovery; nevertheless, it is near enough to the truth to serve for most practical purposes, though not for all. In science, where alone something approximating to genuine knowledge is to be found, men’s attitude is tentative and full of doubt.”
Basically, all scientific theories are wrong in some respect, and they are constantly being revised. No modern scientist accepts Newton’s theory of gravity, or Darwin’s theory of natural selection, exactly as it was originally conceived. However, these theories have approximated truth closely enough that we can see tangible results, and the results get better as we get closer to the actual truth. Or, in other words, we know that science works because we have cars, cell phones, spacecraft, etc. that really do work.
But, in other matters, we may not be able to see tangible results so easily. Russell also said that “Science is what you know, philosophy is what you don’t know.”
And it’s true. There are a great many things in philosophy which are unknown, and possibly can never be known. But just like science, we have figured out certain things that seem to work. Philosophy works best when grounded in a more tangible discipline, such as science, history, or social science. We can treat history as a sort of “scientific test” of whether certain philosophies have worked. Looking back on history, for example, it’s not hard to see why we are not interested in having a tyrant control our every thought and action.
The problem that arises, though, is that we are coming at every philosophical idea with some kind of bias. Like, when we think it’s better to not be ruled by a tyrant, we are speaking based on our own preconceived ideas of what good government is. But who’s really to say what’s better? Michel Foucault brings up this question in his book, “Discipline and Punish”. It’s a fair question: why do we, for example, assume upholding individual rights to be superior to maintaining order by any means necessary?
Many people love religion for this reason. It allows one to cling to the idea that “God said so”. It’s like when a parent can’t give a child a good reason for something, so they say, “because I said so”. It’s an appeal to authority that cuts off real philosophical inquiry and replaces it with human assumptions dressed up as something higher. However, if one does serious investigation into any religion, they find that theology is just as complex and unknowable as secular philosophy. Eventually, studying anything enough will make you realize you know nothing.
One response to this fact, that we know nothing, has taken the form of what is called “post-modernism”. It rejects the idea that any form of knowledge is superior to any other, because all knowledge is socially situated, and therefore biased. And since we have access to so little of all possible knowledge, we cannot form objective views on anything, because they will undoubtedly be wrong.
It’s not necessarily a bad way to think, because it takes a lot into account, and it can make us less confident in things we shouldn’t be confident in. But I think the trouble lies in that very fact: humans need something to believe in. America was founded on the strong belief that all humans are endowed by their Creator with natural rights. Then you have thinkers like Jeremy Bentham who come along and offer very strong critiques of it. And that’s good, because any truth should be able to stand the test of argument. But if we don’t have any kind of unity in belief, it gets really difficult for us to cooperate, and even just to live our own lives with a sense of purpose.
The reason I don’t think religion will ever die is that it offers people a sense of objective meaning and purpose. Philosophy is great, but ultimately, it leaves us not knowing anything. That’s not to say that all philosophers should give up and join some religion, of course, but it does mean that for philosophy to be effective in the world, it needs some kind of objective standard that places certain things above others. The Enlightenment thinkers thought that standard was reason: things that were unreasonable were worse than things that were reasonable, basically. But, of course, we needed to leave room for emotion, which is where the Romantic movement came from.
I would think, and I suppose hope, that we can move towards accepting our limitations in knowledge (which are admittedly significant) without throwing out everything we do know. We’ve seen the devastating consequences of treating certain people as superior because of their skin color. We’ve seen theories in biology produce vaccines and treatments for disease. We’ve seen our physics theories take us to the moon. We’ve seen our psychological theories save people from depression. We’ve seen a political philosophy based on human rights lead to the world’s most prosperous and successful nation of all time.
We’ve seen it, and seeing is believing.
So yes, no one knows anything, but I think we can all believe in something.