
Instrumental Nihilism — Foundations


Part I. Philosophy

1.1. A debate without a winner

In 2011, Sam Harris and William Lane Craig held a public debate at Notre Dame on whether morality depends on God [1]. Two hours, a packed auditorium, YouTube clips that have since been watched millions of times. Harris is a neuroscientist and a New Atheist, with several bestsellers to his name. Craig is an analytic philosopher who made his career on the cosmological argument [2], one of the oldest proofs ever offered for God's existence. Both were well-prepared. Both knew their material cold. Both were certain.

When the debate ended, nothing had changed. The audience split the same way it walked in: atheists cheered Harris, believers cheered Craig, and everyone left with their prior views reinforced. At first glance this looks like the format failing: two smart people cannot persuade each other, and the audience will not budge.

But Harris and Craig were not actually disagreeing. They were talking past each other about different things. Harris argues from one set of premises; Craig argues from another. For Craig, the felt experience of "God's presence" is a perfectly good place to start reasoning from. For Harris, that same experience is a brain state with a neurochemical explanation, which tells us nothing about anything outside the skull. Their disagreement is not over arguments. It is over what counts as an acceptable place to start. And no argument built on top of that split can resolve it, because each argument already sits on one side or the other.

You might think this is a quirk of theology. God is a special subject, people are emotionally invested, reason gets drowned out. But the pattern repeats wherever a dispute runs deep enough. The utilitarian says the right act is the one that produces the best overall outcome. The deontologist says some acts are wrong regardless of outcome. Is it acceptable to torture one person to save a hundred? The utilitarian will often say yes, the deontologist will say no, and both will offer reasons you cannot easily refute. But those reasons grow from different ideas of what makes an act right to begin with. The argument about torture turns into an argument about foundations, and at that level neither side can move the other, because moving would mean swapping foundations first. You see the same thing between determinists and compatibilists, Keynesians and Austrians, liberals and conservatives. People argue, trade reasons, and eventually reach the layer where reasons run out and assumptions take over. The conversation circles. Almost no one changes their mind.

Let me try to go down to that layer and look at it. Not to find a winner, but to see why there is never a winner.

1.2. Looking for the bottom

The history of philosophy is a long record of attempts to find a foundation that cannot be doubted. Something to build on. A point that holds itself up without axioms pulled from nowhere.

Descartes tried the method of systematic doubt, throwing out anything he could question, and ended with cogito ergo sum [3]. I think, therefore I am. It sounds like bedrock. But three assumptions are buried in the sentence, and Descartes accepted all three without looking. The "I" assumes that some stable subject stands behind the thinking, and that is not obvious. "Think" assumes that Descartes correctly identified and named the process happening inside him, that it really was "thinking" rather than something we have no word for. And what, exactly, is a thought? "Therefore" assumes that logical inference works, that one statement can produce another. Descartes did not hit bottom. He stopped digging.

Try something simpler: the world exists. How do we know? Through the senses? Then what we know is not the world itself but what the nervous system builds out of electrical signals — a model, not the original. Through reason? Then we first have to show that reason is reliable, which is another problem needing its own answer. Through intuition, or revelation, or some Platonic space of ideas? Each of these routes brings its own equipment, and the equipment needs the same kind of justification as the thing it is supposed to justify. The world exists is not a starting point. It is a conclusion, and every path to it already rests on something untested.

Simpler still: something is happening. But "happening" implies change. Change implies time, a before and an after. And a before and an after imply an observer to register the difference. The observer is the thing we were trying to ground in the first place. We are in a circle.

Push it to the limit: being. But being without a subject is just a word. It does not mean anything on its own.

1.3. The trilemma

Agrippa, a Greek skeptic, laid out this problem as a trilemma [4]. Any attempt to justify anything ends in one of three ways.

The first is infinite regress. Every foundation needs another foundation under it. That one needs another. The chain never terminates. There is no final link holding itself up.

The second is circularity. A grounds B, and B grounds A. This is not a contradiction, strictly, but it is hollow: you have just declared two statements to be each other's foundations and called that a solution.

The third is dogma. At some point you say: here is what I accept without justification, this is where I stop, this is where I build from. That is honest, at least, but you have to admit the whole structure rests on something you picked. You could have picked something else.

Two thousand years after Agrippa, philosophy has proposed several workarounds, none of which fully resolves the problem.

Rationalism claims that reason has direct access to truth, through logic or some kind of intellectual intuition. The trouble is that this is a claim about reason, made by reason. The instrument vouches for itself. Descartes walked straight into this: to justify the reliability of thinking, he had to posit God as a guarantor, and to establish God's existence, he had to think.

Empiricism says we should build on observation. Locke, Hume, the logical positivists. Observation looks solid until you ask why past observations should tell us anything about the future. Hume showed that induction — the basic move of any science — has no logical justification [5]. That a stone has fallen every time I let go of it does not entail that it will fall next time. I am confident it will. But that is confidence, not proof. Expectation, not inference.

Coherentism gives up on foundations altogether. Knowledge is not a building on rock, but a web where each node is held in place by the others. Quine called this the "web of belief" [6] — a network that rearranges itself from inside, with no external anchor. This is closer to how thinking actually works than anything that came before. But the web has a familiar problem: it can be perfectly coherent and have nothing to do with reality. Middle-earth is internally coherent. Ptolemaic astronomy was coherent for fifteen hundred years, epicycles and all, and it gave decent predictions. Coherence is a good property. It is not enough.

Kant went a different direction [7]. His idea, roughly, is that we wear a pair of glasses we cannot take off. Everything we see, we see through them — space, time, causation. These are properties of the glasses, not of the world. The world itself, without the glasses, is unreachable. We can only know how it looks through our lenses. But the lenses themselves we can study. We can establish that we inevitably see things in space, in time, as causal chains, and that knowledge is reliable, because it is knowledge about us, not about the world.

This is persuasive until you ask what we are studying the glasses with. The same reason that looks through them. We are trying to inspect the instrument using the instrument. How do we know we have correctly identified which lenses are fitted in the frame? Through reflection — but reflection also passes through the same lenses.

One could go on: phenomenology, structuralism, deconstruction. Each movement shifts the point where the assumptions begin. None of them gets rid of it. Husserl wanted philosophy to be a rigorous science without presuppositions, and his "phenomenological reduction" [8] is, at bottom, another way of saying: here is where I stop, and here is where I start building.

1.4. The limits of the instrument

You might hope the problem will eventually dissolve. That we simply have not found the answer yet, but it is out there somewhere and philosophy is getting closer. I don't think so. There are specific reasons to think our cognitive equipment is bounded.

The first reason is linguistic. Wittgenstein wrote in the Tractatus: "Whereof one cannot speak, thereof one must be silent" [9]. What does that mean? Any statement is a combination of elements following rules. That is how language works. If what you want to express does not fit inside the available elements and rules, you will not produce a statement. You will produce a string of words that looks meaningful but carries nothing. The attempt to state an absolute foundation might be exactly this kind of operation: it looks like it should yield something, and it does not. Like dividing by zero. You can write it down. You cannot compute it.

The second reason is biological. Colin McGinn calls it "cognitive closure" [10]. The brain is a finite system with a particular architecture. A rat cannot solve a differential equation — not because it is stupid, but because it does not have the necessary cognitive machinery, and it never could. No amount of rat-lifetimes of training would ever produce the result. There is no reason to assume the human brain is free of comparable blind spots. Some problems may be unsolvable for us not because we lack time or data, but because our architecture simply cannot represent them.

The third reason is formal, with a caveat. Gödel proved that in any sufficiently powerful consistent formal system, there are true statements that cannot be proved inside that system [11]. This is a mathematical result about formal systems, not about "reality" or "consciousness." Applying it directly is a stretch. But if thinking reduces to something like a formal system — and for a materialist that is at minimum a working assumption — then the same limitations apply to thinking. And if thinking does not reduce to a formal system, then we cannot describe it formally at all, which is also a limitation, just of a different shape.

An absolute foundation may exist. There may be some reference point on which everything rests. I cannot rule it out. But our means — language, logic, a brain with a given architecture — are probably not equipped to detect it or formulate it. And we hit the same wall trying to reach past the boundary: to say anything about what lies beyond the instrument, we have to use the instrument. This is not a ban on thinking about it. It is an observation that we cannot think about it productively.

1.5. What to do with this

All of the above reduces to one point: we have no absolute starting point. Every attempt to find one runs into assumptions that themselves rest on nothing. Agrippa saw it, Hume saw it, Wittgenstein saw it. The groundlessness of foundations is where philosophy lands.

Life does not pause while we sort out epistemology. As I was writing this I drank coffee, picked up my phone, thought about work. My brain kept generating thoughts, estimating risks, making decisions, and it did not wait for me to settle my epistemology. It was doing all this before I ever heard the word, and it will keep doing it after.

Everyone is in the same situation. The rationalist dodges a car not because he has rationally established that the car exists, but because his body reacts faster than his mind can form a sentence. The skeptic eats when he is hungry, even though strictly speaking he cannot prove food exists. We are all already inside a particular mode of engaging with the world: building models, testing them against results, updating them.

None of this makes philosophy useless. It does one important thing: it clarifies what we are actually doing when we think. It has shown that a foundation is unreachable, that language is bounded, that perception is unreliable. Those are genuine results. But they describe a situation. They do not tell anyone what to do. I do not need to solve the problem of induction in order to get out of bed and go to work. Nobody does.

The question that remains is this. Can we describe how we are already built, without adding anything on top? Without inserting something that is not there — God, objective meaning, absolute morality — and without denying what is?

1.6. Instrumental Nihilism

I have tried to do this, and I call the result Instrumental Nihilism.

The assumptions go like this. We are biological systems. Everything we experience — thoughts, emotions, the feeling of meaning, pain, joy — comes from processes in the brain. We have no direct access to reality, only to the models the nervous system builds. We judge those models by their results, because we have no other way to judge them. And finally: meaning, value, and morality have no external source. God did not hand them to us. Nothing in the cosmos did. They are produced inside a particular kind of system, which is to say, inside us. That does not make them less real. A headache is not a property of the cosmos either, and it is still hard to ignore.

An obvious objection comes up here. I just spent five sections showing that any position rests on unjustified assumptions. And then I put forward my own. Why are mine any better?

Honestly, they are not. They are exactly the kind of dogma Agrippa described: the place where I stop digging and start building. I cannot prove that experience is produced by the brain rather than the soul. I cannot prove there is no external source of meaning. The first part of this essay was precisely about that: you cannot prove a position is correct. I am not going to pretend I have managed it.

But here is what I can say. These assumptions describe how people already behave. All of them. Regardless of what philosophy they profess.

You see it all the time. A believer who thinks God made the world and filled it with meaning still goes to the doctor when he breaks his leg, not to church. He takes medications at dosages worked out in clinical trials, not ones determined by prayer. A skeptic who thinks reality does not exist still eats when hungry and sleeps when tired. His body does not wait for a philosophical justification. A committed determinist still plans his life, weighs options, picks the best one — as though he had the very choice his doctrine says he does not have.

You could object that philosophical positions sometimes do affect behavior. A deeply religious person may refuse a blood transfusion. A convinced fatalist may stop seeking treatment. That is true, and those are precisely the cases where a position comes into conflict with the baseline behavior and causes a problem. Instrumental Nihilism proposes not to install such conflicts deliberately.

The name needs explaining, because both halves of it are misleading on their own.

"Nihilism" usually means nothing matters. Here it means something more specific. Meaning exists, but it has no source outside the human being. There is no objective meaning out there to find. There is only the meaning the brain produces on its own. And that is something we can work with.

"Instrumental" means that everything, including this framework, is judged by one question: does it work, or does it not? Not true or false. Work, in the sense of letting you live without contradicting the way you are actually built.

Next: science as an instrument, and what it tells us about ourselves.


Part II. Science

2.1. A poor tool, but the best we have

The first part lands on this: the only criterion available to us is whether something works. Science is that criterion formalized. Hypothesis, test, result. A systematic effort to tell what actually works from what only appears to.

This is what makes science the best tool we have. Best, because it comes with a self-correction mechanism built in: when a result fails to replicate, the theory gets revised. Religion does not revise its doctrines in light of new data. Philosophy can debate a thesis for a century without any way to test it. Science makes mistakes, but it has a procedure for finding them.

"Proved by science" does not mean "true," though. It means: under these conditions, on this sample, using this method, this result was obtained and was reproducible. A scientific result is not a verdict on reality. It is the best approximation currently available.

That distinction seems pedantic until you notice how often the approximation has changed. Earth used to be the center of the universe; Ptolemy's epicycles gave accurate predictions and held for fifteen hundred years. Then they stopped. Stomach ulcers were caused by stress and poor diet — everyone knew this — until Barry Marshall swallowed a culture of Helicobacter pylori, gave himself an ulcer, and cured it with antibiotics [12]. Dietary fat was blamed for heart disease, and national food policies were built on the claim. Thirty years later the picture turned out to be much more complicated, and part of the reason for the confusion was that the sugar industry had paid for research designed to point attention elsewhere [13].

There are also systemic problems. In 2015, a team led by Brian Nosek tried to reproduce one hundred studies from top psychology journals. Fewer than half reproduced [14]. This is the replication crisis, and it is not confined to psychology. Biomedicine, economics, and nutrition show the same pattern. The causes are concrete. P-hacking, where researchers tune analytical parameters until they hit statistical significance. Publication bias, where journals print positive results and shelve negative ones. Career incentives that reward scientists for volume of publications rather than accuracy. The knowledge-production system keeps generating a certain amount of garbage. This is not a conspiracy. It is the predictable output of the incentives the system runs on.
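
One of these mechanisms is concrete enough to simulate. The sketch below is my own illustration, not a reconstruction of any study cited here. It models optional stopping, a common p-hacking pattern: test after every batch of data and stop the moment p drops below 0.05. The data contain no real effect at all, so every "significant" result is a false positive.

```python
import math
import random

def p_value(xs):
    # Two-sided z-test of "the mean is zero", assuming unit-variance data.
    z = abs(sum(xs)) / math.sqrt(len(xs))
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

def false_positive_rate(peek, trials=2000, batch=10, max_n=100, seed=7):
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        xs = []
        found = False
        while len(xs) < max_n:
            xs += [rng.gauss(0.0, 1.0) for _ in range(batch)]  # pure noise: no effect exists
            if peek and p_value(xs) < 0.05:  # test early, stop on "success"
                found = True
                break
        if not peek:
            found = p_value(xs) < 0.05       # one test, at the planned sample size
        hits += found
    return hits / trials

honest = false_positive_rate(peek=False)  # stays near the nominal 5%
hacked = false_positive_rate(peek=True)   # lands several times higher
```

Nobody in the simulation fakes data. The honest procedure produces false positives at roughly the advertised five percent; the peeking procedure manufactures significance out of the same noise simply by choosing when to stop looking.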

None of this is a reason to abandon science. It is a reason to treat science as a process, not as an oracle. Not to turn it into the new religion. Any given study may be wrong. A meta-analysis is more reliable, but still no guarantee. Scientific consensus has shifted before and it will shift again.

I say this here because what follows leans on research. "Leans on" means "uses as the best available option," not "accepts as final truth." The research may not be final truth. But while the model works, I use it.

2.2. The prediction machine

Over the last thirty years, several lines of neuroscience have come together around a single picture. None of them claims to be absolute; they are the approximations I just described.

Karl Friston proposed treating the brain as a machine whose job is to minimize prediction error [15]. The brain is constantly generating a model of what is about to happen in the next instant, and comparing it to what the senses actually deliver. The gap between the two, the prediction error, is the signal the system responds to. In practice it looks like this. You walk down a familiar staircase and misjudge a step. Your foot drops into empty space, your body jolts, your heart rate spikes, your hands shoot out. Nothing is really wrong. The prediction failed to match reality, and the system reacted automatically. Perception works the same way, except the errors are usually tiny and you do not notice them.

Andy Clark pushed this further: the brain is not a computer taking input in and turning it into output [16]. It is generating a picture of its surroundings all the time, and using incoming data mainly to correct that picture. The classic example is the retinal blind spot. Each eye has a patch where there are no receptors, and yet you never see a hole in your visual field. The brain fills it in. It is not receiving information, and it is not processing the absence of information. It is continuing to generate the model as if the data were there.
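
The loop is simple enough to caricature in a few lines of code. This is a toy of my own, not Friston's free-energy formalism; every name and number in it is illustrative only.

```python
def update(prediction, observation, rate=0.3):
    """One cycle of the toy loop: measure the error, correct the model."""
    error = observation - prediction   # prediction error: the signal the system runs on
    return prediction + rate * error   # nudge the model toward what actually arrived

prediction = 0.0
for observation in [1.0] * 10:         # a stable, familiar environment
    prediction = update(prediction, observation)
# By now the prediction has settled near 1.0 and the errors are background noise.

surprise = abs(5.0 - prediction)       # the missed stair: a large error, a large reaction
```

Ten repetitions of the same input and the model has converged; errors shrink toward zero and nothing demands attention. Then an input of 5 against a prediction near 1: the error jumps, and that jump, not the input itself, is what the system reacts to.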

Something non-obvious about motivation follows from this. Moving toward a goal is a way of reducing uncertainty. A system with a model of the future generates specific predictions and tests them. If they keep matching reality, things run smoothly. A system with no goal and no direction has nothing to predict. The model of the future goes blurry, the incoming data has nothing to match against, and background uncertainty rises. This is why people in bad psychological states cling to routine — not for pleasure, but for predictability. Morning coffee, familiar route, work tasks. All of it reduces uncertainty, even when it produces no enjoyment. The reverse also holds: someone whose familiar structure gets stripped away, who gets fired or divorced or moves to another country, often slides into anxiety even when, on paper, things have improved. The system has lost the model it used to make predictions with, and until it builds a new one, the background feels unsafe.

2.3. Who is in charge

The prediction machine runs mostly without our involvement. But how much is "mostly"?

Intuitively, it feels like "I" make the decision and the body carries it out. The experimental evidence paints a different picture.

Michael Gazzaniga spent decades studying patients whose corpus callosum had been surgically cut to treat severe epilepsy [17]. In one of the classic experiments, the right hemisphere was shown one image — a snowy scene, say — while the left hemisphere was shown a different one, a chicken. The setup exploited the fact that the left visual field is processed by the right hemisphere and vice versa, so each hemisphere only saw one image.

The patient then used his left hand, controlled by the right hemisphere that had seen the snow, to pick out a shovel. Asked why he picked the shovel, his left hemisphere, which had not seen the snow but controlled speech, did not say "I don't know." It immediately invented an explanation, and the explanation was wrong: "The shovel is for cleaning out the chicken coop." Gazzaniga called this module the interpreter. In split-brain patients its work is laid bare, but Gazzaniga argues the interpreter runs in everyone. The split brain just makes it visible.

Nisbett and Wilson, in 1977, showed the same phenomenon in people with normal brains [18]. In one experiment, shoppers were asked to pick the best pair of pantyhose from several laid out in a row. The pantyhose were identical. People consistently chose the rightmost pair, a well-known position effect. But when asked why they had chosen that pair, nobody said "because it was on the right." They pointed to the texture, the color, the feel — confidently explaining a choice whose real cause they had no access to.

Benjamin Libet, in 1983, found that the brain begins preparing for a movement hundreds of milliseconds before the conscious decision to move [19]. This was widely taken as evidence that consciousness does not initiate action. Aaron Schurger, in 2012, offered an alternative: the signal Libet read as the start of the decision might actually be random neural noise, with the actual decision occurring later, close to the moment of awareness [20]. The question is not settled. But for this argument it does not need to be. Gazzaniga and Nisbett show directly that the brain invents explanations without knowing the real causes, and it does so confidently and without hesitation.

Daniel Kahneman described the same picture from a different angle [21]. System 1 is fast and automatic: you are reading these words right now without "deciding" to understand English. System 2 is slow and conscious: if I ask you to multiply seventeen by twenty-four, you feel the effort. Most of what we do during the day is System 1. System 2 only engages when System 1 cannot handle something, and it disengages at the first opportunity, because it costs energy. You do not choose which hand to pick up a cup with. You do not decide which muscles to tense to keep your balance. You do not plan how to articulate the next word. All of it is done for you, and "you" only find out about it afterwards.

Antonio Damasio showed what automatic decisions actually look like in practice, and what role the body plays in making them [22]. He worked with patients who had damage to the ventromedial prefrontal cortex, the region that connects emotion to decision-making. These patients kept their intelligence, their logic, and their memory. They could list every argument for and against any option. But they could not choose. One patient spent thirty minutes deciding which day to schedule his next appointment, weighing his schedule against the weather against his other commitments, until Damasio picked for him. Without a bodily signal of "this one is good" or "this one is bad," the logic keeps running, but the process never closes. The analysis spins in place like an engine with no clutch. Damasio called these signals somatic markers: the slight tightness in the body for the bad option, the sense of ease for the good one. This is what actually brings a decision to a close. The familiar split between reason and feeling, where reason is useful and feeling gets in the way, does not correspond to what the brain actually does. Any real decision is a fusion, and without the emotional component it never finishes.

Gazzaniga, Nisbett, Kahneman, and Damasio were studying different things by different methods. They all reached a similar observation: what we experience as conscious control is mostly reconstruction after the fact. The brain makes a decision or a choice, consciousness picks up the result and builds a story around it, complete with causes and motives. This does not mean consciousness is useless. Kahneman's System 2 does exist, and the time you sat down to budget your month or weigh whether to change jobs, that was it. But it engages rarely, runs slowly, and burns a lot of energy. The default mode is automatic, with explanation supplied after the fact.

One practical consequence: "just decide to be happy" or "just stop being anxious" does not work. The decision is System 2. The state is System 1. There is no direct wire between them. Ordering yourself to stop being afraid is roughly like ordering your pupil not to contract in bright light. To change a state, you have to change the inputs System 1 responds to: environment, habits, workload, sleep. You do not make a decision to feel differently. You construct the conditions under which the system will come to feel differently on its own.

2.4. Baseline and change

If a state is shaped by its inputs, which inputs matter more: big events, or the daily background?

Philip Brickman, in 1978, compared happiness levels of lottery winners and of people who had been paralyzed in accidents [23]. The result was counterintuitive. A year later, the difference between the groups was minimal. Both the jackpot and the catastrophe had faded, and people had returned to their baseline. The brain recalibrates expectations to the current reality: what was a joy yesterday is just normal today. This is hedonic adaptation.

The implication for daily life is direct. A promotion, a move, a purchase — each of these delivers a spike that fades within weeks or months. We know this from our own experience. A new phone is exciting for a week, then it is just a phone. A new apartment is exciting for a month, then it is just an apartment. Chasing events is a bad strategy, because each next event has to be bigger than the last, and resources are finite.
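
The recalibration is easy to caricature. The toy below is my own illustration, not a model from the literature: the felt response is the gap between stimulation and expectation, and expectation slowly drifts toward whatever stimulation has recently been.

```python
def simulate(stimulus_stream, adaptation_rate=0.1):
    """Toy hedonic adaptation: respond to the gap, then recalibrate."""
    expectation, felt = 0.0, []
    for stimulus in stimulus_stream:
        felt.append(stimulus - expectation)                        # the felt response
        expectation += adaptation_rate * (stimulus - expectation)  # expectation catches up
    return felt

# A quiet life, then one big event that stays: the new phone, the new apartment.
spike = simulate([0.0] * 5 + [10.0] * 30)
# The response starts at full strength and decays toward zero as the event becomes "just normal".
```

The event itself never goes away; only the response to it does, because the expectation it is measured against has moved. In this caricature, that is the whole of hedonic adaptation.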

What works instead is the steady background. Sleep, habits, environment, daily workload, the quality of your relationships. These do not produce peaks, but they shape the baseline that the brain returns to after every spike.

Which leaves the question: can the baseline itself be shifted? Or are we stuck with whatever genetics and childhood handed us?

Until late in the twentieth century, the adult brain was thought to be fixed. What you had was what you had. In 2000, Eleanor Maguire showed that London taxi drivers, who spend years memorizing the map of the city, had physically larger hippocampi than control groups [24]. Not from birth. The hippocampus grew on the job. The brain reorganizes itself throughout life. New synaptic connections form, old ones weaken, and regions sometimes take over functions of damaged ones. This is neuroplasticity, and it is what makes the whole thing workable. If the brain were fixed, everything I have just described would be a diagnosis: here is how you are built, make peace with it. But the brain does change, which means the baseline can be shifted. Not by an act of will, not instantly, not on demand. Through systematic change of the inputs: habits, environment, workload, information, substances. Slowly, yes. But possibly.


Part III. The idea

3.1. What emerges

The first part ended on the conclusion that no absolute foundation exists, and probably none can. The second showed how the system we happen to be is actually built. The two lines converge. If the only criterion available to us is whether something works, neuroscience gives us a map of what works and why. Not a map of reality. A map of ourselves. Rough, incomplete, but good enough to navigate by.

This changes how the questions traditionally called philosophical get posed. Instrumental Nihilism does not answer them. It translates them out of "what is true" and into "what works, and for whom."

What is the meaning of life? becomes: what configuration of meaning, for a particular person, cuts down on uncertainty, provides a sense of direction, and lets them function. What is moral? becomes: what rules let people cooperate without destroying each other. Which political system is the correct one? becomes: which institutions do not collapse under actual human constraints — selfishness, bounded rationality, incomplete information.

Each of these deserves its own essay.

3.2. Limits

Every framework has things it does not cover. Those are worth naming once, here, so we do not have to keep coming back to them.

The hard problem of consciousness is still open. Everything in Part II describes mechanisms: how the brain predicts, how it makes decisions, how it constructs explanations after the fact. But why any particular configuration of neurons should be accompanied by experience, rather than processing signals in the dark, has no answer. This is what Chalmers calls the hard problem [25], and Instrumental Nihilism does not solve it. Neither does anyone else. The framework works without solving it, the way an engineer works with electricity without having a final theory of charge.

Instrumental Nihilism is materialist, not reductionist. Saying "meaning is a property of the model, not of the world" is not the same as saying "meaning is an illusion, forget about it." Pain exists inside a particular system and has no external source, and it still hurts. The sense of emptiness at four in the morning, the joy of a good conversation, the knot of anxiety before a major decision — these are real in the only sense available to us. They are states that influence how the system behaves.

The model is an approximation, not a final theory. Part I explicitly says that any model is the best one available right now. This one is the same. It may turn out to be incomplete, or wrong, or unsuitable for someone else. The neuroscience Part II rests on is itself in progress. Models are being revised, data refined, consensus shifting. In twenty years the map may look different. To demand finality from a framework is to demand the very thing we gave up at the start.

3.3. The model

If you have read the whole essay, what follows is a map of where we have been. If you started here, it is the minimum you need for what comes next.

Foundations. There is no absolute foundation for knowledge. Every justification ends in regress, circularity, or dogma. Our equipment — language, logic, a brain — probably cannot formulate a foundation. This is not a problem. People act on assumptions without waiting for them to be justified. Science is the best available formalization of "does it work," but it is a process, not an oracle, and "proved by science" means "best approximation available at the moment."

The system. The brain is a prediction machine that continuously generates a model of the world and corrects it against incoming data. Most processes are unconscious. Consciousness mostly constructs explanations after the fact, rather than directing events. Emotions are not an obstacle to thinking. They are a component of decision-making, without which a decision never closes.

State. The baseline state is set by the background — daily habits, environment, sleep — not by individual events. The brain adapts to whatever level of stimulation it currently has: peaks fade, the background remains. But the brain does reorganize itself throughout life, and the background can be shifted. Not by willpower, but by systematically changing the inputs.

Meaning. "Meaning," "value," "morality" are states of a particular kind of system. They have no external source. They cannot be found the way you find lost keys. But they can be studied, adjusted, and changed, because they are ours, and they are produced inside us.

That is the foundation. What follows is practice.


Author

Maksim Bolgarin

April 2026

maxbolgarin.com

m@maxbolgarin.com


References

1. Harris, S. & Craig, W.L. (2011). Is Good from God? Debate at University of Notre Dame, April 2011. Recording: youtube.com

2. Craig, W.L. (1979). The Kalām Cosmological Argument. London: Macmillan.

3. Descartes, R. (1641). Meditationes de Prima Philosophia (Meditations on First Philosophy). Critique of the cogito: Williams, B. (1978). Descartes: The Project of Pure Enquiry. Penguin.

4. Diogenes Laërtius. Lives of the Eminent Philosophers, IX.88–89. Modern treatment: Fogelin, R. (1994). Pyrrhonian Reflections on Knowledge and Justification. Oxford University Press.

5. Hume, D. (1739). A Treatise of Human Nature, Book I, Part III. Modern analysis of the problem of induction: Howson, C. (2000). Hume's Problem: Induction and the Justification of Belief. Oxford University Press.

6. Quine, W.V.O. & Ullian, J.S. (1970). The Web of Belief. New York: Random House. See also: Quine, W.V.O. (1951). Two Dogmas of Empiricism. The Philosophical Review, 60(1), 20–43.

7. Kant, I. (1781). Kritik der reinen Vernunft (Critique of Pure Reason). Accessible introduction: Gardner, S. (1999). Kant and the Critique of Pure Reason. Routledge.

8. Husserl, E. (1913). Ideen zu einer reinen Phänomenologie und phänomenologischen Philosophie (Ideas Pertaining to a Pure Phenomenology). Introduction: Zahavi, D. (2003). Husserl's Phenomenology. Stanford University Press.

9. Wittgenstein, L. (1921). Tractatus Logico-Philosophicus, proposition 7.

10. McGinn, C. (1989). Can We Solve the Mind–Body Problem? Mind, 98(391), 349–366.

11. Gödel, K. (1931). Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I. Monatshefte für Mathematik und Physik, 38, 173–198. Accessible introduction: Nagel, E. & Newman, J.R. (1958). Gödel's Proof. New York University Press.

12. Marshall, B.J. & Warren, J.R. (1984). Unidentified curved bacilli in the stomach of patients with gastritis and peptic ulceration. The Lancet, 323(8390), 1311–1315. Nobel lecture: Marshall, B.J. (2005). nobelprize.org

13. Kearns, C.E., Schmidt, L.A. & Glantz, S.A. (2016). Sugar Industry and Coronary Heart Disease Research: A Historical Analysis of Internal Industry Documents. JAMA Internal Medicine, 176(11), 1680–1685.

14. Open Science Collaboration (2015). Estimating the reproducibility of psychological science. Science, 349(6251), aac4716.

15. Friston, K. (2010). The free-energy principle: a unified brain theory? Nature Reviews Neuroscience, 11(2), 127–138. See also: Friston, K. (2005). A theory of cortical responses. Philosophical Transactions of the Royal Society B, 360(1456), 815–836.

16. Clark, A. (2013). Whatever next? Predictive brains, situated agents, and the future of cognitive science. Behavioral and Brain Sciences, 36(3), 181–204. Book: Clark, A. (2015). Surfing Uncertainty: Prediction, Action, and the Embodied Mind. Oxford University Press.

17. Gazzaniga, M.S. (1998). The Split Brain Revisited. Scientific American, 279(1), 50–55. Book: Gazzaniga, M.S. (2011). Who's in Charge? Free Will and the Science of the Brain. Ecco/HarperCollins. On the interpreter: Gazzaniga, M.S. (2000). Cerebral specialization and interhemispheric communication. Brain, 123(7), 1293–1326.

18. Nisbett, R.E. & Wilson, T.D. (1977). Telling More Than We Can Know: Verbal Reports on Mental Processes. Psychological Review, 84(3), 231–259.

19. Libet, B., Gleason, C.A., Wright, E.W. & Pearl, D.K. (1983). Time of Conscious Intention to Act in Relation to Onset of Cerebral Activity (Readiness-Potential). Brain, 106(3), 623–642.

20. Schurger, A., Sitt, J.D. & Dehaene, S. (2012). An accumulator model for spontaneous neural activity prior to self-initiated movement. Proceedings of the National Academy of Sciences, 109(42), E2904–E2913.

21. Kahneman, D. (2011). Thinking, Fast and Slow. New York: Farrar, Straus and Giroux.

22. Damasio, A. (1994). Descartes' Error: Emotion, Reason, and the Human Brain. New York: Putnam. On somatic markers: Damasio, A., Tranel, D. & Damasio, H. (1991). Somatic markers and the guidance of behavior. In H.S. Levin, H.M. Eisenberg & A.L. Benton (Eds.), Frontal Lobe Function and Dysfunction, 217–229. Oxford University Press.

23. Brickman, P., Coates, D. & Janoff-Bulman, R. (1978). Lottery Winners and Accident Victims: Is Happiness Relative? Journal of Personality and Social Psychology, 36(8), 917–927.

24. Maguire, E.A., Gadian, D.G., Johnsrude, I.S., Good, C.D., Ashburner, J., Frackowiak, R.S.J. & Frith, C.D. (2000). Navigation-related structural change in the hippocampi of taxi drivers. Proceedings of the National Academy of Sciences, 97(8), 4398–4403.

25. Chalmers, D. (1995). Facing Up to the Problem of Consciousness. Journal of Consciousness Studies, 2(3), 200–219. Book: Chalmers, D. (1996). The Conscious Mind: In Search of a Fundamental Theory. Oxford University Press.
