Thursday, October 8, 2020

The Bible is a communications model.

The following is a response to the question: If an idea like Walsh's "Peasant Revolt" Theory is true, is the narrative in the Hebrew Bible “false”?  Why?  In what sense?

From a certain point of view all abstractions are inherently false. For example, if I were to show you a communications model like this...

[Image: a sender/message/receiver communications model diagram, via DonCrawley.com]

...you might be able to gather some insight into how human communication works. However, this isn't literally how communication works, and there are processes occurring which are far too complicated for us to grasp. We judge a model, then, not by whether it is 100% accurate, but by how insightful it is or isn't. Does it offer insight into the subject matter at hand, into the process of model-making itself, or into the human condition? Models are a form of abstraction, and the same questions apply. Abstractions which hold many insights are the ones which are valuable to engage with, as long as they are engaged with responsibly.

Both Walsh's Peasant Revolt Theory and the Hebrew Bible are abstractions of history. They don't fully capture all of history, but are instead zoomed out pictures. The question becomes, which holds insight?

PRT Insight into Abstraction-Making: I love historical information that asks us to rethink how we conceptualize the past. Any abstraction (as long as it gets used responsibly, as I'll get to in a moment) which shifts our perspective can provide value.

PRT Insight into Human Condition: There's something beautiful about disparate groups of marginalized people overthrowing an oppressive power structure and uniting into a group which would go on to change the course of history in so many ways. 

PRT Insight into Subject Matter: It seems that the PRT gives us some potential literal insight into what happened, physically, in a way that the HB does not. Assuming the PRT was formed intelligently, revealing the origins of an important religious and ethnic group is pretty insightful.

HB Insight into Abstraction-Making: Myth-as-abstraction is an incredibly useful tool. It makes the abstraction more engaging and potentially speaks to us on a deep psychological level that a chart simply cannot.

HB Insight into Human Condition: The Hebrew Bible is a toolset for learning to trust God in the face of despair and pain. It's so powerful in that regard, and I reckon that the contents of the Bible were instrumental for many in the face of the trauma of the Shoah.

HB Insight into Subject Matter: The Bible offers us a ton of insight into how many Jewish people see themselves, both historically and psychologically. I don't think one could understand Jewish culture without engaging with the Bible.

So it turns out that both abstractions hold insights. That's cool. The only important thing is that we engage with both of them responsibly.

For the PRT, I think the key to engaging responsibly is to not shove it in people's faces as a way to "prove" their religion false. Because, as I've pointed out, all abstractions are false, and (at least under the umbrella I'm working from) we're only concerned with the value they offer.

For the HB, things are a little more complicated. It's such a thorny issue that I'm going to take a step back and talk about two other topics; real quick, I promise.

1) QAnon and the modern conspiracy theorists.

Do these conspiracy theories, these abstractions, hold any insight? Of course they do! Let's look at a simple one: "Hillary Clinton is a lizard person."

Be real, she's kind of a lizard person. I find Clinton to be one of the most interesting people in the modern political sphere, and I do have love and empathy for her because I'm a spiritually minded person. I wish she had won over Donald Trump, and despite my dissatisfaction with her policies, I would have been happy to elect the first woman president. However, the idea of calling her a "lizard person" is funny because she is a bit cold, a bit distant, and she's got that scale-like armor. It's an abstraction, and it's false, but it does hold insight.

However, this conspiracy theory and others like it do not get used responsibly. Because adherents believe that she literally is a lizard person and literally drinks the blood of children, they voted against her and for a person who wasn't qualified to hold stewardship over the country. Also note the parallels between the blood-drinking beliefs of QAnon believers and anti-Semitic beliefs, as we discussed in the lecture.

2) The Christian Crusades, the killing of people to take the "Holy Land," and all murder in the name of scriptural abstractions read like irresponsible uses of them.

--

I avoid using the Hebrew Bible as an example because I'm not equipped for it and because those not equipped for it often stray into anti-Semitism (either by accident or because they are dog whistling). I will note that using the HB abstraction as a justification for oppressing Palestinians seems irresponsible to me, but I would rather point toward Jewish writers tackling the subject than be a bull in a religious studies china shop.

--

All of this is to say that I don't believe the question of "false" to be particularly useful as a black/white label, as all abstractions hold falsities. Instead, we should focus on what insights an abstraction gives us, whether those insights are valuable and in what fields, and how to use them responsibly. Innumerable Jewish people have read the Hebrew Bible, extracted immense insight from it, and used those insights in wonderful ways. In this way, the abstraction of true/false doesn't hold much insight.

Sunday, October 4, 2020

HELP I’M TRAPPED INSIDE A COMPUTER: The Chinese Room Argument

The Chinese Room argument tries to show that neither computer programs nor any other form of physical symbol system can be intelligent outside the context of the hardware they run on. This is to say that Strong AI cannot exist on modern computers, while Weak AI can.

Imagine a room. In that room is an English-speaking man with a giant book of instructions and a set of cards which holds every possible Chinese character. The man has a way to receive messages from the outside world, communicated in Chinese symbols. He doesn’t speak Chinese, but his big book contains English-written instructions, with pictures of Chinese characters, telling him which of his cards to use in response to each message. It just so happens that his instruction book is written really well, so well in fact that every time he receives and sends a message the response seems coherent. For example, if the Chinese-speaking outsiders send in a message which asks, “What did you think of the fifth Fast & Furious movie?” his response would be, “I loved the chase sequence with the cars and the giant safe,” even though both messages were in a language the man cannot understand.

Or can he understand it? That is the core question raised by the Chinese Room Argument. If one believes that he cannot speak Chinese, as John Searle (the creator of the thought experiment) does, then this shows that Strong AI cannot arise solely from a physical symbol system (1980). The metaphor is as follows: physical symbol systems work on a process of input/processing/output. Computer programs do this, the human brain does this, and the Chinese Room does this. However, as the thought experiment tries to show, it is possible to run a computer program which engages with language but which does not “understand” (more on understanding later) it. The man in the room, alongside his equipment, replaces the hardware a program would run on, but his instruction book is a computer program for all intents and purposes, as both run on a process of strict instructions, such as if/then rules.
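The man's rule book can be caricatured as a plain lookup table. This toy sketch (the mapping, the sample sentences, and the function name are my invention, not Searle's) shows how a program can emit sensible-looking Chinese replies through pure symbol matching, with no semantics anywhere in the code:

```python
# The "instruction book": input symbol strings mapped to output symbol strings.
# Nothing in this table carries meaning to the program; it is pure syntax.
RULE_BOOK = {
    "你觉得《速度与激情5》怎么样？": "我喜欢那场拖着巨大保险箱的追车戏。",
}


def chinese_room(message: str) -> str:
    """Return the scripted card for a message, like the man matching
    characters against his book. No understanding is required or present."""
    # Unrecognized input gets an uncomprehending placeholder card.
    return RULE_BOOK.get(message, "？")


reply = chinese_room("你觉得《速度与激情5》怎么样？")
print(reply)
```

From the outside, the reply looks like a film opinion; on the inside, it is one dictionary lookup, which is exactly the gap between syntax and semantics the thought experiment trades on.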

Searle thinks about language through the context of intentionality. Human beings who are speaking a language have a purpose and a semantic understanding of what they are saying. For example, someone who is asking about Fast Five knows what the film is, and they know what a film is, and they know that other people have thoughts about films. Someone who is responding with their opinion on Fast Five has developed that opinion from an evaluation of their qualitative experience with the film. Use of language is not purely syntactical; when people talk to each other, Searle thinks they are doing more than just following a set of rules. This is because humans are not just a program; they are a “program” run on a specific set of hardware which creates intentionality. The brain is a semantic-making machine, and physical symbol systems on their own are not. According to Searle, without semantic understanding, syntactic understanding is not sufficient for language comprehension (1980). Given the assumptions of this argument, for us to create Strong AI we would also have to create Strong Hardware equivalent to the human brain, and no discussion of intelligence could exist only at the computational and algorithmic levels.

I am not convinced by the Chinese Room argument. I am also not convinced by the counterpoints. I find the whole discussion to be rather semantically driven. I’m not convinced that brains and computers are synonymous in any meaningful way. I’m not familiar with any branch of science that leans so heavily on metaphors or analogies, outside of the way that humans talk about psychology. As far as I’m aware there’s no, “Solar systems are constructed like molecules,” because they, “are structured, linked by forces, and act holistically,” or similarly constructed theories in physics or chemistry. The most famous metaphorical theory I’m aware of is the use of the “string” metaphor in string theory, but most of the physicists I know and/or follow are pretty dismissive of its relevance today. However, the Chinese Room Argument is a metaphor which is trying to take down another metaphor, the cognitive-scientist view that the mind is a form of software and the brain is a form of hardware, so it's boring analogy all the way down.

I do want to talk about a potential argument against the Chinese Room that hasn’t been brought up yet in any of my reading: what if we live inside a computer simulation?

Searle believes that a mere program could not achieve general intelligence, and this includes a program which tries to digitally simulate the physical processes of the brain. He proposes an alternative thought experiment where the man in the Chinese Room manipulates a series of complicated pipes as a response to the input Chinese characters. Once he manages to get the pipes into the correct sequence, they are rigged to give out an appropriate Chinese output. In this instance, the physical aspects of the brain are represented as the series of pipes, but the man and the pipes “certainly” don’t understand Chinese (1980).

Enter philosopher Nick Bostrom. Bostrom believes that there is a strong chance that we are living inside a computer simulation right now (2003). The argument goes like this: let’s imagine that humanity manages to survive into a “posthuman” age where we have achieved a technological jump beyond our current levels of imagination; we as a species have done this before, per the jump from the Bronze Age to now. This would likely include very powerful computing technology. If we were to reach that stage, there is a high likelihood that we would use our computing technology to simulate our evolutionary history. In fact, we would likely run thousands upon thousands of different simulations to gather as much information as possible. Let’s imagine that these simulations are powerful enough to simulate the human brain, something Searle admits may be possible but still dismisses as “not understanding.” For the sake of argument, pretend that the posthuman society is running 3,000 simulations at any given time. This means that there is one “real world” of human brains and 3,000 simulations of human brains going at any given time. Statistically speaking, then, we are probably not living in the 1-in-3,001 world which eventually reaches a posthuman stage, but instead in one of the 3,000-in-3,001 worlds which already reached a posthuman stage and then simulated it backwards. We are probably some pretty Strong AI.
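The arithmetic above is simple enough to sketch directly. This uses the post's illustrative figure of 3,000 simulations (my framing; Bostrom himself does not commit to a specific number):

```python
# Bostrom-style back-of-the-envelope: one "base" history coexisting with
# 3,000 brain-level simulations of that history.
simulated_worlds = 3_000
base_worlds = 1
total_worlds = simulated_worlds + base_worlds  # 3,001 candidate worlds

# If an observer cannot tell which world they are in, their odds of being
# in the base reality are uniform over all candidate worlds.
p_base = base_worlds / total_worlds            # 1/3,001, about 0.03%
p_simulated = simulated_worlds / total_worlds  # 3,000/3,001, about 99.97%

print(f"P(base reality) = {p_base:.5f}")
print(f"P(simulated)    = {p_simulated:.5f}")
```

The force of the argument sits entirely in that uniform-odds assumption; the division itself is trivial once you grant it.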

If we live in a simulated world, would Searle argue that he and I and nobody in the world holds understanding or intentionality? Maybe from some sort of existential perspective he’d be right, but then he’d be crafting an argument around the concept that he doesn’t understand anything, which largely makes his point moot. If nothing else, this argument gives me a bit of empathy for the man inside the Chinese Room. What is his name? How long has he been in there? Does he have access to the bathroom? What is his concept of God?

Citations

Bostrom, N. (2003). Are We Living in a Computer Simulation? The Philosophical Quarterly (1950), 53(211), 243-255. Retrieved October 4, 2020, from http://www.jstor.org/stable/3542867

Searle, J. R. (1980). Minds, brains, and programs. Behavioral and Brain Sciences, 3(3), 417-457.

Van Manen, H., Atalla, S., Arkhipov-Goyal, A., Sweijs, T., Hristov, A., Zensus, C., & Torossian, B. (2019). Macro Implications of Micro Transformations: An Assessment of AI’s Impact on Contemporary Geopolitics (pp. 20-23, Rep.). Hague Centre for Strategic Studies. doi:10.2307/resrep19557.4


Godly Expectations: Monasticism and Social Norm Dynamics

Amma Sarah of the Desert Mothers once rebuked a male monastic by saying, “It is I who am a man; and you are like women!”[1] In a similar sub...