Keirsey's Game

Science is my religion, and I have faith in reason.

by David M. Keirsey
(under construction)

We assume the universe is rational, that is, there exists some rationality within the universe.  Otherwise, science is hopeless, and we can all go home.  Logic entails relations and things.

To understand Keirsey's game, I suggest first reading Robert Rosen's Life Itself and Anticipatory Systems.  Second, reading a book on mathematical logic, such as Mendelson or Kleene, would be good.  Boolean logic and an understanding of Gödel and NP-completeness are useful.  Reading the dictionary and looking up the roots and definitions of words for a couple of years may help.   Assuming you are very familiar with Langton, Crutchfield, and Kauffman, reading a book such as Lovelock's Gaia or Lynn Margulis's Microcosmos, along with Lee Smolin's Life of the Cosmos, would be good.  Being familiar with Prigogine's work helps a lot.  Knowing a lot of mathematics helps, but the more you know, the more confusing it may be (that's the problem with being a mathematician).  Being familiar with John Baez's analysis of things would help also, since between Robert Rosen and John Baez, they have deconstructed most of mathematics and hard science.  Deconstruction of philosophy (and religion) can be done if you know enough about Kant and Hume, and knowing a bit of Hegel would be good.  (The Story of Philosophy by Will Durant and A History of God by Karen Armstrong are good overviews.)

You could simply view Keirsey's game as trying to keep on two edges of the edge-of-chaos (interstices?): between the chaos and order of the semantics and syntax of concepts (philosophy and mathematics), and the chaos and order of the semantics and syntax of percepts (science and religion).  The point is not to fall in love with syntax (as Hilbert did, and as current particle physicists do) or with semantics (as philosophers and religionists do).

Keirsey's game is a building of abstract relations attached to "meaning": in other words, English words.  One of the guiding principles of the game is to rationally and empirically assign English words to a mathematical category.  The assignment is *ambiguous* in that a word is assigned to several categories (and vice versa), but in actuality the assignment is *unambiguous*, for the context disambiguates the word (just as in English sentences).  Moreover, the assignments are very principled in that the words are *only* assigned in three contexts, so the ambiguity is very precise (as opposed to natural language, where words have any number of uses).  Initially, as the game proceeds, the English words don't "mean" much; in fact they "mean" only what is assigned (by naming), nothing more.  To understand the "game" you might try substituting "nonsense" words (or integers) for the words used.  But to remove all meaning from this game is to make Hilbert's mistake.  And to think that "words" are arbitrary is to miss the point also.  On the other hand, we are trying to get away from chasing words, or thinking the power is in the words.

Keirsey's game is a method of rigorous analysis and synthesis of logical models that try to mirror natural systems.   The method is a mixture of *my own brand* of category theory, word analysis (using computer science, etymological analysis, and mathematical logic), and the comparative analysis of natural systems (it's useful to know as much science and history (physics, chemistry, biology, economics, computer science, mathematics, psychology, philosophy, etc.) as one can cram into one's brain), using Robert Rosen's insights in the analysis and synthesis of "models" (he calls it the modeling relation).

I start from very simple concepts: in fact I start with the simplest concept (and the most complex), the concept (and percept) of "no".   In the notion of categories, I am using *finite* categories, building up from the most "finite" one, Category-0, to construct other categories.  Of course, in the best Hegelian tradition, the assumption of the "finite" implies the "infinite", but for important epistemological reasons I keep my synthesis *explicitly* finite, to be precise.  And to keep track of things, one should use both Boolean numbers and integers to attach to the words and categories, to make things understandable and to realize the depth of complexity in the analysis or synthesis (or, more importantly, the lack of it), and again to be precise.   Keirsey's game is a monolog in the form of a dialog.  The point of the game is to better understand concepts (i.e., words and language about the world) and how the world works.  The game, as a game itself, has no point.  In the beginning, everything seems arbitrary and rather vacuous, which it is.  But there is method to the madness.  For example, one application is that one can better understand an exact relationship between a notion of KIND and a notion of DEGREE.  Another example is understanding the relationship between CHAOS and ORDER (in the science-of-complexity meaning).

The point not to miss when reading the following dialog is that *all* dialectics are *equivalent* (in another sense, the same) in the simplest context.  The same dialectic in a more complex context (as in a more complex essay, a more complex model, or a more complex category) can acquire "more" meaning.  The following dialog can be viewed as the first level of analysis and synthesis of *the* dialectic (for example 0, 1, infinity), in which all "opposites" share a relation (whether it be: good vs evil, god vs not god, universe vs nothing, hot vs cold, smart vs dumb, rich vs poor, fermion vs boson, plant vs animal, prokaryote vs eukaryote, life vs non-life).

Now one should also realize that this is a variation on the Sheffer stroke (a NAND gate).  One can generate an infinite number of "states" by producing a set of logic using ONLY NAND gates.  The difference here from building a "universal" Turing machine (see Rosen for its problems) and symbolic logic is that we are interested in attaching semantics (that is, words) in order to understand and relate the discourse to our "reality".  For example, one might be able to relate the word "degree", as in the meaning of scale, to the word "kind", as in the meaning of category, such that one can say which word is more "complex" (in Rosen's meaning) and how many "bits" of difference there are.  For example, it is fairly clear that the notion of "degree" is more complex (in my terms) than the notion of "kind", for to fabricate the notion of "degree" one must use the concept of kind.
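To make the Sheffer-stroke point concrete, here is a small sketch (my own illustration, not part of the original dialog) showing that NAND alone generates the other basic Boolean connectives:

```python
def nand(a, b):
    """The Sheffer stroke: true unless both inputs are true."""
    return not (a and b)

# Standard constructions built from NAND only:
def not_(a):
    return nand(a, a)

def and_(a, b):
    return nand(nand(a, b), nand(a, b))

def or_(a, b):
    return nand(nand(a, a), nand(b, b))

# Check the constructions against Python's built-in logic over all inputs:
for a in (False, True):
    for b in (False, True):
        assert not_(a) == (not a)
        assert and_(a, b) == (a and b)
        assert or_(a, b) == (a or b)
print("NAND generates NOT, AND, OR")
```

Since any Boolean function can be written in terms of NOT, AND, and OR, this is enough to show NAND is functionally complete.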

The fun actually starts when you try to combine two dialectics, but first things first.

"If you don't understand something said, don't assume that you are at fault."  David W. Keirsey

Dialog 0

So, in the beginning:
We have no categories

In other words, assert no category:  Assume Category-0

Category-0 is the null category.  There are no objects or mappings in the category.

Call Category-0 by another name (the English word):  no
NO <==> Category-0

For amusement, let's call this the concept of "no existence".  This concept requires no context except itself, but it does imply existence.

Category 0 has no objects and no relations.

Why no?  Because that's the way it is; by positing, it becomes.  (Besides, the world (existence) presupposes the concept of non-existence.)

Logic allows us to fabricate something else; let's call it Category-1 (to entail, via final cause, that it's NOT Category 0, that is, NOT no).  Certainly because of ourselves, and because the self is "not nothing".

Why Category-1?  (answer:  Why not? or No no.) (Rosen would say it's the first dualism)

H(A,B) signifies the power category of A and B (all mappings between the objects of A and B, where A and B are sets or categories).
Let Category-1 be H(Category-0, Category-0).  Let us assign it the word not.
NOT <==> Category-1

Category 1 has no objects, and a null relation.
Category 1 contains Category 0. (But is not the same as Category 0)
Category 1 is *the* expansion of Category 0, Category 0 is *the* reduction of Category 1

So what are the objects of Category-1?  There are no objects or relations in Category-0; therefore there are no objects in Category-1.  What are the relations of Category-1?  There is only one relation possible: the null relation.  No has no relation to no other than itself, which is *the* no relation (which in this case is the identity relation).

Let's look at the notion of Not no.  What is H(Category 0, Category 1)?

Since neither Category 1 nor Category 0 has objects, there are no mappings between them except the null mapping, which is one mapping, and which is equivalent to H(Category 0, Category 0).  The result is, itself, not: Category 1.

Let's look at the notion of no not.  What is H(Category 1, Category 0)?

Neither Category 1 nor Category 0 has objects, and Category 0 has no relations, so there are no mappings at all.  The result is, itself, no: Category 0.

So let's fabricate a new category; call it Category-2.  Now we have a choice: we can assign this new symbol to one of three things.
H(Category 1, Category 1)
H(Category 0, Category 1)
H(Category 1, Category 0)

Since we showed that H(Category 1, Category 0) is equivalently reduced (or symbolized) to Category 0, this case is determined uniquely.  In other words, Category 0 symbolizes Category 1 when Category 0 is in the containing context of Category 1.  We showed that H(Category 0, Category 1) is equivalently expanded (or symbolized) to Category 1.  Note the ambiguity of the meaning of "symbolized": Category 1 symbolizes Category 0 in the limited context of Category 0, which is a reduction; or a category can symbolize another category by expansion.  By combining words we can represent these H(x,y) depending on the sentence; therefore we don't "need" a new word for them.

Let us assign Category 2 to H(Category 1, Category 1), and we will call it by the word negation.
Negation <==> Category-2

We have three words: no, not, negation.   These words symbolize (point to) concepts, and at the same time we have assigned them to mathematical categories.  The word "no" is used primarily as a noun, representing the concept of negation without a context.  However, the word "no" can be used as an adjective, as in "no thing".  Thus, the word "no" is ambiguous, for it has two meanings, in some sense, depending on the context.  In language we show that context by sentences.  In English, the words no and not are semantically "equivalent".  The only difference comes when there is a context; however, "no" does not directly imply a context (though all things imply a context, in reality), as opposed to "not", which implicitly implies a context, for one would rightly ask, "not what?".  Thus there is very little difference between "not action" and "no action".  (There is a difference, hence the two words.)

Category-2 has one object, which was derived from the mapping of the null relation of Category 1 to the null relation of Category 1, and one identity relation.  Since the objects of categories are generalized (that is, they have no properties other than what is associated with the category), Category 2 contains Category 1, which contains Category 0.
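The construction so far can be sketched in code.  The following is a toy encoding of my own (the function name H and the list representation are illustrative choices, not anything formal from the dialog): a category is represented as the list of its mappings, and H(A, B) returns the mappings between them, following the conventions argued for above.

```python
from itertools import product

def H(A, B):
    """Toy 'power category': all mappings from the objects of A to the objects of B.
    With nothing on the left there is exactly one mapping (the null mapping);
    with something on the left and nothing on the right, there are none."""
    if not A:
        return [()]          # the single null mapping
    if not B:
        return []            # no mappings into nothing
    return [tuple(zip(A, p)) for p in product(B, repeat=len(A))]

category_0 = []                          # "no": no objects, no relations
category_1 = H(category_0, category_0)   # "not": the null relation, still no objects
category_2 = H(category_1, category_1)   # "negation": the null relation mapped to itself

# The dialog's two reductions:
assert H(category_0, category_1) == category_1   # "Not no" expands to Category 1
assert H(category_1, category_0) == category_0   # "no not" reduces to Category 0
print(len(category_0), len(category_1), len(category_2))   # 0 1 1
```

The two assertions mirror the two arguments above: H(Category 0, Category 1) collapses to Category 1, and H(Category 1, Category 0) collapses to Category 0.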

Now, in normal English, if one says "Not no" I would immediately think "yes".   But it could be argued that "Not no" means "No", in the bad form of the emphatic "Not nohow -- no way, buddy."  In the ultimate reduction context, "Not no" is No, just as negation is no relation.

End Dialog 0

First, let's deconstruct Dialog 0 by displaying only the syntactic information of the dialog:

View 1:
Category-0, Category-1, Category-2

View 2:

View 3:
0 00 000

View 4:

View 5:
0, H(0,0), H(H(0,0),H(0,0))
H(0,H(0,0)), H(H(0,0),0)

View 6:
0: no
H(0,0): not; no
H(H(0,0), H(0,0)): negation; not
H(0, H(0,0)): not no
H(H(0,0), 0): no not
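The expressions of View 5 can be generated mechanically.  Here is a small sketch (my own illustration) that starts from the single symbol "0" and repeatedly forms H(a, b) over all pairs of expressions built so far:

```python
def step(exprs):
    """One round of forming power-category expressions H(a, b) over all pairs."""
    return exprs | {f"H({a},{b})" for a in exprs for b in exprs}

level0 = {"0"}                 # no
level1 = step(level0)          # adds H(0,0): not
level2 = step(level1)          # adds H(H(0,0),H(0,0)), H(0,H(0,0)), H(H(0,0),0)

print(sorted(level2, key=len))  # the five expressions of View 5
```

Iterating step again would generate the next level of expressions, which is where combining two dialectics begins.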

Now, let's deconstruct Dialog 0 by talking about the notion of the dialectic and the generation of philosophic treatises.

Consider the following pairs of words:

Good, Evil
God, No God
Hot, Cold
Universe, Nothing
Rationality, Irrationality
Fermion, Boson
Foobaz, No Foobaz
Foobaz, Wabit

When someone uses words to convey a set of ideas, as in a scientific treatise, philosophic treatise, or religious treatise, what is one doing?  The writer is using language to evoke something in the reader's head.  Of course, language is a rhetorical device, and the goals of the writer with respect to the reader may or may not be accomplished.  All words are ambiguous, but through the construction of the treatise (including the syntax of the language), the writer is usually trying to be more precise in what he is conveying.  Some concepts and percepts are rather concrete, and the reader easily understands the meaning, like "Go down to your local Kmart store", versus some more "abstract" ones, like "The Father is the Godhead of all Creation".  On the other hand, what does "go down" mean?  Does it mean by car, or by foot?  And what does the Kmart sentence mean to a kid learning English in Timbuktu?

The point is that the function of language is to convey meaning, but that meaning depends on both the reader and the writer.  So you can't be sure of what the message is, in the mind of the reader, when you write some treatise.  So are the philosopher, essayist, and scientist condemned to arguing their case just as religious leaders and politicians do?  Are Hawking, Hilbert, and Hegel on the same shaky ground as Hitler and Mao Tse-tung?  Yes, but there is another way out.

In the above dichotomous words, I am using semantics (i.e., referring to external references outside the dialog by using the reader's knowledge of words) to "gain meaning".  However, I included the "nonsense" words foobaz and wabit to "strip" the meanings, so that the reader, instead of looking at words like "god", "fermion", and "universe" as meaning what they mean to him (obviously depending on the knowledge of the reader), sees them more as a set of arbitrary words that are meant to represent the notion of "opposite".  Ah, but what does the word "opposite" mean?

Clearly, in a particular treatise, two particular "opposites" such as "god" vs "satan" or "0" vs "1" have some external referents which the essayist has in his mind, and he hopes the reader has similar referents.  The rest depends on how well the essayist refers to the outside world by other words and by the particular syntax (the construction of the treatise in interpretable form).
However, combining syntax and semantics in a particular way makes it clear what the relations are between the semantics (of the reader) and the semantics (of the writer), assuming you use a clear enough syntax.

If the semantics (the uses of the word: its symbolized meanings) can be controlled precisely, in that the ambiguity of that word (for example the word "no") can be clearly spelled out and agreed between writer and reader (which includes the situation of the writer and the reader being the same), then we can create precise dialogs which explicate meaning in relation to existence.

For example, take the simplest word, "no": it is ambiguous.  Is one talking about "nothing, which represents the concept of complete absence irrespective of anything", or "not nothing, but something else"?  We assign it "meaning" through its two senses (nothing and negation, i.e., 1) null or 2) not something), and we recognize the "depth" of the meaning in context by assigning the number of bits needed to represent this concept relative to all other implicit concepts.   By relating words, such as no <=> not, not <=> negation, negation <=> self, in a precise way (using context explicitly), one can fabricate language.  The second part is to relate this fabrication, by analysis, and to relate the various concepts via precise analogy a la Rosen's category theory.

For example, consider the following table:

Category-0  | Category-1 | Category-2 | Category-0a | Category-1a | Category-2a
1           | 2          | 3          | 4           | 5           | 6
0           | 00         | 000        | 0000        | 00000       | 000000
0 (00000)   | 00 (0000)  | 000 (000)  | 0000 (00)   | 00000 (0)   | 000000 ()
0 (00) 000  | 00 (0) 000 | 000 () 000 | 000 0 (00)  | 000 00 (0)  | 000 000
no          | not        | negation   | self        | it          | identity
no          | not        | negation   | self        | it          |
not         | negation   | self       | it          | identity    | thing
self        | it         | identity   | no          | not         | negation
it          | self       | negation   | not         | no          | identity
kind        | not kind   | unity      | kind        | not kind    | unity
0           | 0          | 0          | not degree  | degree      | amount