Objects as epistemological artifacts
Following up on a few arguments I had at blogs like Gödel’s Lost Letter, vetta project, Machine Learning (Theory) and The n-Category Café, I think I need to clarify my personal stance on "objects" and ontologies, also for my own benefit.
I am strongly anti-platonist, because all evidence points to the irrelevance of metaphysical postures to the actual practice of mathematics and engineering.
There is no need to question, look, and check for the actual existence of objects and concepts (which are also objects, supposedly abstract and immaterial) "in reality", because objects and concepts are NOT part of reality but part of our representation of reality (whatever one's view of reality is).
We shouldn't fear running out of objects to serve as representations (including pink unicorns), because, as the "existence" of the Rado Graph shows, any fancy (countable) structure can be found.
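To make the Rado Graph remark concrete, here is a minimal sketch (in Python, purely for illustration; the function names are mine, not from any library) of the standard BIT-predicate construction. The extension property it exhibits is exactly what guarantees that every countable graph embeds into the Rado Graph as an induced subgraph:

```python
def rado_edge(i, j):
    """BIT-predicate Rado graph on the naturals:
    i ~ j iff the min(i,j)-th bit of max(i,j) is 1."""
    a, b = min(i, j), max(i, j)
    return (b >> a) & 1 == 1

def find_extension(us, vs, limit=10**6):
    """Extension property: find a fresh vertex adjacent to every
    vertex in `us` and to none in `vs` (searching up to `limit`)."""
    for w in range(limit):
        if w in us or w in vs:
            continue
        if all(rado_edge(w, u) for u in us) and \
           not any(rado_edge(w, v) for v in vs):
            return w
    return None
```

Repeatedly applying `find_extension` lets one greedily embed any finite (and, in the limit, any countable) graph, which is the sense in which "any fancy structure can be found" there.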
The only trouble we can run into is improper use of language (mathematical or otherwise) in our attempts to pin down "an object", that is, a lack of definedness: mistakenly expecting that a sentence (or any sort of syntactical construct) actually points to (denotes) an object (one and the same) in a consistent manner.
As I hinted the only purpose of objects is to serve as carriers of properties in our discourse.
Think about it for a moment, how could we organize any display of information if it were not possible to refer to the "same thing" at two or more distinct places in a discourse of any kind?
This is why objects must be intemporal (eternal): we already have enough trouble with the shifting of meanings due to the fuzziness of actual communication practices without having an indeterminacy "in principle".
This leads the Platonists to believe in the existence of abstract objects as ghostly duplicates (non-physical and non-mental) of material objects, but this is only a clumsy projection of folk intuition.
Objects are referents not "things" in the lay meaning of the word.
Furthermore ANY POSSIBLE OBJECT potentially exists, any piece carved out of the Rado Graph can serve as a referent, a label, a pointer in a discourse, being both recognizably distinct and unique (up to isomorphism).
Trying to sort out which objects "exist", as the Platonists do, is devoid of any meaning, because all of them exist.
Which doesn't mean that we should take an interest in every one of them or will ever meet them all.
As René Thom said "Truth is not limited by falsity, but by insignificance".
Therefore what's the point of ontologies?
Ontologies are not just lists of "existing objects"; they necessarily involve some language with which they define the objects and their relationships.
And this is the valuable part of ontologies: they establish the basis for some discourse.
They also enforce a somewhat arbitrary partition of the "reality" they aim to be about, what is known in linguistics as the Sapir-Whorf Hypothesis.
However there is no one true right ontology; it all depends on the problems at hand, and even for the same problem (or class of problems) there are many possible ways to "ontologize" it, as with foundational problems in mathematics.
What makes the difference is the convenience of the ontology relative to the questions being asked.
This is why quasi-religious haggling about the "right way" to talk or think about this or that is pretty pointless.
Yet, shifting our perspective (swapping/altering ontologies) is something we do so naturally and with so much ease (if not rigor) that we forget how critical it is for our thinking process.
The many different proofs of any given theorem require such "translations" between slightly different perspectives, at least with respect to the lemmas used in the proof, which, though they may be defined inside the same general framework, are not necessarily related in an obligatory manner. And, may I remind you, lemmas ARE OBJECTS IN THEIR OWN RIGHT: they have names, they can be recognized, and besides terminology quarrels they have uniqueness.
It is this ability to build objects tailored to a purpose which is the key to our ability to deal "intelligently" with the world, what Barwise & Etchemendy call heterogeneous reasoning.
As far as I know, the Mutilated Checkerboard problem is still NOT solved in AI except by brute force, because it requires (so-called...) human creativity in choosing a clever approach, i.e. shifting the perspective.
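For readers unfamiliar with the problem: remove two opposite corners from an 8x8 board and ask whether 31 dominoes can tile the remaining 62 squares. A brute-force search explores an enormous space, while the "perspective shift" is a two-line parity count. Here is a hedged Python sketch of that counting argument (the function name is mine):

```python
def mutilated_counts(n=8):
    """Count squares of each color on an n x n board with two
    opposite corners removed, coloring by coordinate parity."""
    removed = {(0, 0), (n - 1, n - 1)}
    squares = [(r, c) for r in range(n) for c in range(n)
               if (r, c) not in removed]
    black = sum((r + c) % 2 == 0 for r, c in squares)
    white = len(squares) - black
    return black, white

# A domino always covers one black and one white square, so any tiling
# needs equal counts; the mutilated board has 30 vs 32 -> impossible.
```

The entire difficulty, of course, is not running this check but coming up with the coloring perspective in the first place.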
This is why I object to the simplistic view of Marcus Hutter and others that AI is about sequence prediction.
I also deem all "foundational quarrels" in mathematics entirely irrelevant.
What needs to be done is to figure out what exactly we are doing when we shift perspectives and juggle with ontologies, because WE DO IT (successfully...).
Submitted by Kevembuangga
on Wednesday 18 November 2009 - 18:20:43 | comments: 6
|nh @ 02 Dec : 08:34|
I agree that understanding how exactly humans shift perspectives and are able to "ontologize" in various ways is an important problem. It is, however, likely that this is an AI-complete problem, so that everything you've said here adds very little value to the discussion of how to make an AI.
Ultimately, simply dismissing present AI work is actually worth less than the flawed research you dismiss, because you aren't suggesting alternate routes to pursue in actually building a model or theory of intelligence. Your comments essentially amount to "You don't know how intelligence works. Also, I don't know how intelligence works! See ya later!"
I've run into your comments on three different blogs now, and in every case they've been nothing but distraction. If you actually wanted to dismiss Hutter's AIXI as irrelevant, you would need to discuss, in detail, how his research does not and will not provide insight into the "ontologizing" phenomenon. Furthermore, you would need to describe what a successful research program would focus on. If you cannot do this, your comments amount to a blanket dismissal of a large body of work, which certainly won't convince the people conducting that research, because you haven't given them any idea of an alternate path to pursue.
I'm afraid I can't add to the discussion, because I'm still trying to tackle the mathematical background necessary to understand Hutter's (and others') work in detail. I do think the field of AGI is the best approach we have at the moment, so if you're serious about actually seeing a working AI, please contribute. Otherwise, you might as well not even comment.
@ 03 Dec : 11:18|
I'm afraid I can't add to the discussion, because I'm still trying to tackle the mathematical background necessary to understand Hutter's (and others') work in detail.
Though you "agree" with it, you don't seem to understand my point either, so there isn't much we can argue about.
|nh @ 05 Dec : 12:13|
Your "point" seems to be knocking down any and all approaches to AI without discussing their faults in specifics, while simultaneously failing to suggest a more fruitful area of research (not to mention repeatedly engaging in ad hominem attacks on AI researchers, calling them all "monkeys" in several comments).
At this point I'm willing to believe that you are not actually interested in AI, and are merely trolling AI internet discussions. Good luck with that.
@ 05 Dec : 18:16|
Your "point" seems to be knocking down any and all approaches to AI without discussing their faults in specifics
Have you come across my favorite quote about AI: "If AI has made little obvious progress it may be because we are too busy trying to produce useful systems before we know how they should work."
This isn't my own prose, it is from an AI researcher in 1983; nothing new under the sun.
Actually there are no "faults", just delusions. It is not that any bit of existing or previous AI research cannot be of some use; it is that, by not trying to investigate what the core problem could be, a lot of time and resources are wasted on efforts with very poor returns.
Part of this is due to the hunt for funding: in order to get some budget you have to convince the sponsors, and alas the possible sponsors aren't necessarily the most enlightened. This results in "tragicomic" wastes like the Cyc project, for which you certainly have seen my criticisms all over the place.
A more serious impediment, which I will try to explain, is the quest for the definitive silver bullet: the belief that there must be a "right way" to do AI, a theory of everything like in physics.
For some it's sequence prediction (AIXI), for others analogy, or quantum logic, etc, etc...
It's neither and all of these; we should certainly be able to deal with all such matters if AGI is reached, shouldn't we?
Look at this thread at The n-Category Café: what are they trying to do? To reach the "Holy Grail" of maths, the ultimate "right" foundations. No chance. Yet what they fail to notice is that, during this very discussion, they are quite able to exchange valid arguments and make some progress despite the lack of formalization. How do they do that?
The same goes for AI. That's my point!
while simultaneously failing to suggest a more fruitful area of research
Not so; here is the best summary I made at some blog:
The problematic phase for a computer to do AI isn't even to recognize an object or concept, a dog or a verb, but to come up with the "idea" that tagging a whole bunch of "inputs" with some unique label is a good way to organize the chaos and build a model of "reality". As long as the AI researchers skip this question by doing the conceptualisation themselves instead of leaving it to the computer, they won't have any useful insights about "intelligence". Before the computer can go ahead with any "logic", it has to recognize concepts, and before recognizing concepts, it has to discover them; it should not be told that there is a dog or a verb "out there".
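As an illustration only (this is a toy sketch of mine, not a proposed solution; all names and the `radius` parameter are my own assumptions): even the crudest unsupervised grouping shows the difference between being told the labels and inventing them, in that the number of "concepts" emerges from the data rather than being given in advance.

```python
def discover_labels(points, radius=1.0):
    """Greedy single-link grouping over 1-D inputs: invent a new label
    whenever a point is far from every existing group's representative,
    so the labels emerge from the data instead of being supplied."""
    groups = []  # list of (representative, members)
    labels = []
    for x in points:
        for i, (rep, members) in enumerate(groups):
            if abs(x - rep) <= radius:
                members.append(x)
                labels.append(i)
                break
        else:
            # Far from everything seen so far: coin a fresh label.
            groups.append((x, [x]))
            labels.append(len(groups) - 1)
    return labels
```

The interesting question, as the summary above says, is not this mechanical grouping but how a system could come up with the very idea that grouping is worth doing.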
(not to mention repeatedly engaging in ad hominem attacks on AI researchers, calling them all "monkeys" in several comments).
You are mistaken: it's not AI researchers I call monkeys, it's all of mankind.
|nh @ 04 Mar : 10:52|
Coming back to this after three months, I now think you are largely correct and I retract what I previously said.
The summary you provided strikes me as particularly insightful, so I must ask if you could suggest some relevant papers or areas of study (besides the ones mentioned in your blog post, of course).
@ 10 Mar : 16:37|
Thanks for recanting and for your fine appraisal.
As for relevant papers, I don't have much more to suggest; I recently bought Meaning and Grammar but haven't really dug into it.
Some works by Reinhard Diestel on infinite graphs seem to me to possibly bring some insights into knowledge representation problems, because in the end I think the "key" to understanding what AI really is IS a matter of representation, somewhere at the boundary between the formal and the informal (maths/logic versus language).
An interesting note by Dick Lipton highlights the deficiencies of current math formalisms and, by the same token, the power of "informal" natural language: in spite of the informality, they are able to meaningfully discuss the question in the blog post.
Actually I haven't done much about AI recently, having been bothered by other personal questions and wanting to take a break by doing some more immediately rewarding hacking.
[ edited 10 Mar : 16:40 ]