AI Ramblings

Friday 08 March 2013

 Ha! Ha! Looks like some progress at last...

Recent work by Kevin Knuth richly deserves a new post:
"We are working to develop automated intelligent agents, which can act and react as learning machines with minimal human intervention."
Not too surprising given Kevin Knuth's previous work.
Submitted by Kevembuangga
Kevembuangga on Friday 08 March 2013 - 11:33:47
comment: 0

Saturday 15 September 2012

 Ontologies still progressing at a snail's pace...

And research is still headed toward a dead end, though the problems are slowly being recognized. Quoting:

"- Is there a universal ontology?
No. Different application domains require different levels of expressivity and abstraction in an ontology. While ‘universal’ attempts to capture ontological or common-sense knowledge such as the Cyc ontology have produced impressive results, they are difficult to adapt to specific purposes and application areas. The need for diversity is nowadays acknowledged even in the area of foundational ontologies that provide very high-level ontological vocabulary and axiomatisation, such as BFO, GFO, and Dolce—they are directed towards different application areas such as Biology, Medicine, or general Information Systems, and no easy integration is conceivable as they disagree on a high conceptual level. Moreover, real world applications of ontologies often require the heterogeneous combination of ontologies.

- Is there a universal ontology language?
There is no ‘one true ontology language’ that will fit all purposes. The choice of an ontology language is directly linked to a corresponding logic and a restriction to possible reasoning support, from very fast lightweight DL reasoning to (semi-decidable and semi-automatic) first-order and higher-order reasoning; this concerns in particular a trade-off between axiomatic expressiveness and dealing efficiently with large amounts of data (as e.g. in biomedical ontologies).

- Is there a universal ontology reasoning?
In the highly populated world of heterogeneous ontologies, not only the languages may vary from module to module, but also the way the modules interact with each other and the way the ‘flow of information’ [93] is controlled. This necessitates proper, semantically well-founded foundations for various kinds of inter-ontology links. Moreover, such a heterogeneous set-up obviously affects the various reasoning modes that can and need to be supported.
This includes combination of conceptual reasoning with, e.g., temporal, spatial, or epistemic information, as well as dealing with problems such as inconsistency tolerance employing paraconsistent inference, non-monotonic inference dealing with changing facts that need to be accommodated, or fuzzy and approximate reasoning in cases where ‘precise’ reasoning is either too expensive or just undesirable."

Submitted by Kevembuangga
Kevembuangga on Saturday 15 September 2012 - 19:49:20
comment: 0

Wednesday 20 June 2012

 Fed up...

By the unsolved parsing problems, that is, which I duly encountered (like anyone else) when hacking some AI software, so I got to tackle the problem myself.
Thanks to an excellent idea from Richard WM Jones for bypassing the "parser bootstrapping" question (which parser do you use to parse your grammar definition?), I managed to create a very small parser which solves both the grammar ambiguity problem and the awkwardness of semantic hooks in LR parsers.
Welcome to the LRTT parser!
The Left to Right Tree Traversal parser just does what it says, unambiguously running the semantic actions in a strict left to right order.
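The actual LRTT implementation is not shown in the post; the following is only a hypothetical toy sketch of the general idea, namely firing semantic actions during a strict left-to-right traversal of a parse tree (all names here are made up for illustration):

```python
# Hypothetical sketch (NOT the actual LRTT code): semantic actions run in
# strict left-to-right order over a parse tree, children before parents.

class Node:
    def __init__(self, label, children=(), action=None):
        self.label = label
        self.children = list(children)
        self.action = action  # semantic hook: fn(node, child_values) -> value

def lrtt_eval(node):
    # Visit children strictly left to right, then fire this node's action,
    # so every semantic hook runs exactly once, in left-to-right order.
    values = [lrtt_eval(c) for c in node.children]
    if node.action is not None:
        return node.action(node, values)
    return node.label  # leaf: its label is its own value

# Parse tree for the expression 2 * (3 + 4)
tree = Node('*',
            [Node(2),
             Node('+', [Node(3), Node(4)],
                  action=lambda n, vs: vs[0] + vs[1])],
            action=lambda n, vs: vs[0] * vs[1])

print(lrtt_eval(tree))  # 14
```

Because the actions fire in a fixed traversal order, there is no ambiguity about when a semantic hook runs, which is the awkward part in classic LR parsers where mid-rule actions interact with the parser's shift/reduce schedule.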
Kevembuangga on Wednesday 20 June 2012 - 17:44:17
comment: 1

Monday 07 February 2011

 News from the AGI War Front

Through the hustle and bustle of overoptimistic and/or scaremongering blather about AGI and the Singularity, a more reasonable notion is slowly emerging.
There will not be any Singularity (or it already happened) and AGI may not be what you think.

Of special interest are:
The Undesigned Brain is Hard to Copy by Kyle Munkittrick.
A New Direction in AI Research by Monica Anderson.

In response to some comments there I gave my own summary, which I reproduce below.
Computation only occurs over an already encoded model of reality; that is, the bits shuffled around by the computation(s) have a meaning which relates to some ontology of concepts and objects in/of the model.
It is the buildup of this ontology which is the problematic step glossed over by current AI research.
This does NOT refer to some would-be "mystical properties" of the brain or neurons but to the feature extraction phase of any problem solving method: what are the relevant features you have to sift out of the raw mass of data inputs to solve the problem at hand?
E.g. before you set out to solve a chess problem you have to "know" what the pieces and the chessboard are and what the rules are.
In chess this is a given which comes with the problem statement; in any AGI problem, aside possibly from the goal, the relevant concepts are NOT part of the problem statement ("score best in this environment"): they have to be created.
Of course this is ALSO a matter of computation(s) but WHICH kind of computations?

Best summarized by Nick Szabo:
The most important relevant distinction between the evolved brain and designed computers is abstraction layers.

Being “Turing complete” buys you no edge in this game...

Update: IBM's Watson Jeopardy! player defeating Jennings and Rutter triggered another frenzy of idiotic comments and undeserved awe about "Our New Computer Overlords"; actually it is not so much an advance in AI as an improvement in technology: capacity, speed and accuracy.

See discussion at Dick Lipton and John Langford blogs.

Technical overview: Building Watson: An Overview of the DeepQA Project (via Carson Chow).

Submitted by Kevembuangga
Kevembuangga on Monday 07 February 2011 - 19:10:14
comment: 2

Wednesday 18 November 2009

 Objects as epistemological artifacts

To follow up on a few arguments I had at some blogs like Gödel’s Lost Letter, vetta project, Machine Learning (Theory) and The n-Category Café, I think I need to clarify my personal stance on "objects" and ontologies, also for my own benefit.
I am strongly anti-platonist because all evidence points to the irrelevance of metaphysical postures for the actual practice in mathematics and engineering.
There is no need to question and look and check for the actual existence of objects and concepts (which are also objects, supposedly abstract and immaterial) "in reality" because objects and concepts are NOT part of reality but part of our representation of reality (whatever one's view of reality is).
We shouldn't fear lacking any sort of objects as representations (including pink unicorns) because, as shown by the "existence" of the Rado Graph, any fancy (countable) structure can be found.
The only trouble we can have is improper use of language (mathematical or otherwise) in our attempts to pin down "an object", that is, a lack of definedness: mistakenly expecting that a sentence (or any sort of syntactical construct) actually points to (denotes) an object (one and the same) in a consistent manner.
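The universality of the Rado Graph rests on its extension property, which can be checked concretely in the standard BIT-predicate presentation (this construction is textbook material, not from the post; the function names are my own):

```python
# Rado graph via the BIT predicate: vertices are the naturals, and i ~ j
# (for i < j) iff bit i of j's binary expansion is 1.  Extension property:
# for disjoint finite sets U, V there is a vertex z adjacent to everything
# in U and to nothing in V -- which forces every countable graph to embed.

def adjacent(i, j):
    i, j = min(i, j), max(i, j)
    return (j >> i) & 1 == 1

def extension_witness(U, V):
    # Build z whose 1-bits sit exactly at the positions in U, plus one
    # extra high bit to push z above every vertex of U and V.
    m = max(U | V, default=0) + 1
    while m in U or m in V:
        m += 1
    return sum(1 << u for u in U) + (1 << m)

U, V = {0, 3, 5}, {1, 2, 4}
z = extension_witness(U, V)
print(all(adjacent(z, u) for u in U))        # True
print(any(adjacent(z, v) for v in V))        # False
```

Since any finite "already built" piece can always be extended this way, any countable structure can indeed be carved out of the graph, which is what the post relies on.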

As I hinted the only purpose of objects is to serve as carriers of properties in our discourse.
Think about it for a moment, how could we organize any display of information if it were not possible to refer to the "same thing" at two or more distinct places in a discourse of any kind?
This is why objects must be intemporal (eternal); we already have enough trouble with the shifting of meanings due to the fuzziness of actual communication practices without having an indeterminacy "in principle".
This leads the Platonists to believe in the existence of abstract objects as some ghostly duplicates (non-physical and non-mental) of material objects; this is only a clumsy projection of folk intuition.
Objects are referents not "things" in the lay meaning of the word.
Furthermore ANY POSSIBLE OBJECT potentially exists, any piece carved out of the Rado Graph can serve as a referent, a label, a pointer in a discourse, being both recognizably distinct and unique (up to isomorphism).
Trying to sort out which objects "exist" like the Platonists do is devoid of any meaning, because all do exist.
Which doesn't mean that we should have an interest in every of them or will ever meet them all.
As René Thom said "Truth is not limited by falsity, but by insignificance".

Therefore what's the point with ontologies?

Ontologies are not just lists of "existing objects"; they necessarily involve some language with which they define the objects and their relationships.
And this is the valuable part of ontologies, they establish the basis for some discourse.
They also enforce a somewhat arbitrary partition of the "reality" they aim to be about, what is known in linguistics as the Sapir-Whorf Hypothesis.
However there is no one true right ontology; it all depends on the problem at hand, and even for the same problem (or class of problems) there are many possible ways to "ontologize" it, as with foundational problems in mathematics.
What makes the difference is the convenience of the ontology relative to the questions being asked.
This is why quasi-religious haggling about the "right way" to talk or think about this or that is pretty pointless.

Yet, shifting our perspective (swapping/altering ontologies) is something we do so naturally and with so much ease (if not rigor) that we forget how critical it is for our thinking process.
The many different proofs of any given theorem require such "translations" between slightly different perspectives, at least with respect to the lemmas used in the proof, which, though they may be defined inside the same general framework, are not necessarily related in an obligatory manner. And, may I remind you, lemmas ARE OBJECTS ON THEIR OWN: they have names, they can be recognized and, terminology quarrels aside, they have unicity.
It is this ability to build objects tailored to a purpose which is the key to our ability to deal "intelligently" with the world, what Barwise & Etchemendy call heterogeneous reasoning.
As far as I know The Mutilated Checkerboard problem is still NOT solved in AI except by brute force because it requires (so called...) human creativity in choosing a clever approach, i.e. shifting the perspective.
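For concreteness, the perspective shift the Mutilated Checkerboard calls for is the classic colouring argument: instead of searching domino placements, count the squares by colour. A standard illustration of the argument, sketched in Python (this is the human trick written down, not an AI program; the function name is mine):

```python
# Mutilated checkerboard: remove two opposite corners of an 8x8 board and
# ask whether 31 dominoes can tile the rest.  The clever perspective:
# every domino covers one black and one white square, so equal colour
# counts are a NECESSARY condition -- no placement search required.

def colour_counts(removed):
    squares = {(r, c) for r in range(8) for c in range(8)} - set(removed)
    black = sum((r + c) % 2 == 0 for r, c in squares)
    return black, len(squares) - black

def colouring_rules_out_tiling(removed):
    # True means the colouring argument already proves no tiling exists.
    black, white = colour_counts(removed)
    return black != white

# Opposite corners (0,0) and (7,7) are both the same colour, leaving
# 30 of one colour vs 32 of the other: no tiling can exist.
print(colouring_rules_out_tiling([(0, 0), (7, 7)]))  # True
```

The brute-force formulation searches an exponential space of domino placements; the re-ontologized problem ("count squares by colour") is settled by one subtraction, which is exactly the kind of representation shift the post argues AI glosses over.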

This is why I object to the simplistic view of Marcus Hutter and others that AI is about sequence prediction.

I also deem all "foundational quarrels" in mathematics entirely irrelevant.

What needs to be done is to figure out what exactly we are doing when we shift perspectives and juggle with ontologies, because WE DO IT (successfully...).

Submitted by Kevembuangga
Kevembuangga on Wednesday 18 November 2009 - 18:20:43
comment: 6
