AI Ramblings

Tuesday 07 April 2015

 Slowly approaching an AI breakthrough

From what I see surveying AI news all around, I think we are approaching a decisive insight about intelligence in general and AI in particular.
What has been hampering progress is obviously that we have no f**king idea about what intelligence IS or how it works despite all the bragging of the usual suspects vying for funding or the dire warnings of alarmists.
I think that all the ideas, clues and rummagings will coalesce into a better "big picture" within 3 to 15 years. This will NOT be the much vaunted/dreaded "hard take-off" of AI but only the beginning of actual AI research instead of the current mishmash of engineering, statistics and clever tricks that pass for AI.
As usual with this kind of breakthrough it will likely occur independently in maybe half a dozen places in the span of a few months.

Kevembuangga on Tuesday 07 April 2015 - 09:54:33

Friday 08 March 2013

 Ha! Ha! Looks like some progress at last...

Recent work by Kevin Knuth richly deserves a new post:
"We are working to develop automated intelligent agents, which can act and react as learning machines with minimal human intervention."
Not too surprising given Kevin Knuth's previous work.
Kevembuangga on Friday 08 March 2013 - 11:33:47

Saturday 15 September 2012

 Ontologies still progressing at a snail's pace...

And research is still headed toward a dead end; however, the problems are slowly being recognized. Quoting:

"- Is there a universal ontology?
No. Different application domains require different levels of expressivity and abstraction in an ontology. While ‘universal’ attempts to capture ontological or common-sense knowledge such as the Cyc ontology have produced impressive results, they are difficult to adapt to specific purposes and application areas. The need for diversity is nowadays acknowledged even in the area of foundational ontologies that provide very high-level ontological vocabulary and axiomatisation, such as BFO, GFO, and Dolce—they are directed towards different application areas such as Biology, Medicine, or general Information Systems, and no easy integration is conceivable as they disagree on a high conceptual level. Moreover, real world applications of ontologies often require the heterogeneous combination of ontologies.

- Is there a universal ontology language?
There is no ‘one true ontology language’ that will fit all purposes. The choice of an ontology language is directly linked to a corresponding logic and a restriction to possible reasoning support, from very fast lightweight DL reasoning to (semi-decidable and semi-automatic) first-order and higher-order reasoning; this concerns in particular a trade-off between axiomatic expressiveness and dealing efficiently with large amounts of data (as e.g. in biomedical ontologies).

- Is there a universal ontology reasoning?
In the highly populated world of heterogeneous ontologies, not only the languages may vary from module to module, but also the way the modules interact with each other and the way the ‘flow of information’ [93] is controlled. This necessitates proper, semantically well-founded foundations for various kinds of inter-ontology links. Moreover, such a heterogeneous set-up obviously affects the various reasoning modes that can and need to be supported.
This includes combination of conceptual reasoning with, e.g., temporal, spatial, or epistemic information, as well as dealing with problems such as inconsistency tolerance employing paraconsistent inference, non-monotonic inference dealing with changing facts that need to be accommodated, or fuzzy and approximate reasoning in cases where ‘precise’ reasoning is either too expensive or just undesirable."
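
To make the "heterogeneous combination" point above a bit more concrete, here is a minimal sketch (the namespaces, class names and the Tumour/Neoplasm pairing are invented for illustration and are not taken from the quoted paper; rdflib is assumed to be available): two toy fragments commit to different foundational vocabularies, and a plain graph merge does nothing to reconcile them, the inter-ontology link has to be asserted explicitly.

```python
# Illustrative only: heterogeneous ontology combination as a syntactic merge.
from rdflib import Graph, Namespace
from rdflib.namespace import RDF, RDFS, OWL

A = Namespace("http://example.org/bfo-style#")     # hypothetical foundational ontology 1
B = Namespace("http://example.org/dolce-style#")   # hypothetical foundational ontology 2
DOM = Namespace("http://example.org/domain#")

g1 = Graph()
g1.add((DOM.Tumour, RDF.type, OWL.Class))
g1.add((DOM.Tumour, RDFS.subClassOf, A.MaterialEntity))

g2 = Graph()
g2.add((DOM.Neoplasm, RDF.type, OWL.Class))
g2.add((DOM.Neoplasm, RDFS.subClassOf, B.PhysicalObject))

merged = g1 + g2   # a purely syntactic union of the two graphs

# Nothing tells a reasoner that Tumour and Neoplasm denote the same concept,
# nor how MaterialEntity relates to PhysicalObject; such inter-ontology links
# must be added by hand (or by an alignment tool):
merged.add((DOM.Tumour, OWL.equivalentClass, DOM.Neoplasm))

print(merged.serialize(format="turtle"))
```

And even with a bridge axiom in place, the two fragments may still call for different reasoning machinery, which is the quote's deeper point.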

Kevembuangga on Saturday 15 September 2012 - 19:49:20

Wednesday 20 June 2012

 Fed up...

With unsolved parsing problems, which I duly encountered (like everyone else) when hacking some AI software, I got to tackle the problem myself.
Thanks to an excellent idea from Richard WM Jones for bypassing the "parser bootstrapping" question (which parser do you use to parse your grammar definition?), I managed to create a very small parser which solves both the grammar ambiguity problem and the awkwardness of semantic hooks in LR parsers.
Welcome to the LRTT parser!
The Left to Right Tree Traversal parser just does what it says, unambiguously running the semantic actions in a strict left to right order.
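
The LRTT implementation itself is not reproduced in this post, so here is only a rough Python sketch of the stated property (the toy grammar, tokenizer and tree representation are mine, not the parser's): the input is parsed into a tree first, and the semantic actions are then fired during a strict left-to-right walk of that tree, i.e. in source order, rather than at reduce time as in an LR parser.

```python
# Illustrative only: not the LRTT parser itself, just the "semantic actions
# in strict left-to-right order" idea on a tiny infix-expression grammar.
import re

TOKEN = re.compile(r"\s*(\d+|[-+*/()])")

def tokenize(src):
    pos, toks = 0, []
    while pos < len(src):
        m = TOKEN.match(src, pos)
        if not m:
            raise SyntaxError("bad input at position %d" % pos)
        toks.append(m.group(1))
        pos = m.end()
    return toks

def parse_expr(toks):        # expr := term (('+'|'-') term)*
    node = parse_term(toks)
    while toks and toks[0] in "+-":
        node = (toks.pop(0), node, parse_term(toks))
    return node

def parse_term(toks):        # term := atom (('*'|'/') atom)*
    node = parse_atom(toks)
    while toks and toks[0] in "*/":
        node = (toks.pop(0), node, parse_atom(toks))
    return node

def parse_atom(toks):        # atom := number | '(' expr ')'
    tok = toks.pop(0)
    if tok == "(":
        node = parse_expr(toks)
        toks.pop(0)          # drop the closing ')'
        return node
    return ("num", int(tok))

def run_actions(node, action):
    """Fire the semantic action on every node in strict source (left-to-right) order."""
    if node[0] == "num":
        action(node)
    else:
        op, left, right = node
        run_actions(left, action)
        action(node)         # an infix operator sits between its operands
        run_actions(right, action)

tree = parse_expr(tokenize("1 + 2 * (3 - 4)"))
run_actions(tree, lambda n: print("action:", n[0]))
# prints: num, +, num, *, num, -, num  -- exactly the order of the input
```

This only demonstrates the left-to-right ordering of the actions; how the real LRTT handles grammar ambiguity is not shown here.
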
Kevembuangga on Wednesday 20 June 2012 - 17:44:17

Monday 07 February 2011

 News from the AGI War Front

Through the hustle and bustle of overoptimistic and/or scaremongering blather about AGI and the Singularity, a more reasonable notion is slowly emerging.
There will not be any Singularity (or it already happened) and AGI may not be what you think.

Of special interest are:
The Undesigned Brain is Hard to Copy by Kyle Munkittrick.
A New Direction in AI Research by Monica Anderson.

In response to a comment elsewhere I gave my own summary, which I reproduce below.
Computation only occurs over an already encoded model of reality, that is, the bits shuffled around by the computation(s) have a meaning which relates to some ontology of concepts and objects in/of the model.
It is the buildup of this ontology which is the problematic step which is glossed over by current AI research.
This does NOT refer to some would-be “mystical properties” of the brain or neurons but to the feature extraction phase of any problem solving method: what are the relevant features you have to sift out of the raw mass of data inputs to solve the problem at hand?
E.g. before you set out to solve a chess problem you have to “know” what the pieces and the chessboard are and what the rules are.
In chess this is a given which comes with the problem statement; in any AGI problem, besides possibly the goal, the relevant concepts are NOT part of the problem statement (score best in “this environment”): they have to be created.
Of course this is ALSO a matter of computation(s) but WHICH kind of computations?
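
As a trivial sketch of that contrast (the function below is mine, purely for illustration): a chess move generator only has to shuffle bits because the board geometry, the notion of a knight and its movement rule were encoded in advance by the programmer; in the "score best in this environment" setting nothing comparable is handed to the agent, and producing such an encoding from raw inputs is exactly the step being glossed over.

```python
def knight_moves(square):
    """Legal knight moves from an (x, y) square on an 8x8 board.
    Every identifier here presupposes the chess ontology that comes
    with the problem statement: squares, board bounds, the knight's rule."""
    x, y = square
    deltas = [(1, 2), (2, 1), (2, -1), (1, -2),
              (-1, -2), (-2, -1), (-2, 1), (-1, 2)]
    return [(x + dx, y + dy) for dx, dy in deltas
            if 0 <= x + dx < 8 and 0 <= y + dy < 8]

print(knight_moves((0, 0)))   # [(1, 2), (2, 1)]
```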

Best summarized by Nick Szabo:
The most important relevant distinction between the evolved brain and designed computers is abstraction layers.

Being “Turing complete” buys you no edge in this game...

Update: IBM's Watson Jeopardy player defeating Jennings and Rutter triggered another frenzy of idiotic comments and undeserved awe about "Our New Computer Overlords"; actually it is not so much an advance in AI as an improvement in technology: capacity, speed and accuracy.

See discussion at Dick Lipton and John Langford blogs.

Technical overview: "Building Watson: An Overview of the DeepQA Project" (via Carson Chow).

Kevembuangga on Monday 07 February 2011 - 19:10:14
