
Re: error correction (was circular graphs)



On Mon, Jun 23, 2003 at 01:22:29PM -0400, Anton van Straaten wrote:
> Guy's example had both an embedded "type definition" - the upfront
> declaration that the code involved a "circular structure" - and
> sample input & output values.  So there was quite a bit of redundant
> information to support error-checking, to the point where the exact syntax
> of the code in question was almost unimportant:

> It's as if the code read (mapcar #'+ x (circular-list 1 -1)).
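For concreteness, here is a minimal sketch of what that corrected
expression does, with a hand-rolled CIRCULAR-LIST (not part of standard
Common Lisp) and made-up sample values rather than Guy's actual ones:

  (defun circular-list (&rest elements)
    "Return a fresh copy of ELEMENTS whose last cdr points back to its
  head, so that the elements repeat forever."
    (let ((cycle (copy-list elements)))
      (setf (cdr (last cycle)) cycle)
      cycle))

  ;; MAPCAR stops at the end of the shortest argument list, so the
  ;; finite X bounds the walk over the endless (1 -1 1 -1 ...); strictly
  ;; speaking the standard expects proper lists here, so this leans on
  ;; the usual implementation behaviour.
  (mapcar #'+ '(10 20 30 40) (circular-list 1 -1))
  ;; => (11 19 31 39)
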
How did you infer that the mistake was at this particular point?
Maybe because it was the place where "new" notation was introduced
together with a "new" concept?
How did you determine what was "new" in this context?
Why did you prefer to think that the implementation rather
than the specification was the correct thing?
Are programs usually top-down in terms of intent-to-extent?
When you assemble big programs from components, where is the "top"?

One day, I'll have to read that Shapiro book about programming
as debugging the empty program...

> Another aspect of our ability to extract Guy's meaning from his example is
> that we were able to determine which of the conflicting pieces of
> information was correct (assuming awareness of the conflict).  How did
> we know that the use of the phrase "circular list" and the provided
> input->output mapping were correct, as opposed to the code fragment?
> That required an understanding of
> the context and broader intent.  Now we're talking about some pretty strong
> AI.
As far as "AI" goes, I'm not convinced that this particular problem
requires especially strong AI. Understanding the context presupposes that
the computer should build context, i.e. follow the dynamics of the
"conversation" or program-building, rather than just try to work on
the current state of the program with no sense of direction about the
past and future of program development. As for intent, well,
as you pointed out, the specification given by Guy in terms of examples
seemed formal enough to be understood by a computer. In this respect
I am reminded of Hofstadter's analogy-finding programs, of various
programming-by-example systems, and of PAC-learning systems based on
minimizing algorithmic complexity so as to abstract the given examples
into a generic formal specification. Has anyone heard of any experiment
remotely like this for interactive program development?
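
As a toy illustration of that last idea (just a sketch, nothing like
the systems above), one could try to abstract sample input->output
pairs into the repeating "delta" cycle that explains them, and hand
that cycle back to CIRCULAR-LIST; the sample values are again made up:

  (defun infer-cycle (inputs outputs)
    "Toy example-abstraction: compute the element-wise differences
  between OUTPUTS and INPUTS and return the shortest repeating cycle
  that accounts for them, or NIL if there is none."
    (let* ((deltas (mapcar #'- outputs inputs))
           (n (length deltas)))
      (loop for k from 1 to n
            when (loop for d in deltas
                       for i from 0
                       always (eql d (nth (mod i k) deltas)))
              return (subseq deltas 0 k))))

  (infer-cycle '(10 20 30 40) '(11 19 31 39))
  ;; => (1 -1), i.e. the cycle behind (circular-list 1 -1)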

Yes, bootstrapping algorithmic expertise about building expert algorithmic
systems is my pet obsession.

[ François-René ÐVB Rideau | Reflection&Cybernethics | http://fare.tunes.org ]
[  TUNES project for a Free Reflective Computing System  | http://tunes.org  ]
Artificial intelligence is what we don't know how to do yet
	-- Alan Kay