
Multiple Languages v. Multiple Paradigms

Greetings All,

    This is my first posting and I am still chasing down all of the pointers in
the previous discussions, so I'll try to keep this fairly brief.

On 3/6/02 at 3:50 PM, mvanier@cs.caltech.edu (Michael Vanier) wrote:

> I would absolutely program in multiple languages if it was feasible,
> sufficiently transparent and efficient.  I think that .NET is a first
> tentative step in the right direction, but it hasn't solved the problem by
> any stretch of the imagination.  When I can write low-level code that needs
> to be maximally efficient in C++, routine infrastructure code in a java or
> ML dialect, and script the system in python or scheme, and all without
> paying a huge runtime cost, I'll be one happy programmer.  I don't see this
> happening for some time yet.
> Mike

    Mike's comments and the subsequent postings have left me with a couple of
questions:
    In seeking the ability to program in multiple languages, which level(s)
matter most?

        a) Surface Structure / Syntax
        b) Library Routines provided by each language's standard libraries,
which might be taken as primitives from the perspective of the
multiple-language user
        c) Actual Evaluation Semantics

    Are multiple languages, in the sense of discrete runtime implementations,
necessary, or could their advantages be achieved in, say, Scheme with the
addition of an appropriate foreign function interface and a small number of
special forms and syntactic macros to support the semantic differences among
the supported languages?
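    To make that option concrete, here is a minimal sketch in Python (all
names are illustrative, not drawn from any real system) of a tiny evaluator in
which a single special form, "lazy", switches a block over to call-by-need
semantics while the rest of the program stays strict -- roughly what a Scheme
macro over delay/force would buy:

```python
import operator

ENV = {"+": operator.add, "*": operator.mul}

class Thunk:
    """A suspended expression, forced at most once (call-by-need)."""
    def __init__(self, expr, env):
        self.expr, self.env, self.done, self.value = expr, env, False, None
    def force(self):
        if not self.done:
            self.value, self.done = evaluate(self.expr, self.env), True
        return self.value

def evaluate(expr, env):
    if isinstance(expr, (int, float)):
        return expr
    if isinstance(expr, str):                  # variable reference
        v = env[expr]
        return v.force() if isinstance(v, Thunk) else v
    head, *args = expr
    if head == "lazy":                         # special form: delay the body
        return Thunk(args[0], env)
    if head == "force":                        # special form: demand a value
        return evaluate(args[0], env).force()
    fn = evaluate(head, env)                   # ordinary strict application
    return fn(*[evaluate(a, env) for a in args])

t = evaluate(["lazy", ["+", 1, 2]], ENV)       # body not yet evaluated
print(evaluate(["force", ["lazy", ["*", 3, 4]]], ENV))  # 12
```

    The point is that both semantic regimes share one runtime; nothing about
the lazy block requires a second language implementation.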

    We have seen a number of translators mapping assorted languages to C code
calling a runtime library to provide dynamic functionality. 

    Would it be feasible to go in the opposite direction and design a new
dynamic language that would support combining blocks with alternate
evaluation semantics, and then use multiple parsers as "skins" for reading
source code written in multiple languages?
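    As a toy illustration of the "skins" idea (the syntax fragments below are
hypothetical, not taken from any real system), this sketch parses a C-like
infix skin and a Lisp-like prefix skin into one shared AST and evaluates both
with a single engine:

```python
import ast
import re

def parse_infix(src):
    """C-like skin: reuse Python's parser, keep only +, -, * on constants."""
    def walk(node):
        if isinstance(node, ast.BinOp):
            op = {ast.Add: "+", ast.Sub: "-", ast.Mult: "*"}[type(node.op)]
            return (op, walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant):
            return node.value
        raise ValueError(f"unsupported node: {node!r}")
    return walk(ast.parse(src, mode="eval").body)

def parse_prefix(src):
    """Lisp-like skin: '(+ 1 (* 2 3))' -> ('+', 1, ('*', 2, 3))."""
    tokens = iter(re.findall(r"\(|\)|[^\s()]+", src))
    def read(tok):
        if tok == "(":
            items = []
            for t in tokens:
                if t == ")":
                    return tuple(items)
                items.append(read(t))
            raise ValueError("unbalanced parentheses")
        return int(tok) if tok.lstrip("-").isdigit() else tok
    return read(next(tokens))

OPS = {"+": lambda a, b: a + b, "-": lambda a, b: a - b, "*": lambda a, b: a * b}

def run(tree):
    """One evaluator serves every skin."""
    if isinstance(tree, tuple):
        op, left, right = tree
        return OPS[op](run(left), run(right))
    return tree

print(run(parse_infix("1 + 2 * 3")))          # 7
print(run(parse_prefix("(+ 1 (* 2 3))")))     # 7
```

    Only the front ends differ; the AST and the evaluation semantics are held
constant, which is precisely what makes a skin cheap.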

    If one were to do so, could one not then also provide varying levels of
automatic translation among the languages depending on the semantic distance
between them? (Imagine, for example, finding an old journal article with a
program written in APL with its custom typeface and being able to have "the
system" translate it into something closer to Common Lisp or C++ depending on
one's programming background.)
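    At the level of surface syntax, at least, such translation can be purely
mechanical. The sketch below (the tuple AST is an illustrative convention, not
a real interchange format) renders one shared AST into both a Lisp-like and a
C-like surface form, precedence and all:

```python
def to_lisp(tree):
    """Render the AST in a Lisp-like prefix skin."""
    if isinstance(tree, tuple):
        op, left, right = tree
        return f"({op} {to_lisp(left)} {to_lisp(right)})"
    return str(tree)

def to_c(tree, parent_prec=0):
    """Render the AST in a C-like infix skin, parenthesizing only as needed."""
    PREC = {"+": 1, "-": 1, "*": 2}
    if isinstance(tree, tuple):
        op, left, right = tree
        s = f"{to_c(left, PREC[op])} {op} {to_c(right, PREC[op] + 1)}"
        return f"({s})" if PREC[op] < parent_prec else s
    return str(tree)

expr = ("*", ("+", 1, 2), 3)
print(to_lisp(expr))   # (* (+ 1 2) 3)
print(to_c(expr))      # (1 + 2) * 3
```

    The genuinely hard part, of course, is semantic distance -- APL's array
semantics do not reduce to notation -- so the skin layer could only be the
easy outermost stage of any such translator.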

    Or might it be better to arrive at a single clean multiparadigm language
with enough semantic flexibility to provide the 'requisite variety' needed to
make it feasible for library authors to re-code their work by hand to run on a
new hybrid platform?

     Timothy A. Budd hints at this sort of solution in his "Multiparadigm
Programming in Leda" (ISBN 0-201-82080-3). If anyone has any thoughts about his
approach, I for one would be most interested in hearing them.

Warmest Regards,


Peter J. Wasilko, Esq.        Director, The Continuity Project
     J.D., LL.M.				

It's time to abandon brittle architectures with poorly factored
interfaces, gratuitous complexity, and kludged designs dominated
by sacrifices on the altar of backwards compatibility.

Such artifacts are vulnerable to cyber-attack, weigh down the
economy costing trillions of dollars in lost productivity, and
suffer from an impoverished conceptual model that lacks the
integration and elegance needed to support the hypermedia
applications and libraries of the future.

The Continuity Project - Pursuing Secure, Simple, Supple, 
	and Sophisticated Systems to Unlock Our Human Potential