
text processing as *the* problem

Congratulations on LL1.  I'm glad that language developers can get together
and share ideas.

As a language user, I am looking for a language that I can fall in love with,
and have been following the appearance of new languages for several years.
However, there is a problem space that seems neglected, and that is
text processing.

I am well acquainted with regular expressions and the sort of work that
can be done with Perl, for example, but it does not have the sort of
*feel* that I am looking for.

My objection to the regular-expression approach is that it is low-level;
it is not far removed from number crunching as a computing activity.
String processing seems like an afterthought in language design.

Are there any languages, even big languages, that were *built* with
text processing in mind?  Are there approaches that are not limited
to an implementation of regular-expression matching?

The sort of problem I often need to solve is to extract the links and
accompanying text from a web page, but only from a certain part of the page.
I would like to be able to easily program some processing rules, such as
"ignore tables that contain forms" or "only collect links that begin
with '/news/'".

I've also encountered problems in parsing XML that have required some
"heavy lifting" in terms of string comparisons that I have had to implement
in C.  All the while something inside cries out that it shouldn't be so hard
to do.

Kevin Kelleher