
Lambda the Ultimate


Concatenative Language Kont


An interesting project. Kont came out of an attempt to design a programming language in which every program was expressed in continuation-passing style. A concatenative language is a language in which programs are formed by the concatenation of smaller primitives, à la Forth. By Chris at 2005-08-08 16:17 | LtU Forum | previous forum topic | next forum topic | other blogs | 13661 reads


A comment about concatenative languages in general


I don't get why anyone considers concatenative languages to be suitable for high-level programming. How are you supposed to be able to remember or keep track of what is on the stack at any given point? How can you build reusable encapsulated abstractions? I can't imagine writing any nontrivial program in a concatenative language; it seems like a tour de force comparable to writing a Turing machine program, or directly in the lambda calculus using de Bruijn indices. As low-level languages to be used as compilation targets, I can kind of see the point. But enthusiasts for Forth, etc. seem to be saying that they write nontrivial application programs in these languages. How? By dhopwood at Tue, 2005-08-09 20:25 | login or register to post comments

With small, well-behaved words


(Note: I don't use any concatenative language, although I do use an HP-41 and have programmed it.) The trick is simply to use small, functional words (i.e., with local stack effects). That way, you only have to treat words as black boxes that take input and give output, a.k.a. functions. When you start having to track the stack, it's probably a good hint that you need to break the word down into a couple of more intuitive words (or that your interface for that word is extremely messy).
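The advice above -- compose small words, each with a local stack effect -- can be made concrete. The following is an editorial sketch in Python (not from the thread; the words `dup`, `mul`, `square`, and `run` are illustrative stand-ins): each word is a function from stack to stack, and a program is just a composition of words.

```python
# Small "words" with local stack effects; each is a black box
# that takes a stack and returns a new stack.

def dup(s):                                   # ( a -- a a )
    return s + [s[-1]]

def mul(s):                                   # ( a b -- a*b )
    return s[:-2] + [s[-2] * s[-1]]

def square(s):                                # ( a -- a*a ), built from smaller words
    return mul(dup(s))

def run(program, stack):
    """Run a program (a list of words) left to right, threading the stack."""
    for word in program:
        stack = word(stack)
    return stack

print(run([square], [7]))  # [49]
```

The point of the pkhuong comment is visible here: a caller of `square` never thinks about `dup` or `mul`, only about the word's net effect ( a -- a*a ).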

By pkhuong at Tue, 2005-08-09 23:21 | login or register to post comments

Lots of factoring
The trick is small, easily testable words. Any time you do lots of stack manipulations is a sign that you need to refactor. 'Thinking Forth' has lots of good advice in this area: http://thinking-forth.sourceforge.net Manfred von Thun's Joy pages also cover this in some of the articles. His article on the quadratic formula, for example: http://www.latrobe.edu.au/philosophy/phimvt/joy/jp-quadratic.html By doublec at Tue, 2005-08-09 23:28 | login or register to post comments

The quadratic example in that last URL is a good example...


... but I think it just supports my point.
    -b +/- sqrt(b^2 - 4*a*c)
    ------------------------
             2*a

(define quadratic-formula
  (lambda (a b c)
    (let ([minusb (- 0 b)]
          [radical (sqrt (- (* b b) (* 4 (* a c))))]
          [divisor (* 2 a)])
      (let ([root1 (/ (+ minusb radical) divisor)]
            [root2 (/ (- minusb radical) divisor)])
        (cons root1 root2)))))

The structure of the Scheme code directly reflects the structure of the formula. Indeed, there are only trivial notational differences between the formula and the code, as we can see by writing the formula with the same auxiliary variables and in prefix notation:
quadratic-formula(a, b, c) def= (root1, root2)
  where minusb  = (- 0 b)
        radical = (sqrt (- (* b b) (* 4 (* a c))))
        divisor = (* 2 a)
        root1   = (/ (+ minusb radical) divisor)
        root2   = (/ (- minusb radical) divisor)
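The same named-intermediate structure carries over to any language with local bindings; here is a quick Python transliteration (an editorial sketch, with names chosen to match the Scheme above, and assuming real roots) that makes the point runnable:

```python
import math

def quadratic_formula(a, b, c):
    # Every intermediate value gets a name; the code's structure
    # mirrors the formula's structure.
    minusb  = -b
    radical = math.sqrt(b * b - 4 * a * c)  # assumes b^2 - 4ac >= 0
    divisor = 2 * a
    root1 = (minusb + radical) / divisor
    root2 = (minusb - radical) / divisor
    return (root1, root2)

print(quadratic_formula(1, -3, 2))  # (2.0, 1.0)
```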

Compare that with the 2nd version in Joy (the one suggested to be "the best of the three in this note"):
quadratic-2 ==
    [ [ [ pop pop 2 * ]                       # divisor
        [ pop 0 swap - ]                      # minusb
        [ swap dup * rollup * 4 * - sqrt ] ]  # radical
      [i] map ]
    ternary i
    [ [ [ + swap / ]                          # root1
        [ - swap / ] ]                        # root2
      [i] map ]
    ternary.
    # a b c  =>  [root1 root2]

** Huh? ** I don't think it's just my lack of familiarity with Joy that is at issue here: the code structure is simply quite different from the formula. It's impossible to understand what this code is doing without thinking about how it uses a stack that is not present in the original problem (and that could map to the problem in many different ways). The difference that stands out most is that none of the variables are explicitly named -- the names on the right are only comments. These comments are not equivalent to using names in the code -- what if they were wrong? The variables have to be referred to by keeping track of their position on the stack (which is what I was getting at in the reference to de Bruijn indices above). OK, it is possible to mitigate this problem by minimizing the size of the stack at any given time, but the risk of making an error is going to be larger for any given number of variables than it is in an equivalent program that uses names. Some proportion of those errors won't be found by testing. For a large program, that is likely to add up to an unacceptable risk. [Edit: corrected "0 - b" to "- 0 b"] By dhopwood at Wed, 2005-08-10 03:51 | login or register to post comments
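To see the stack-tracking burden concretely, here is an editorial sketch in Python (the one-line helpers are hypothetical stand-ins for the Joy words) of just the radical quotation from the code above, `swap dup * rollup * 4 * - sqrt`. Note that the stack-state comments are effectively the only way to follow it:

```python
import math

# Hypothetical Python stand-ins for Joy's stack words.
def swap(s):   s[-1], s[-2] = s[-2], s[-1]
def dup(s):    s.append(s[-1])
def mul(s):    b = s.pop(); s[-1] *= b
def sub(s):    b = s.pop(); s[-1] -= b
def rollup(s): s[-3], s[-2], s[-1] = s[-1], s[-3], s[-2]   # X Y Z -> Z X Y
def sqrt_(s):  s[-1] = math.sqrt(s[-1])

def radical(a, b, c):
    s = [a, b, c]        # stack:  a  c? no -- track it or get lost:
                         #         a  b  c
    swap(s)              #         a  c  b
    dup(s)               #         a  c  b  b
    mul(s)               #         a  c  b*b
    rollup(s)            #         b*b  a  c
    mul(s)               #         b*b  a*c
    s.append(4)          #         b*b  a*c  4
    mul(s)               #         b*b  4*a*c
    sub(s)               #         b*b - 4*a*c
    sqrt_(s)             #         sqrt(b*b - 4*a*c)
    return s[-1]

print(radical(1, -3, 2))  # 1.0
```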

Re: Quadratic formula


In Factor, the quadratic formula is a non-problem. Just embed a domain-specific language for infix arithmetic and be done with it. It is very easy to write such a language that compiles down to concatenative code which then runs through Factor's native compiler. It already exists; see contrib/algebra/ in the distribution.

About the rate of errors being higher: my experience is not conclusive either way. I have found that the time going from noting a bug, to resolving it, to fixing it is very short in Factor. Unit tests flow easily, and debugging is pure joy with tools like our GUI inspector, single-stepper, various tracing tools, and of course, the grand-daddy of them all, the REPL. Many stack errors can be caught statically, using the stack effect inferencer. I will admit, I have had a few several-day-long debugging marathons; however, these had nothing to do with the stack, but with issues such as buggy relocation in the compiler's code generator leading to clobbered code, or incorrect rewriting rules in the optimizer, and so on.

On the other hand, the other large piece of software I maintain, jEdit, is written in Java. Like all Java software, it is a pain to maintain, and I always find new, mysterious bugs that are almost impossible to resolve, due to Java's non-interactive, static nature, and laborious edit/compile/run cycle. By slava at Wed, 2005-08-10 06:39 | login or register to post comments

Not the point


Just embed a domain-specific language for infix arithmetic and be done with it.

And then you're not writing the arithmetic in a concatenative language. The example just happened to use arithmetic, though; the criticisms (e.g. about the potential error-proneness of the stack implicitly introducing variables without names) are not specific to that case.

It is very easy to write such a language that compiles down to concatenative code which then runs through Factor's native compiler.

Right -- I didn't say that concatenative languages aren't suitable as intermediate languages (although I wonder why you would use them in preference to a register-based IL, if you're targeting a register-based machine).

I have found that the time going from noting a bug, to resolving it, to fixing it is very short in Factor.

And in applicative languages as well, usually, if you notice the bug.

... jEdit is written in Java. Like all Java software, it is a pain to maintain, ...

Did I inadvertently defend Java? By dhopwood at Wed, 2005-08-10 14:07 | login or register to post comments

"Higher-Order" intermediate language?


Maybe it could function this way, as something easier to reason about than a register-based machine? It seems there are a few orthogonal ways languages like Forth differ from e.g. Scheme:

* postfix vs. prefix
* stack-manipulation vs. named-variables
* eschewing of parentheses (though they might make for greater readability)

Scheme & other languages (C) can of course use a stack; they just address it differently. [Anyone use a postfix language that uses named variables?] By dbfaken at Wed, 2005-08-10 15:31 | login or register to post comments

Monads
Just embed a domain-specific language for infix arithmetic and be done with it.

And then you're not writing the arithmetic in a concatenative language.

Let's try rewriting that a little bit differently...

Just use monads (like an embedded domain-specific language) for sequencing side effects and be done with it.

And then you're not writing functions in a pure language.

By Greg Buchholz at Wed, 2005-08-10 18:02 | login or register to post comments

I agree
I agree. The corollary to your point about names is the presence of all the names in the Joy code which have nothing to do with the problem domain, like pop, swap, dup, and rollup. That's a classic sign of a low-level language (think of assembler). Also, in concatenative languages, the programmer depends heavily on abstractions which the language doesn't enforce or even know about: every time you use pop, you're making an assumption about what's on the stack, but that assumption could be violated by unrelated code, as though an expression such as:
(let ((x 33)) (foo) x)

...could return a value other than 33, if foo had a bug. This kind of non-local side effect for bugs is reminiscent of something like a pointer error. In other contexts, this sort of behavior is called "unsafe": that an abstraction is not protected from unauthorized outside interference. In the concatenative context, you can play language lawyer and claim that since the language doesn't have the notion of a local binding, nothing is being violated. However, that would only be a good argument if programmers didn't need to rely heavily on faking local bindings in concatenative languages. By Anton van Straaten at Wed, 2005-08-10 15:48 | login or register to post comments
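The failure mode Anton describes can be sketched directly. In this editorial Python illustration (the explicit stack list and the word `foo` are hypothetical, mirroring the Scheme analogy above), `foo` is supposed to have stack effect ( -- ), but a bug makes it pop one item too many, silently destroying a value that "belongs" to its caller:

```python
def foo(stack):
    stack.append("temp")
    stack.pop()
    stack.pop()   # bug: pops the caller's value, not foo's own temporary

def caller():
    stack = []
    stack.append(33)   # the caller's "local binding", like (let ((x 33)) ...)
    foo(stack)         # foo has no business touching it...
    return stack.pop() if stack else None

print(caller())  # None, not 33: the binding was clobbered by unrelated code
```

Nothing in the language flags this; the caller's value is protected only by a convention that every word honors its stack effect.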

Incorrect
The corollary to your point about names is the presence of all the names in the Joy code which have nothing to do with the problem domain, like pop, swap, dup, and rollup

What about names like let, labels, multiple-value-bind and setf in Common Lisp programs? They also have nothing to do with the problem domain. Is CL low level, "like assembler"?

This kind of non-local side effect for bugs is reminiscent of something like a pointer error. In other contexts, this sort of behavior is called "unsafe": that an abstraction is not protected from unauthorized outside interference.

Factor has a stack effect inferencer that could be used to check that foo does in fact return the right values. Suppose you write a function that takes a mutable object as input, and it violates its contract by mutating the object into an incorrect state; how is this any different from a function with an invalid stack effect? Are you going to argue in favor of purely functional languages, which are solely the domain of academics?

By slava at Wed, 2005-08-10 16:11 | login or register to post comments
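As a rough editorial illustration of what a stack effect inferencer does (this is a toy sketch, not Factor's actual algorithm; the `EFFECTS` table and `infer` function are invented for the example): give each primitive a declared inputs/outputs pair, then fold over a word's definition to compute its net effect.

```python
# Declared stack effects: word -> (values consumed, values produced).
EFFECTS = {"dup": (1, 2), "swap": (2, 2), "pop": (1, 0), "*": (2, 1)}

def infer(words):
    """Infer (inputs, outputs) for a sequence of words.

    Tracks the running depth relative to the starting stack; the deepest
    dip below zero is how many inputs the whole sequence needs.
    """
    depth, min_depth = 0, 0
    for w in words:
        ins, outs = EFFECTS[w]
        depth -= ins
        min_depth = min(min_depth, depth)
        depth += outs
    return (-min_depth, depth - min_depth)

print(infer(["dup", "*"]))  # (1, 1): a "square" word takes one value, leaves one
```

A checker built on this could compare the inferred effect against a declared one and reject mismatches before the code ever runs, which is the kind of static catch Slava refers to.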

Incorrect pop Correct push


What about names like let, labels, multiple-value-bind and setf in Common Lisp programs? They also have nothing to do with the problem domain. Is CL low level, "like assembler"?

First, I'm referring partly to frequency of occurrence. When you look at a piece of code, what stands out? If you see very few names that relate to the problem domain, and many names that relate to some abstraction in the programming language, that's a problem for the comprehensibility of the code. Second, many of the names you mentioned are used to introduce names that are relevant to the problem domain. As such, they're still more meaningful to the problem being solved than stack operators are. Even concatenative languages acknowledge that names are important, since they all support global names. What logic is used to justify the idea that you can only have global names? It boils down to something like "because it's easy to implement a language that way, and possible to get away with it and work around it". That's not a good argument from the language user's perspective. It might explain why you, as an implementor of such a language, are enthusiastic about them, though.

Factor has a stack effect inferencer that could be used to check that foo does in fact return the right values.

Could be used? That implies that this is not something which happens automatically, to transparently guarantee the safety of unrelated "bindings" (or whatever you want to call them in a concatenative language). If that's the case, then this creates an area that the user has to worry about, which would otherwise not need to be worried about. It's hard to see how that would be a benefit.

Suppose you write a function that takes a mutable object as input, and it violates its contract by mutating the object into an incorrect state; how is this any different from a function with an invalid stack effect?

It's different in that the caller knows that he's passed a mutable object to that function -- it's not a case where an essentially unrelated function can destroy an abstraction which it has no connection to. However, it's true that side effects are a major cause of problems, which is why the most powerful languages provide ways to control side effects. Having all programs depend implicitly on a stack which can be directly side-effected by any function is a step backwards in this respect.

Are you going to argue in favor of purely-functional languages, which are solely the domain of academics?

First, it's demonstrably false that they are "solely the domain of academics". Simple refutation: Erlang is used to run telecom switches, and it's a purely functional language. There are plenty more examples where that came from.

Second, while I wouldn't use purely functional languages in every context, I do use languages that encourage and support the controlling of side effects, and I do write code which avoids side effects for the most part. This has numerous provable benefits -- if you want to start arguing against them, I'd point you here.

Lest I be taken as entirely negative about concatenative languages, I should mention that I've written systems in Forth, experimented with Joy, and written a toy Joy interpreter. I think they're very interesting languages, and can be fun and useful in certain kinds of applications, but I don't think that those languages can support the kind of claims you seem to want to make. If Factor can support such claims, that would be interesting, but you're not going to convince many people by blustering about it and making over-the-top criticisms of other, proven languages. By Anton van Straaten at Wed, 2005-08-10 17:02 | login or register to post comments

More incorrect claims


What logic is used to justify the idea that you can only have global names?

I'm not sure what logic is used, because I don't follow it. Factor has variables and nested namespaces and scope. Did you even bother to look at it?

Could be used? That implies that this is not something which happens automatically, to transparently guarantee the safety of unrelated "bindings" (or whatever you want to call them in a concatenative language). If that's the case, then this creates an area that the user has to worry about, which would otherwise not need to be worried about. It's hard to see how that would be a benefit.

I could easily add some tools to do this automatically for every new word definition.

Lest I be taken as entirely negative about concatenative languages, I should mention that I've written systems in Forth, experimented with Joy, and written a toy Joy interpreter.

So your only experience with concatenative languages is having written a toy Joy interpreter? If I write a toy interpreter for a simple applicative language, like Scheme, then notice that this toy interpreter is hard to work with, should I conclude that applicative languages on the whole are useless?

If Factor can support such claims, that would be interesting, but you're not going to convince many people by blustering about it and making over-the-top criticisms of other, proven languages.

Show where I have been "blustering" and making over-the-top criticisms. By slava at Wed, 2005-08-10 17:09 | login or register to post comments

Is not/is so titles are boring

I'm not sure what logic is used, because I don't follow it. Factor has variables and nested namespaces and scope. Did you even bother to look at it?

Well, we need to be clear about whether we're discussing actual concatenative languages in general, or Factor in particular. The discussion started out more generally. I've looked briefly at Factor, some months ago, but I didn't find that it sufficiently addressed the sort of concerns I've mentioned. Since it's still under development, it'll be interesting to see how it turns out over time. Regarding my point about global names, I didn't state that clearly. What I was referring to is that these languages demonstrate that they recognize that supporting safe bindings between names and values is important, but within individual procedures, they don't provide such a feature, i.e. temporary name/value bindings that can be relied on for the life of a procedure. Does Factor provide such a feature?

I could easily add some tools to do this automatically for every new word definition.

That sounds as though it would be useful. One question is that as you add such features, might you reach a point where it becomes apparent that simply addressing the issue at the language level is a good idea?

Lest I be taken as entirely negative about concatenative languages, I should mention that I've written systems in Forth, experimented with Joy, and written a toy Joy interpreter. So your only experience with concatenative languages is having written a toy Joy interpreter?

Is it your intent to get into a definitional argument now, and say that Forth is not concatenative? If so, then I'd suggest we abandon the poorly-defined term "concatenative" and switch to something like "direct stack manipulation languages", which is more meaningful in any case. I claim that the combination of my experiences with Forth and Joy, combined with knowledge of programming language semantics and experiences with a variety of other languages, makes it possible for me to make comparisons and draw conclusions related to certain properties of these languages.

Show where I have been "blustering" and making over-the-top criticisms.

I've already responded to specific cases, but some of the quotes in question are as follows:

"This is a richer level of functionality than offered by most languages discussed on this web site"

"one could even conclude that applicative languages are totally non-productive, and not even capable of implementing their own optimizing native compilers."

"Are you going to argue in favor of purely-functional languages, which are solely the domain of academics?"

By Anton van Straaten at Wed, 2005-08-10 19:55 | login or register to post comments

You're just trolling


I've already responded to specific cases, but some of the quotes in question are as follows:

You see, the very first post to this thread was your disparaging claim that concatenative languages are unsuitable for developing applications. Instead of discussing the Kont language, you decided to troll instead. You don't even bother refuting my points, and instead draw blanket conclusions and generalizations based on your limited experience with Joy and Forth. I know you wouldn't be very happy if I went and jumped into threads where discussion of ML and Haskell is taking place, claiming that static typing is overly restrictive. So why do you do the equivalent thing here? By slava at Wed, 2005-08-10 22:29 | login or register to post comments

Substance, please
You see, the very first post to this thread was your disparaging claim that concatenative languages are unsuitable for developing applications.

I responded to another post which made a point which I agreed with. I said concatenative languages show a particular sign of being a low-level language. I believe that's true, and I've defended that in exchanges with others. I'm not even sure if you disagree, since you haven't addressed my points, but I assume from your general reaction that you disagree.

Instead of discussing the Kont language, you decided to troll instead.

I'm not trolling. However, I noticed that instead of discussing the Kont language yourself, you decided to talk about Factor. I'm having difficulty discerning the rules you think I should be following here.

You don't even bother refuting my points

Which points are those? Afaict, I've responded to every one of your substantive points. I suggest you reread the thread more carefully.

and instead draw blanket conclusions and generalizations based on your limited experience with Joy and Forth.

You're free to dispute my points, with rational arguments, but lashing out at academia, applicative languages, "most languages discussed on this web site", and research languages doesn't qualify. It's particularly ironic given that your own language would, by some definitions, qualify as a research language.

I know you wouldn't be very happy if I went and jumped into threads where discussion of ML and Haskell is taking place, claiming that static typing is overly restrictive.

Heh. Your assumption is incorrect, assuming that what you have to say has more basis in rational argument than your comments on those languages so far.

So why do you do the equivalent thing here?

This is a forum for discussion of programming languages. I'm interested in the semantics of programming languages, and among other things, how those semantics relate to the user experience of programming. My original post in this thread wasn't intended as a critique of Kont specifically (and besides, Brian Mastenbrook has already heard my opinion on this issue). Rather, it was an attempt to characterize one of the semantic issues that distinguishes concatenative languages from languages with name-based procedure-local abstractions (a.k.a. local variables). I believe that this issue is one of the factors which underlies a common negative user reaction to these languages. As someone interested in concatenative languages, I would think you might have some interesting thoughts on the matter, but for whatever reason, you haven't chosen to offer those yet. Other people have, though, which is the reason I'm here. By Anton van Straaten at Thu, 2005-08-11 00:20 | login or register to post comments

Let me reiterate those substa


Let me reiterate those substantive points, if you don't understand Slava (even though he's being perfectly clear). First, I'd just like to remind you that you won't convince Slava or me to stop using concatenative languages because we're perfectly comfortable with them, no matter how low-level you prove them to be.

I said concatenative languages show a particular sign of being a low-level language.

I don't think concatenative languages are more low-level. They just handle things differently, in a way that other languages happen to use as an intermediate form. This does not mean at all that it is worse, just different. If a C compiler uses an intermediate form that is single assignment only, does that mean that C is higher-level than any kind of single assignment form? Of course not.

Rather, it was an attempt to characterize one of the semantic issues that distinguishes concatenative languages from languages with name-based procedure-local abstractions (a.k.a. local variables). I believe that this issue is one of the factors which underlies a common negative user reaction to these languages.

How are local variables abstractions? Could you explain that? How are they more abstract and flexible than stack shuffling operators? By Daniel Ehrenberg at Thu, 2005-08-11 17:33 | login or register to post comments

reference sense pop


First, I'd just like to remind you that you won't convince Slava or me to stop using concatenative languages because we're perfectly comfortable with them, no matter how low-level you prove them to be.

You sound almost as though you're nervous I might succeed! ;) But I really have no interest in changing your choice of programming language. No, the point is to understand more about programming language features and their consequences. Of course, this can lead to changing preferences, programming style, etc. -- mine have certainly changed a great deal over the years. But more importantly, a good understanding of PL features and how they affect programs and programming can make you a better programmer in any language. It can also help in assessing the appropriateness of a language in a given situation, and help with cutting through hype about particular programming languages.

I don't think concatenative languages are more low-level. They just handle things differently, in a way that other languages happen to use as an intermediate form. This does not mean at all that it is worse, just different.

That particular issue is different in a way that's more low-level. This has been covered in a few posts, and connected to Alan Perlis' observation that low-level implies that "programs require attention to the irrelevant". The irrelevant thing in this case is direct stack manipulation, which you've acknowledged: "there are only a few cases where the stack really matters". You've suggested elsewhere that the same is true of lexical names, but I've addressed this in my response to your last point below.

If a C compiler uses an intermediate form that is single assignment only, does that mean that C is higher-level than any kind of single assignment form? Of course not.

C code is certainly usually higher-level than any internal single-assignment form it's likely to be compiled to. The way you can tell that is to look at the extra operations that are being performed in the single-assignment code, to deal with the constraints of single assignment -- the fact that those extra operations aren't necessary in the C source indicates that they are irrelevant to the expression of the original problem, and indicates that the single-assignment form is lower level. If your concern is with generalization, i.e. drawing conclusions about "...any kind of single assignment form", that's not what I'm doing.
I'm looking at specific languages that regularly embed direct stack manipulation operators in their code, and concluding that this is a lower-level way of dealing with abstraction over values than using names, in most cases.

How are local variables abstractions? Could you explain that? How are they more abstract and flexible than stack shuffling operators?

A local variable is an abstraction in the same way that any name is an abstraction. The name refers to some other thing, and we can use the name and manipulate whatever it refers to without knowing which specific thing it refers to. That's abstraction. When you push a value on a stack and later apply some operation to that value, that's also a kind of abstraction. However, not all abstractions are equal. Direct stack manipulation alone doesn't scale very well, as every concatenative programmer already knows. As procedures get bigger, keeping track of values via stack manipulation gets more complex. Early on in the thread, we were reminded that "any time you do lots of stack manipulations is a sign that you need to refactor." And what do you do when you refactor? You create names, which (implemented properly) are more scalable, safer abstractions than storing values on a stack.

So one question is, if names are so useful and important when code scales up past a single procedure, mightn't they be useful locally, within a procedure? And of course, many languages demonstrate that they are. One of the critical features of names is that they can actually serve two purposes:

1. A name acts as a way to refer to a value, i.e. a name is a "reference", or can be said to "denote" some value.
2. A name can provide information to programmers about the value, or its role in the computation, i.e. a name can also have a "sense" or "connotation".

This is why we try to pick informative names, and why the expression you gave elsewhere in this thread was compiling dataflow optimize linearize simplify generate, instead of a b c d e f. Defined appropriately, both expressions could have the exact same computational effect, but one is much more meaningful to us than the other. When we omit names from our code, we're omitting one of the only things which connects the code to the problem it's solving, and that's something that should be done thoughtfully. If a language commonly forces you to omit names, that's something worth taking a close look at. Using a stack to abstract values hardly ever serves the second purpose above. However, naming every value isn't necessary or appropriate, either. In the expression quoted above, it's not necessary to name the intermediate values, and most languages support this sort of omission of names in one form or another. But omitting names is one thing; replacing them with operators that manipulate an abstraction (the stack) that's not otherwise directly relevant to the computation is something else entirely. Using stack shuffling operators is a low-level detail no matter how you look at it, and wherever this replaces the use of names which might be informative, you have a double loss of the program's ability to express its algorithm to the human reader.
In general, it's difficult to argue that expressing most problems in terms of an essentially LIFO stack is an approach that makes sense in and of itself. The most convincing arguments, I think, have to do with the benefits of that approach to language implementation: by pushing some of the work of arranging the abstraction mechanism onto the user, the job of the language implementation is simplified, which can have various benefits. I'm sure there's plenty in what I've written above for a concatenative fan to object to. Keep in mind I've programmed in these languages, and I'm familiar with the arguments. I could make them myself: you don't need local names because factoring keeps procedures small enough to be tractable; non-local names used in conjunction with the stack locally works out very well most of the time; languages which force you to name e.g. function return values just to be able to rearrange them to pass to another function are similarly obscuring the solution to the problem; etc. etc. My main interest is in characterizing the differences in a more specific way than just saying something along the lines of "I don't like this", or "I think this is a fun way to program". There are obviously differences, but if we can't describe their effect clearly other than just listing the technical features, that implies a gap in our understanding. By Anton van Straaten at Fri, 2005-08-12 03:55 | login or register to post comments
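The "reference vs. sense" distinction above can be made concrete with an editorial Python sketch (the stage names echo the compiler pipeline quoted in the thread, but the bodies are invented stand-ins, and only three stages are shown). Both definitions below compute the same thing; only one carries "sense":

```python
# Invented stand-in stages; a real compiler pipeline would do actual work.
def dataflow(ir):  return ir + ["dataflow"]
def optimize(ir):  return ir + ["optimize"]
def linearize(ir): return ir + ["linearize"]

def compile_word(ir):
    # Named stages carry "sense": the reader sees what happens, and why.
    return linearize(optimize(dataflow(ir)))

a, b, c = dataflow, optimize, linearize

def compile_word2(ir):
    # The same computation with the sense stripped away: correct but opaque,
    # like referring to values only by stack position.
    return c(b(a(ir)))

print(compile_word([]) == compile_word2([]))  # True
```

Computationally the two are identical; the entire difference is in what the names communicate to the human reader, which is exactly the second purpose of names described above.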

2Variable
In #comment-8723, Anton wrote:

So one question is, if names are so useful and important when code scales up past a single procedure, mightn't they be useful locally, within a procedure? And of course, many languages demonstrate that they are.

I'm not sure if the following quote supports your contention. I ran into it while skimming through "Forth Sorts Fruit Down Under" by Paul Frenger in ACM SIGPLAN Notices, vol. 40, no. 8, Aug. 2005 (not yet online):

The CF [ColdForth] manual has other interesting chapters to describe its features: numbers/parameters (CF supports local variables via stack frames), the memory manager, I/O handling (mostly for accessing files), strings, object-oriented extensions, TCP/IP capability and the multi-tasker, for example.

By el-vadimo at Wed, 2005-09-14 12:11 | login or register to post comments

2drop
Even concatenative languages acknowledge that names are important, since they all support global names. What logic is used to justify the idea that you can only have global names? It boils down to something like "because it's easy to implement a language that way, and possible to get away with it and work around it". That's not a good argument from the language user's perspective. It might explain why you, as an implementor of such a language, are enthusiastic about them, though.

Aside from the fact that several concatenative languages do actually support local variables, the reason they're frowned upon is that in most cases they are an impediment to proper factoring. In a great deal of applicative code, the variables are simply assigned useless placeholder names. This should indicate something to you: most of the time, those names are unnecessary.

However, it's true that side effects are a major cause of problems, which is why the most powerful languages provide ways to control side effects.

Believe it or not, not everyone considers Haskell and friends to be the pinnacle of language design or expressive power. It's worrying to me that this claim of "most powerful" is accepted here essentially on faith. How do you define power? The APL dialects and derivatives, for example, are all much more expressive than any purely functional language. What do you think of tacit programming in J?

Simple refutation: Erlang is used to run telecom switches, and it's a purely functional language. Erlang is not purely functional (ets, dets, mnesia, process dictionary, the list goes on). If Factor can support such claims, that would be interesting, but you're not going to convince many people by blustering about it and making over-the-top criticisms of other, proven languages. The original poster likened concatenative programming to "writing a Turing machine program, or directly in the lambda calculus using de Bruijn indices." Perhaps you should reconsider whose claims are 'over the top'. By Mackenzie Straight at Wed, 2005-08-10 23:38 | login or register to post comments

Stacks of responses
Aside from the fact that several concatenative languages do actually support local variables Do those local variables protect against the issue I've raised, i.e. are they safe from being changed by unrelated stack operations? the reason they're frowned upon is because in most cases they are an impediment to proper factoring. Is that an issue specific to the concatenative languages? Otherwise, I find it hard to see how names are an impediment, in languages with traditional local variables. Can you give an example? In a great deal of applicative code, the variables are simply assigned useless placeholder names. This should indicate something to you: most of the time, those names are unnecessary. I don't see how using "push" and "pop" to abstract values helps address that, in most cases. There are cases where a stack is certainly the right abstraction to use, of course. Believe it or not, not everyone considers Haskell and friends to be the pinnacle of language design or expressive power. I didn't say that. I said that "it's true that side effects are a major cause of problems, which is why the most powerful languages provide ways to control side effects." I would include many imperative languages in that list, including e.g. C++, Java, Ada: languages which provide qualifiers like "const" that allow the programmer to guard in a meaningful way against unwanted side effects, not just on individual variables but also over larger scopes, such as declaring that a method cannot side-effect the object it's associated with. Control of side effects is a major theme in very demanding systems, not just at the language level but in the design of systems. Functional languages just implement this at the language level on a more comprehensive basis. It's worrying to me that this claim of "most powerful" is accepted here essentially on faith. How do you define power? Hopefully my response above gives a better idea of what I was getting at.
Obviously, "power" is one of those things, like "expressiveness", which is difficult to formalize satisfactorily, but it could perhaps be characterized as the ability to handle very demanding tasks, preferably demonstrated in practice. The APL dialects and derivatives, for example, are all much more expressive than any purely functional language. What do you think of tacit programming in J?

I consider tacit programming, which has strong similarities to point-free style, to be higher-level than direct stack manipulation, for the reasons stated elsewhere in this thread. However, all of these things have pros and cons. The cons tend to have to do with being more difficult to reason about, especially as expressions scale up. Ideally, a language should be able to support a variety of styles, so that the appropriate one can be picked in a given situation. Of all of these, I don't see direct stack manipulation as one that's likely to be the best choice in most situations. On the subject of scaling up expressions, the traditional concatenative answer to that is that you should factor larger expressions into multiple words. However, that means you're relying on names for abstraction again, just on a scale greater than the local procedure. How does this reconcile with the idea that using local names is an impediment to refactoring? Erlang is not purely functional (ets, dets, mnesia, process dictionary, the list goes on). Define it however you like: "very functional"? Do you consider Haskell purely functional, even with the I/O monad and the things it can talk to? My real point was that a strong emphasis on avoidance of side effects is not something that's solely the domain of academics. I consider the definition of "purely" here to be a quibble; e.g., anything that interacts with a traditional database is ultimately impure, when considered as a whole system. The original poster likened concatenative programming to "writing a Turing machine program, or directly in the lambda calculus using de Bruijn indices." Perhaps you should reconsider whose claims are 'over the top'. That's a fair point, although if you focus purely on how local value abstraction works, I can see his point. Having the option to use de Bruijn indices might be an improvement over having no alternative to direct stack manipulation, and I don't think it's over the top to say that.
Also, the original poster did also make some other real points. By Anton van Straaten at Thu, 2005-08-11 01:18 | login or register to post comments

What's in a name?
Aside from the fact that several concatenative languages do actually support local variables, the reason they're frowned upon is because in most cases they are an impediment to proper factoring. In a great deal of applicative code, the variables are simply assigned useless placeholder names. This should indicate something to you: most of the time, those names are unnecessary. Programmers, and typical programming styles in various languages, differ considerably in how many named intermediate variables they use (personally I find the number of intermediates in the above Scheme code a bit excessive). But it's generally agreed, I think, that introducing variables with useless names is bad style. In functional-applicative languages, it's almost always possible to avoid this. The need for temporaries with useless names, e.g. loop indices, in imperative-applicative languages (some more than others) is indeed a valid criticism of them. By dhopwood at Thu, 2005-08-11 01:26 | login or register to post comments

A temporary by any other name...

The need for temporaries with useless names, e.g. loop indices, in imperative-applicative languages (some more than others) is indeed a valid criticism of them. I disagree that temporaries have "useless names". Loop indices may not be referenced other than as a control structure, but part of the essence of reasoning within an imperative paradigm is knowing the current state. Not being able to name part of that state is what I would consider to be a weakness. The only useless names I can think of are for function arguments that aren't used. And in better languages, you don't actually have to name those. ;> By David B. Held at Thu, 2005-08-11 03:52 | login or register to post comments

Functional languages can "nam


Functional languages can "name part of the state". Do you consider not being able to not name part of the state a weakness? The only way to avoid it in many imperative languages would require programming in a dramatically different style than the language supports. By Derek Elkins at Fri, 2005-08-12 17:36 | login or register to post comments

Types?
I agree. The corollary to your point about names is the presence of all the names in the Joy code which have nothing to do with the problem domain, like pop, swap, dup, and rollup. That's a classic sign of a low-level language (think of assembler). Are the stack manipulation functions that you mention (pop, swap, dup, etc.) a sign of a low-level language or a linear language (like Linear Lisp)? Or are all linear languages low-level by definition? Also, in concatenative languages, the programmer depends heavily on abstractions which the language doesn't enforce or even know about: every time you use pop, you're making an assumption about what's on the stack, but that assumption could be violated by unrelated code That sounds like a plea for static typing. ...every time you use pop, you're making an assumption about what's on the stack, but that assumption could be violated by unrelated code Is it really that much different from the applicative language example? If your function takes three integers, how do you know you're calling it with the arguments in the correct order? Especially in the presence of macros?
(apply quadratic-formula '(1 0 -4)) (apply quadratic-formula (reverse '(1 0 -4)))
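The same pitfall can be sketched in Python (`quadratic_roots` is a hypothetical stand-in for the Scheme procedure above, not anyone's actual code): positional arguments carry the same "which order is this in?" risk that stack slots do, and a reversed argument list fails silently rather than loudly.

```python
import math

def quadratic_roots(a, b, c):
    """Roots of a*x^2 + b*x + c = 0 (assumes real roots and a != 0)."""
    d = math.sqrt(b * b - 4 * a * c)
    return ((-b + d) / (2 * a), (-b - d) / (2 * a))

args = (1, 0, -4)
print(quadratic_roots(*args))            # (2.0, -2.0) -- the intended call
print(quadratic_roots(*reversed(args)))  # (-0.5, 0.5) -- quietly solves a different polynomial
```

Nothing in either language flags the second call; the arity matches, so the error surfaces only in the answer.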

By Greg Buchholz at Wed, 2005-08-10 17:08 | login or register to post comments

More Linear
For a better reference to linear languages (which also mentions the quadratic formula!) see Linear Logic and Permutation Stacks--The Forth Shall Be First By Greg Buchholz at Wed, 2005-08-10 17:44 | login or register to post comments

Differences
Are the stack manipulation functions that you mention (pop, swap, dup, etc.) a sign of a low-level language or a linear language (like Linear Lisp)? Or are all linear languages low-level by definition? I would say that Linear Lisp is certainly lower-level than ordinary Lisp, yes. I'll admit I can't give a formal definition of "low-level" off the top of my head, but I'd be interested to hear (reasoned) arguments about how something like Linear Lisp or the concatenative languages are not lower-level. That sounds like a plea for static typing. Yes, it is. The question is what the costs of introducing it are. The costs of introducing a safe name abstraction are negative. The costs of introducing static typing are positive, and arguably not insignificant. In the static typing case, I find the costs high enough that I can easily justify using it in some cases, and avoiding it in others. In the case of relying primarily on direct interaction with a stack to store values, I can easily justify avoiding that in most cases. Is it really that much different from the applicative language example? If your function takes three integers, how do you know you're calling it with the arguments in the correct order? It's different at least in the sense that that's a local effect: the error is confined to the interface between the caller and the callee, and doesn't compromise unrelated abstractions elsewhere in the program. I'm not saying that any language with named bindings solves all such problems, I'm just saying that there's a spectrum of safety and levels of abstraction, and that a language with named bindings which can make some guarantees about those bindings is both safer and higher-level than a language that doesn't have those features. By Anton van Straaten at Wed, 2005-08-10 18:10 | login or register to post comments

low vs. lower


I would say that Linear Lisp is certainly lower-level than ordinary Lisp, yes. I'll admit I can't give a formal definition of "low-level" off the top of my head, but I'd be interested to hear (reasoned) arguments about how something like Linear Lisp or the concatenative languages are not lower-level.

Well I don't know if I can define "low-level" either, but off the cuff, I'd point to higher order functions and a clean algebra of programs as indicators of higher-levelness. Maybe the assembly language remark was aimed more at Forth and less at something like Joy? By Greg Buchholz at Wed, 2005-08-10 18:37 | login or register to post comments

The Joy of obfuscated abstractions


Well I don't know if I can define "low-level" either, but off the cuff, I'd point to higher order functions and a clean algebra of programs as indicators of higher-levelness. Maybe the assembly language remark was aimed more at Forth and less at something like Joy? I agree that HOFs and algebraic composability are high-level indicators. That moves a language like Joy up the spectrum I mentioned, but it doesn't necessarily make up for all of its problems. The Joy code sample near the beginning of this thread provides an example of low-level code, by my definition and Alan Perlis'. By Anton van Straaten at Wed, 2005-08-10 20:04 | login or register to post comments

Not low level


Though I don't particularly like that Joy code, concatenative languages do provide both of those things in your definition of high-level languages. HOFs are ubiquitous in Joy, Factor and Kont, under the name "combinators". They are used more frequently than HOFs in any applicative language I've seen. As for theory, there's tons of concatenative theory. Check out the Joy and Factor websites for more details, but really, there's tons of this stuff. By Daniel Ehrenberg at Thu, 2005-08-11 21:43 | login or register to post comments

"Low level"
I'll admit I can't give a formal definition of "low-level" off the top of my head There's always Alan Perlis's definition: A programming language is low level when its programs require attention to the irrelevant. Although I don't think it's really completely correct to label languages high or low level. At most you can say that writing in a given language generally results in high or low level code. Anyway, by that definition, concatenative code is low-level because it forces you to write your solution in terms of a stack, despite the fact that there's no stack in your problem specification. Likewise, monads result in low-level code, because you're forced to think about category theory just to print "hello" :-)

By Dan Winship at Wed, 2005-08-10 19:30 | login or register to post comments

I agree with everything you w


I agree with everything you wrote. :) By Anton van Straaten at Wed, 2005-08-10 20:05 | login or register to post comments

Analogy...
In the concatenative context, you can play language lawyer and claim that since the language doesn't have the notion of a local binding, nothing is being violated. However, that would only be a good argument if programmers didn't need to rely heavily on faking local bindings in concatenative languages. Sounds a lot like type errors in dynamically checked languages... But now I guess I'm the one trolling... ;) By Matt Hellige at Wed, 2005-08-10 17:31 | login or register to post comments

The semantics of Joy


Indeed, (let ((x 33)) (foo) x) can return something other than 33; specifically, it can return ⊥ if foo does not terminate. Likewise, executing pop on an empty stack likewise produces ⊥. This interpretation depends on the view that the words of concatenative languages are functions from sequences to other sequences, which is Manfred's fundamental insight. Joy is different from other concatenative languages because it is functional: it is impossible to make any changes of state except through the file functions (the fact that read, write, etc. do not pop the file object off the stack constitutes the concatenative analogue of the I/O monad). Global words can be defined at read time, but cannot be changed at run time; thus they are analogous to function definitions rather than global variables. By johnwcowan at Wed, 2005-08-10 20:40 | login or register to post comments
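Manfred's view of words as functions from stacks to stacks can be sketched in Python (a minimal toy interpreter, my own illustration, not Joy itself): each word maps a stack to a new stack, and concatenating programs is just composing those functions left to right.

```python
def push(x):
    """A literal is a word that pushes x onto the stack."""
    return lambda s: s + [x]

def dup(s):  return s + [s[-1]]                  # duplicate top item
def add(s):  return s[:-2] + [s[-2] + s[-1]]     # replace top two with their sum
def mul(s):  return s[:-2] + [s[-2] * s[-1]]     # replace top two with their product

def run(program, stack=None):
    """Interpret a program (a list of words) by left-to-right composition."""
    s = [] if stack is None else stack
    for word in program:
        s = word(s)
    return s

# the Joy-ish program "1 4 dup * +", i.e. 1 + 4*4
print(run([push(1), push(4), dup, mul, add]))    # [17]
```

Splitting this word list anywhere yields two valid programs whose composition is the original, which is the concatenative property under discussion.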

The Joy of the next 700 titles


Indeed, (let ((x 33)) (foo) x) can return something other than 33; specifically, it can return if foo does not terminate. True, but irrelevant, since this problem applies to all useful general-purpose languages, so doesn't affect a relative comparison of features between languages. Likewise, executing pop on an empty stack likewise produces Empty stack is a special case that's not as problematic, because it can be detected and flagged when it happens. The cases that are problematic are when the stack is not

empty, but is out of sync with the program's expectations. That's when the programmer's implicit abstractions are violated. By Anton van Straaten at Wed, 2005-08-10 20:50 | login or register to post comments

Static vs. Dynamic


The cases that are problematic are when the stack is not empty, but is out of sync with the program's expectations. Is that a problem caused by concatenativityness (!), or by dynamicism? If your language allows you to push 1 or 2 things on the stack, depending on a conditional, then I'll agree it has the potential to become a mess. But it seems like you should be able to prevent getting "out-of-sync" with static analysis. Like requiring both branches of an "if" statement have the same stack effect. Am I missing something? By Greg Buchholz at Wed, 2005-08-10 21:05 | login or register to post comments
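The kind of check Greg describes can be sketched in Python (a toy effect checker over assumed word arities; Factor's real inferencer is more sophisticated): compute each branch's stack effect and require that the two arms of an "if" agree.

```python
# Declared (inputs consumed, outputs produced) for a few words -- assumed arities.
EFFECTS = {"dup": (1, 2), "drop": (1, 0), "swap": (2, 2), "+": (2, 1)}

def effect(words):
    """Overall (inputs needed, outputs left) for a sequence of words."""
    need = height = 0
    for w in words:
        i, o = EFFECTS[w]
        height -= i
        need = max(need, -height)   # deepest the sequence reaches below start
        height += o
    return need, need + height

def branches_agree(then_branch, else_branch):
    """Both arms of an 'if' must have the same effect (up to padding with
    untouched items, which is equivalent to equal net height change)."""
    (i1, o1), (i2, o2) = effect(then_branch), effect(else_branch)
    return o1 - i1 == o2 - i2

print(effect(["swap", "drop"]))                               # (2, 1)
print(branches_agree(["dup", "+"], ["swap", "drop", "dup"]))  # True: both net 0
print(branches_agree(["dup"], ["drop"]))                      # False: +1 vs -1
```

A checker like this catches "out-of-sync" branches statically, at the cost of rejecting words whose effect depends on run-time data.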

Static analysis
The Factor language apparently includes a stack effect inference tool which performs such analysis. The documentation mentions combining unit testing with stack effect inferencing to check the stack effect of procedures (words). This seems like a lot of trouble to go to for a kind of safety that other languages get for free. I'm not saying that this issue alone makes concatenative languages unfit for human consumption. However, I think the negative reaction to the tractability of these languages that's been expressed by some posters in this thread is not an uncommon one, and I've attempted to identify a couple of the semantic issues which underlie this reaction. In this particular case, I'm saying that even if you get past the lack of procedure-local name/value bindings, the approach these languages take to local value abstraction is less safe and requires more care than languages with name-based local abstraction. By Anton van Straaten at Wed, 2005-08-10 22:35 | login or register to post comments

empty stacks
Likewise, executing pop on an empty stack likewise produces ⊥

Empty stack is a special case that's not as problematic, because it can be detected and flagged when it happens. The cases that are problematic are when the stack is not empty, but is out of sync with the program's expectations. That's when the programmer's implicit abstractions are violated.

There are no problems at all about empty stacks. Factor solves the stack underflow problem by mprotecting (or something like that) the memory directly underneath the stack, triggering a segfault if there's underflow. Should this happen, the segfault can be caught like any other exception, and at the toplevel of the REPL it's caught automatically and printed out, also like any other exception. All this stuff about ⊥ is useless outside of research. An infinite loop has a very different outcome than stack underflow. By Daniel Ehrenberg at Thu, 2005-08-11 17:50 | login or register to post comments

Not just low-level and unsafe, but complex


The corollary to your point about names is the presence of all the names in the Joy code which have nothing to do with the problem domain, like pop, swap, dup, and rollup. That's a classic sign of a low-level language (think of assembler). Actually I think the more important point is why you should have to use six of these (pop, swap, dup, rollup, map, ternary) in such a simple example. By dhopwood at Thu, 2005-08-11 00:48 | login or register to post comments

The original thread


As long as we're already discussing it, we might as well mention the original thread that spawned the quadratic formula article. Also note a previous discussion on point-free programming on LtU. And some might be interested in Pointless Haskell. By Greg Buchholz at Thu, 2005-08-11 01:45 | login or register to post comments

Effects&type system for Scheme? :-)


I guess you meant that though it's absolutely possible to have a PL in which the value of (let ((x 33)) (foo) x) is not 33 (e.g., because of effects - exceptions, nontermination, continuations, etc.), such a PL is less safe than is reasonable? Here is another example of non-local bugs: a badly-formed HTML post in a naive weblog system can break the whole page. By Andris Birkmanis at Thu, 2005-08-11 17:03 | login or register to post comments

No special machinery required


I guess you meant that though it's absolutely possible to have a PL in which the value of (let ((x 33)) (foo) x) is not 33 (e.g., because of effects - exceptions, nontermination, continuations, etc.), such a PL is less safe than reasonable? No, I meant that assuming normal completion of foo, one can rely on a value of 33 being returned from that expression. If foo has an exception, that's fine, and presumably happens for a reason. The case I'm pointing to as undesirable is if the expression ends up returning 42 or the string "gotcha" instead, i.e. the program produces an incorrect result.

Here is another example of non-local bugs: a badly-formed HTML post in a naive weblog system can break the whole page. Yes. No-one in their right mind would defend such a system. ;) By Anton van Straaten at Thu, 2005-08-11 17:17 | login or register to post comments

there's nothing to stop you (


there's nothing to stop you (or rather, a language implementer) from labelling items on the stack. in otuto you'd have something like
{ quadratic | a b c } => { roots ~ b sqrt - * 4 * a c * b b * 2 a } where { roots | x y z } => { / + x y z / - x y z }

which seems clear enough, apart from the RPN. disclaimer - i can't remember the exact details of otuto v1, and v2 is stalled until i gain enough patience to return to the uml tool (i'm trying to implement it in c++ for various (vaguely financial) reasons). By andrew cooke at Wed, 2005-08-10 16:50 | login or register to post comments

you can't. scopes are not concatenative


Once you add names then the code is no longer concatenative. For example if you were to add names for items on the stack as an extension of the Joy language, then each quotation would need a pointer to its lexical scope. You can no longer compose functions by concatenating lists of words because the two quotations may come from different lexical environments. [edit: otuto is very interesting, by the way. I hadn't realized what language your example was from.] By James McCartney at Wed, 2005-08-10 19:08 | login or register to post comments

Scopes are concatenative


If quotations were first-class types in Joy, rather than lists of words, they could hold a pointer to their lexical context. Appending two quotations could be implemented with a 'cord' data structure that simply holds references to both quotations, and when evaluated, iterates over the first one, followed by the second one, in each case using the correct lexical context. Concatenative languages (and Lisp, where lambdas are first-class too) are not all about cons cells and lists. By slava at Wed, 2005-08-10 20:01 | login or register to post comments
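The 'cord' idea can be sketched in Python (hypothetical classes of my own, not Factor's actual implementation): appending two quotations yields a node that runs each half in its own captured environment, so concatenation still denotes composition even when the halves come from different lexical contexts.

```python
class Quotation:
    """A sequence of words plus the lexical environment it was built in."""
    def __init__(self, words, env):
        self.words, self.env = words, env       # env: name -> value
    def run(self, stack):
        for word in self.words:
            stack = word(stack, self.env)       # each word sees *this* env
        return stack

class Cord:
    """Appended quotations: run the first, then the second."""
    def __init__(self, first, second):
        self.first, self.second = first, second
    def run(self, stack):
        return self.second.run(self.first.run(stack))

def ref(name):
    """A word that pushes the value bound to `name` in its own scope."""
    return lambda stack, env: stack + [env[name]]

q1 = Quotation([ref("x")], {"x": 1})            # x bound to 1 here
q2 = Quotation([ref("x")], {"x": 2})            # a different x, bound to 2
print(Cord(q1, q2).run([]))                     # [1, 2]
```

This addresses the objection in the sibling comment: the two `x`s do refer to different values, but each half of the cord resolves names in its own context, so the appended program still behaves deterministically.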

concrete example

OK perhaps you can have hidden lexical scope pointers and use a cord. But this seems to create a pretty confusing language. For a simple example suppose we have a Joy-like language plus an operator 'let' that takes a value, a quoted symbol, and another quotation which is evaluated in an environment where that symbol is bound to the value.
2 [x] [x 1 +] let

so that the above returns 3. now:


1 [x] [[x]] let [x x +] 2 [x] [[x +]] let concat

this returns the concatenated function: however each x is bound to a different value.
[x x +] i

returns 3 By James McCartney at Thu, 2005-08-11 19:13 | login or register to post comments

there is no lexical scope dur


there is no lexical scope during evaluation. that syntax is a very simple "template" for rewriting the stack (actually, stacks). so you'd define dup as:
{ dup | x } => { | x x }

there's some syntactic sugar involving implicit quoting that means you can write that as
{ dup | x } => { x x }

but that's beside the point. all that the otuto definitions describe is how you rearrange the stack(s). you could implement them by cross-compiling to an intermediate representation without variable names (using swap, roll etc etc) (i believe - i've never bothered to prove it, but i would be amazed if it were not true). i'm not 100% convinced i understand what concatenative is, so i can't say otuto is concatenative, but i'm pretty sure that those variable names are just sugar. a long time back, when i mentioned this on the joy email list, someone added something similar to another language there. By andrew cooke at Fri, 2005-08-12 22:01 | login or register to post comments

otuto
As I understand it, if a language is concatenative then functions are sequences of operators where you can take any function and split it at any point and you have two valid functions, or you can join two functions and have a valid function. I'm not sure what it would mean to split or join functions in otuto, since they are rules with 4 separate sections, not simple sequences of operations. By James McCartney at Sat, 2005-08-13 16:32 | login or register to post comments

well, the rules are restricte

well, the rules are restricted - they always match a single value on the left hand stack. so the top value on that stack can be interpreted as the current instruction. if you then take the right hand stack as "values" then you can think of a rule as saying how to apply the instruction (top of the left stack) to the values. ok, now if you continue with that, a definition like:
{ foo | a } => { bar baz | a a }

is equivalent to defining foo as bar baz dup in a concatenative language. now if you accept, as i argued above, that the variables are just syntactic sugar, then that implies, i think, that there is a simple mapping from otuto to concatenative languages. (this breaks down with meta-programming, though, since you can change the assumptions made above). By andrew cooke at Sun, 2005-08-14 01:43 | login or register to post comments

concatenative is ..
A language in which the concatenation of programs denote the composition of functions has been called a concatenative language. I first heard this terminology from Billy Tanksley. -- Manfred von Thun otuto is concatenative. so is XY. Here's one version of quadratic in that language, which uses names to similar effect (that is, to map items from the stack to multiple locations in the body of the code):
; quadratic { [a b c] b -: a c * 4 * b 2 ^ -. .5 ^ [1 -1] * + a 2 * % } ;

regarding your proposal for cross-compilation, i think it would be sufficient to compile programs like quadratic, in which names are used to map items from the stack to multiple locations, to programs which consist of an initial reordering of items on the stack, followed by code which is name-free. Billy Tanksley has suggested a notation (supported in XY) in which an initial "shufflepattern" is followed by name-free code. For example, compile
{ [a b c] a b * a c * + }

to
abc--abac * [*] dip +

[edited to show an example of compilation from pattern-notation to shuffle-notation] By sa at Sat, 2005-08-13 23:37 | login or register to post comments
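The shuffle-pattern notation can be sketched in Python (a hypothetical `shuffle` helper of my own, not XY's implementation): the letters before `--` name the stack items consumed, the letters after name the order they are pushed back, so a shuffle is just an index permutation with possible duplication.

```python
def shuffle(pattern):
    """Compile a pattern like 'abc--abac' into a stack-rearranging function."""
    inputs, outputs = pattern.split("--")
    def apply(stack):
        depth = len(inputs)
        taken, rest = stack[-depth:], stack[:-depth]
        env = dict(zip(inputs, taken))          # bind each name to an item
        return rest + [env[name] for name in outputs]
    return apply

print(shuffle("abc--abac")([10, 20, 30]))       # [10, 20, 10, 30]
```

Since the names exist only inside the pattern, they are exactly the kind of syntactic sugar discussed above: the compiled function is name-free.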

Monads as concatenative PLs?


If we squint our eyes enough to see bind as concatenation, then how are monadic expressions not concatenative? Say, in imperative PLs one can concatenate statements (usually using ';'), and quite often "imperative" means working in some compound monad. Well, I might understand "composition of functions" a bit wrongly... By Andris Birkmanis at Mon, 2005-08-15 06:03 | login or register to post comments

Squinting
I don't think squinting is necessary at all. In fact, lately I've been thinking that there might be a benefit for newbies by writing a monad tutorial using a language like Joy. Subroutines (quotations) in Joy are lists of functions and you interpret them by folding in a 'compose' and applying to a stack...
fold compose id [1 4 dup * +]

Replace 'compose' with 'bind' for your favorite monad, and I think you might be on your way to removing much of the mystery behind monads (or at least you've given them an opportunity to see monads that aren't tied up in Haskell syntax and type-classes). By Greg Buchholz at Mon, 2005-08-15 14:25 | login or register to post comments
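Greg's fold-of-compose observation, rendered in Python (my own sketch; the Maybe-style `bind` at the end is an illustrative assumption, not anything in Joy):

```python
from functools import reduce

def compose(f, g):
    return lambda s: g(f(s))

# the quotation [1 4 dup * +] as a list of stack -> stack functions
quot = [lambda s: s + [1],
        lambda s: s + [4],
        lambda s: s + [s[-1]],                  # dup
        lambda s: s[:-2] + [s[-2] * s[-1]],     # *
        lambda s: s[:-2] + [s[-2] + s[-1]]]     # +

run = reduce(compose, quot, lambda s: s)        # "fold compose id"
print(run([]))                                  # [17]

# same fold shape, but with a Maybe-style 'bind' instead of 'compose':
def bind(f, g):
    return lambda x: None if f(x) is None else g(f(x))

safe = reduce(bind, [lambda n: n + 1,
                     lambda n: None if n == 0 else 10 // n])
print(safe(4))    # 2
print(safe(-1))   # None -- the failure short-circuits, no exception
```

The only thing that changed between the two folds is the combining operator, which is the point: a quotation interpreter and a monadic pipeline have the same shape.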

They recently made much more


They recently made much more sense to me when I combined the two pieces of information that monads are (1) an algebraic structure that (2) forces a particular evaluation order. None of the tutorials on the web I have read, even, I think, some Simon Peyton-Jones stuff, really mentions both of those facts, then illustrates with reference to examples like lists with mapping, and factorial, which are familiar functions that, ignoring associativity information, in their common recursive definitions have a defined evaluation order. By Marcin Tustin at Mon, 2005-08-15 22:57 | login or register to post comments

Looks applicative to me
I don't know whether otuto as a whole is applicative or a hybrid, but the above code is definitely applicative. (RPN vs prefix vs infix is a separate, and much less significant, issue.) By dhopwood at Thu, 2005-08-11 01:48 | login or register to post comments

The code is not applicative,


The code is not applicative; it's just reverse RPN (i.e., plain Polish notation), and it uses named variables. There is still a stack.

By Daniel Ehrenberg at Thu, 2005-08-11 16:14 | login or register to post comments

Re: A comment about concatenative languages in general


I consider concatenative languages suitable for high-level programming; in fact, I am the lead developer of the Factor programming language. As always, the burden of bridging the gap between the programmer and the language lies squarely on the development tools at one's disposal, and the simplicity of concatenative languages allows one to develop such tools more easily than with other languages. A meta-circular interpreter is a page of code in Factor, and it is not an academic exercise; it is used to implement a single-stepper. I'd like to see a single-stepper in another language that is equally clear. There is a static stack effect inferencer tool, also used by the compiler. The compiler takes maximal advantage of the simple semantics to implement the concatenative equivalent of "lambda lifting", and various other optimizations such as partial evaluation. I'm not sure what you mean by "How can you build reusable encapsulated abstractions?". If you look at Factor's library, you will see a very high degree of abstraction and code reuse. There is no "boilerplate" code. Re: "writing a Turing machine program"; after working with Factor's libraries for sequences and mathematics, going back to a language like Java with annoyingly hand-coded boilerplate loops seems more like a Turing machine. Your emphasis on the word "seem" suggests that you don't believe that concatenative languages are used to develop non-trivial applications. Well, notwithstanding the clear fact that there is enough commercial demand to keep several Forth vendors in business, Factor has an HTTP server with a continuation-based web framework, a GUI toolkit in progress, various development tools, and more. This is a richer level of functionality than offered by most languages discussed on this web site, so ironically, one could even conclude that applicative languages are totally non-productive, and not even capable of implementing their own optimizing native compilers.
By slava at Wed, 2005-08-10 06:30 | login or register to post comments

Re: A comment about concatenative languages in general


... the simplicity of concatenative languages allows one to develop such tools more easily than with other languages. While it's true that there are few complex concatenative languages, there are plenty of equally simple applicative ones. I'm not particularly interested in comparing, say, Factor with Java; I'm interested in the effects of concatenative vs. applicative semantics for otherwise comparable languages.

[...] an HTTP server with a continuation-based web framework, a GUI toolkit in progress, various development tools, and more [...] is a richer level of functionality than offered by most languages discussed on this web site Even though this web site disproportionately discusses research languages and prototype language implementations, still I think that assertion is very likely false. And also beside the point, since the maturity of language implementations is not the topic of discussion. ... one could even conclude that applicative languages are totally non-productive, and not even capable of implementing their own optimizing native compilers. This is just trolling. By dhopwood at Wed, 2005-08-10 14:31 | login or register to post comments

Irrelevant
Even though this web site disproportionately discusses research languages and prototype language implementations, still I think that assertion is very likely false. And also beside the point, since the maturity of language implementations is not the topic of discussion.

"Research language" is a euphemism for "useless language". I'm really only interested in practical concerns. I write code, and I like to write stable code quickly, and I find that concatenative languages let me do this just as easily as applicative ones, with the additional advantage that writing tools is easier. You can discuss semantics, static type systems, weakly final coalgebras and other academic concerns until you are blue in the face, but it will not change the fact that for me, thinking in terms of a stack is not a problem. By slava at Wed, 2005-08-10 16:08 | login or register to post comments

For me...
You can discuss semantics, static type systems, weakly final coalgebras and other academic concerns until you are blue in the face, but it will not change the fact that for me, thinking in terms of a stack is not a problem.

Perhaps. However, the next person who has to deal with your code may indeed have a problem. I could truthfully say "for me, dealing with unrestricted pointers in C is not a problem", but the broader consequences of everyone holding that opinion are not pretty. By Anton van Straaten at Wed, 2005-08-10 16:13 | login or register to post comments

Research is a Good Thing


"Research language" is a euphemism for "useless language".

And where do you think the ideas for features in 'mainstream' languages come from?

By dhopwood at Thu, 2005-08-11 02:07 | login or register to post comments

things Factor can do


While it's true that there are few complex concatenative languages, there are plenty of equally simple applicative ones. I'm not particularly interested in comparing, say, Factor with Java; I'm interested in the effects of concatenative vs. applicative semantics for otherwise comparable languages.

The point is, you can't do a lot of the stuff Factor does in an applicative language. I can't imagine Factor being applicative. The closest parallel I can find is Common Lisp, but there are many things about Factor that make it much better. One example that comes to mind is the word watch. The implementation isn't more than 3 or 4 lines, and it prints out the name of the word and the contents of the datastack when the word is called. It's been really useful for me, and I've been able to get rid of all the debugging prints in my code. One time, I needed to see what was on the stack when leaving the word, so I made a new watch-after word. It also took only 4 lines and not much thought. In Common Lisp, this would probably take the form of a new macro for defining a function or macro, but that technique is less flexible. In Haskell or ML, making a debugging tool like this is even harder, involving complicated templating libraries.

Even though this web site disproportionately discusses research languages and prototype language implementations, still I think that assertion is very likely false. And also beside the point, since the maturity of language implementations is not the topic of discussion. I, unlike Slava, am OK with research languages. They're good for research. As lothed as Slava is to admit it, Factor isn't really suitable for most applications yet. Of course, it is intended for use, but Factor's libraries aren't mature enough, especially crucial ones like the GUI library. Factor has a pretty good FFI, but that's not because it's concatenative. We really shouldn't discuss this, though, you're right.

This is just trolling. [about applicative languages being nonproductive] Yeah By Daniel Ehrenberg at Wed, 2005-08-10 19:38 | login or register to post comments

Fun with stacks


One example that comes to mind is the word watch. The implementation isn't more than 3 or 4 lines, and what it does is it prints out the name of the word and the contents of the datastack when the word is called.

Obviously, anything which involves dealing directly with the stack will be easiest in a language which gives you direct access to the stack. The question is how many things fall into this category, and whether they're worth the consequences of allowing direct manipulation of the stack, and in what situations. For example, Forth-like languages are often used in constrained embedded environments, in which case some of these concerns may be balanced by other advantages of these languages.

BTW, Common Lisp and many Schemes have something like a "trace" macro that provides similar functionality to the word watch, although the implementation of that is more than 3 or 4 lines. They usually print the incoming arguments and the return value, i.e. both the entry and exit from a traced function. By Anton van Straaten at Wed, 2005-08-10 22:48 | login or register to post comments
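The behavior described here — printing the incoming arguments on entry and the return value on exit — is easy to mimic in any language with higher-order functions. A minimal Python sketch of the idea (illustrative only; the names are mine, and this is not how any particular Lisp implements trace):

```python
import functools

def trace(fn):
    """Report a function's arguments on entry and its result on exit,
    roughly what Lisp's trace or Factor's watch provide."""
    @functools.wraps(fn)
    def wrapper(*args):
        print(f"entering {fn.__name__} with {args}")
        result = fn(*args)
        print(f"leaving {fn.__name__} with {result}")
        return result
    return wrapper

@trace
def add(a, b):
    return a + b

add(1, 2)  # prints the entry/exit lines and returns 3
```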

Re: Fun with stacks


The question is how many things fall into this category, and whether they're worth the consequences of allowing direct manipulation of the stack, and in what situations.

That's actually a good point, but let me turn that question around: how many things fall into the category where you need lexically named variables to do certain things? I used to think many, but you can really do everything without them (though in a few circumstances, dynamically scoped variables help). The concatenative paradigm is not just a way to manipulate the stack directly, it's a whole different paradigm of programming where some things are more elegant and others less. After a while, it tends to feel like most things are more elegant, but from your perspective, everything will look awkward at first. For an example of something which comes out really great, here's the heart of the Factor compiler:
M: compound (compile) compiling dataflow optimize linearize simplify generate ;

Yes, I know that's a terrible argument, but I don't care. It's a reason I use Factor anyway. At this point, rather than feeling that concatenative languages are backwards, I feel like they're in the exact right order. They do reflect the order of evaluation, however important or unimportant that is.

For example, Forth-like languages are often used in constrained embedded environments, in which case some of these concerns may be balanced by other advantages of these languages.

Well, Factor isn't really aimed at constrained embedded environments; it's more for general programming, which can be done better because of its abstraction capabilities, some of which rest on its being concatenative.

BTW, Common Lisp and many Schemes have something like a "trace" macro that provides similar functionality to the word watch, although the implementation of that is more than 3 or 4 lines. They usually print the incoming arguments and the return value, i.e. both the entry and exit from a traced function.

Not only does it take more code to do such a simple task, but you have to alter the original code to do it. In Factor, at the repl, you can just enter one short line of code to watch a word (\ word-name watch). One thing about Factor that's great but unrelated to its concatenativity is that you can mutate stuff like word bodies, though you don't normally do this outside of debugging. There's probably an applicative language that does that. By Daniel Ehrenberg at Thu, 2005-08-11 02:24 | login or register to post comments

Fun with folds


how many things fall into the category where you need lexically named variables to do certain things?

"Need"? None. However, the number of situations in which having lexically named variables is useful is high, IME.

I used to think many, but you can really do everything without them

Similarly, that you "can" do everything without them is not what I'm concerned with. Turing completeness tells me it's possible; the more important questions are how practical it is, how easy it is to deal with code written by other people, etc.

The concatenative paradigm is not just a way to manipulate the stack directly, it's a whole different paradigm of programming where some things are more elegant and others less. After a while, it tends to feel like most things are more elegant, but from your perspective, everything will look awkward at first.

I implemented and maintained Forth programs for a couple of years. It was fun, but part of the reason I used Forth was because it was in a small, embedded-like control environment, where the alternatives were languages like C and assembler. The points I've raised are not about things being backwards or seeming awkward.

For an example of something which comes out really great, here's the heart of the Factor compiler:

M: compound (compile) compiling dataflow optimize linearize simplify generate ;

I agree, it's great when something works out in a way that seems intuitive. However, my experience is that different things can seem intuitive for different problems. If I wanted to write an expression like the above, I could do it quite easily in many higher-order functional languages, using a fold over a list of procedures, which could easily be made to look something like, e.g. in Scheme:
(concatenate input compiling dataflow optimize linearize simplify generate)

Shriram Krishnamurthi's talk The Swine Before Perl provides a nice example of how powerful abstraction capabilities can help make code understandable. It'd be interesting to see a Factor (or Kont?) version of that program, for comparison.

Well, Factor isn't really aimed at constrained embedded environments; it's more for general programming, which can be done better because of its abstraction capabilities, some of which rest on its being concatenative.

Since stacks are irrelevant to most problems, unless you can abstract its use away, the need to use one seems to me like a hit against a language's abstraction capabilities.

Not only does it take more code to do such a simple task but you have to alter the original code to do it. In Factor, at the repl, you can just enter one short line of code to watch a word (\ word-name watch).

The Lisp/Scheme equivalent, at the repl, is just one short line of code, (trace procname). No alteration of the original code is required. By Anton van Straaten at Thu, 2005-08-11 04:04 | login or register to post comments

Concatenate
If I wanted to write an expression like the above, I could do it quite easily in many higher-order functional languages, using a fold over a list of procedures, which could easily be made to look something like, e.g. in Scheme:
(concatenate input compiling dataflow optimize linearize simplify generate)

How is this any different from embedding an infix language in Factor?

The Lisp/Scheme equivalent, at the repl, is just one short line of code, (trace procname). No alteration of the original code is required.

But the implementation of Common Lisp's trace is more complicated than the Factor equivalent. By slava at Thu, 2005-08-11 04:45 | login or register to post comments

I don't care enough to make a good title


"Need"? None. However, the number of situations in which having lexically named variables is useful is high, IME.

Please provide me with a situation where the solution with variables is much better. I know one, math, but that situation seems to be isolated, so I made a minilanguage (which I'm currently redoing) to use infix syntax for math. Do you think this is a more general problem than that? I'm seriously interested in this.

(concatenate input compiling dataflow optimize linearize simplify generate)

I don't think that code would work, since the stack effects of many of those words are not ( a -- b ). The only way to make it work would be to have each of those functions take a stack and return a stack, but that's just like implementing a lightweight concatenative language on top of Scheme. Incidentally, I believe someone has done that with Common Lisp...

Since stacks are irrelevant to most problems, unless you can abstract its use away, the need to use one seems to me like a hit against a language's abstraction capabilities.

You're right, I guess; there are only a few cases where the stack really matters. It matters like lexically scoped variables matter in normal code, which really is irrelevant to abstraction. But then there are properties of Kont and Factor (but not Forth) that lend themselves to more abstraction, like the use of combinators to replace macros in languages like Lisp and built-in syntax like C. Combinators to the programmer look like ordinary code that just happens to do something with a quotation, and they do function when run like that by the interpreter. But then the compiler inlines them to make them as fast as a macro. I'm not explaining them very well, though.

The Lisp/Scheme equivalent, at the repl, is just one short line of code, (trace procname). No alteration of the original code is required.

Really? Oh, ok, then never mind that point. I don't have much experience with Lisp development tools. By Daniel Ehrenberg at Thu, 2005-08-11 16:07 | login or register to post comments

This is not a title


Please provide me with a situation where the solution with variables is much better. I know one, math, but that situation seems to be isolated, so I made a minilanguage (which I'm currently redoing) to use infix syntax for math. Do you think this is a more general problem than that? I'm seriously interested in this.

Yes, it's a much more general problem. See my other, longer response, entitled "reference sense pop".

I don't think that code would work since the stack effects of many of those words are not ( a -- b ). The only way to make it work would be to have each of those functions take a stack and return a stack, but that's just like implementing a lightweight concatenative language on top of Scheme. Incidentally, I believe someone has done that with Common Lisp...

Uh-oh, you're showing signs of being infected by stack-oriented thinking, which is one of the problems that can be caused by using a language which forces you to think in terms of its implementation details all the time. It doesn't require that "each of those functions take a stack and return a stack". For a start, procedures in most applicative languages can receive multiple values using ordinary named parameters, and multiple values can typically be returned in various ways, such as lists, tuples, or other kinds of collection. A Scheme version which uses lists applied to ordinary procedures with named arguments is as simple as this:
(define (concatenate input-args . procs) (fold apply input-args procs))

Since stacks are irrelevant to most problems, unless you can abstract its use away, the need to use one seems to me like a hit against a language's abstraction capabilities. You're right, I guess; there are only a few cases where the stack really matters. It matters like lexically scoped variables matter in normal code, which really is irrelevant to abstraction.

Named variables are certainly not irrelevant to abstraction. Again, see my other reply.

But then there are properties of Kont and Factor (but not Forth) that lend themselves to more abstraction, like the use of combinators to replace macros in languages like Lisp and built-in syntax like C. Combinators to the programmer look like ordinary code that just happens to do something with a quotation, and they do function when run like that by the interpreter. But then the compiler inlines them to make them as fast as a macro. I'm not explaining them very well, though.

I agree, those kind of high level features (borrowed from functional programming! ;) are useful, and certainly help. However, they work just as well in the presence of lexical names. (There's a difference in the way inlining tends to work, but the end result is similar.) By Anton van Straaten at Fri, 2005-08-12 04:55 | login or register to post comments
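The list-based `concatenate` under discussion composes stages that each accept the previous stage's outputs and return a list of new outputs. The same trick, sketched in Python with `functools.reduce` standing in for `fold` (toy stages of my own invention, not anyone's actual compiler pipeline):

```python
from functools import reduce

def concatenate(input_args, *procs):
    # Each stage receives the previous stage's outputs (a list of values)
    # and returns a new list, so stages of any arity compose cleanly.
    return reduce(lambda args, proc: proc(*args), procs, input_args)

# Toy stages with differing arities:
split  = lambda n: [n // 10, n % 10]   # one value in, two out
add    = lambda a, b: [a + b]          # two in, one out
double = lambda a: [a * 2]             # one in, one out

concatenate([42], split, add, double)  # → [12]
```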

Infected
Uh-oh, you're showing signs of being infected by stack-oriented thinking,

What a great way to refute somebody's point! This is just like putting your hands over your ears and saying "neener, neener, I'm not listening". Perhaps you've been infected by Scheme-oriented thinking? By slava at Fri, 2005-08-12 18:15 | login or register to post comments

Don't stop reading yet...


The sentence after the one you quoted contains the refutation.

Regarding the "infected" comment, I hope LittleDan didn't take that too personally; it was certainly intended lightheartedly. I should have put a smiley after it. However, it's also a serious point. When you find yourself saying things like "The only way to make it work would be to have each of those functions take a stack and return a stack", talking about such a specific structure as a stack when the problem is in fact a much more general one, that's a clear sign that you're thinking too much in terms of implementation and too little in terms of specification. It's an easy mistake to make, and I suspect we all do it to some degree from time to time.

However, there's an additional risk in using a language which forces you to think in terms of a single datastructure everywhere, which is that you start conceiving of all problems in terms of that datastructure. I'd be especially concerned about exposing inexperienced programmers to languages that require this kind of thinking, because you'd be ingraining bad habits in them from the very beginning. By Anton van Straaten at Fri, 2005-08-12 19:02 | login or register to post comments

The Swine Before Perl


Shriram Krishnamurthi's talk The Swine Before Perl provides a nice example of how powerful abstraction capabilities can help make code understandable. It'd be interesting to see a Factor (or Kont?) version of that program, for comparison.

I started to implement that Swine Before Perl program, but after doing the non-macro part, which came out a bit cleaner than the Scheme program IMHO, I really felt like this is enough. What benefit do you derive from making that a macro? In Factor, syntax extension isn't as common as in Scheme because it's not needed as much, and in Kont, there is no syntax extension. If you're making that macro for a speed boost, that's just showing that you have a bad implementation of the language. I still made a separate syntax for defining the state machine, though. The code is below.
IN: state-machine
USING: kernel hashtables sequences lists ;

SYMBOL: init
SYMBOL: loop
SYMBOL: end

: c[ad]+r #! Initial datastructure
    {{
        [[ init {{ [[ CHAR: c loop ]] }} ]]
        [[ loop {{
            [[ CHAR: a loop ]]
            [[ CHAR: d loop ]]
            [[ CHAR: r end ]]
        }} ]]
        [[ end {{ [[ CHAR: r end ]] }} ]]
    }} ;

: walk-states ( seq states -- ? )
    init swap rot [ >r ?hash r> swap ?hash ] each-with end = ;

: {| POSTPONE: {{ POSTPONE: [[ ; parsing
: :: POSTPONE: {| ; parsing
: -> ; parsing
: || POSTPONE: ]] POSTPONE: [[ ; parsing
: ;; POSTPONE: ]] POSTPONE: }} POSTPONE: ]] POSTPONE: [[ ; parsing
: |} POSTPONE: ]] POSTPONE: }} POSTPONE: ]] POSTPONE: }} \ walk-states swons ; parsing

: final-machine
    {| init :: CHAR: c -> loop ;;
       loop :: CHAR: a -> loop ||
               CHAR: d -> loop ||
               CHAR: r -> end ;;
       end :: CHAR: r -> end |} ;

Is there something wrong with that code? It seems to be a bit shorter than the Scheme code. Edit: as Slava correctly pointed out on the #concatenative channel, those cosmetic macros are completely stupid and the first version is really the one that should preferably be used. By Daniel Ehrenberg at Thu, 2005-08-11 19:43 | login or register to post comments
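For readers following along, the hashtable-of-hashtables encoding and the walk-states traversal above map fairly directly onto Python dictionaries. A sketch of the same idea for comparison (a paraphrase, not a port of the Factor semantics):

```python
# States keyed by name; transitions as nested dicts, mirroring the
# {{ [[ ... ]] }} hashtable literals in the Factor code.
MACHINE = {
    "init": {"c": "loop"},
    "loop": {"a": "loop", "d": "loop", "r": "end"},
    "end":  {"r": "end"},
}

def walk_states(seq, states):
    state = "init"
    for ch in seq:
        state = states.get(state, {}).get(ch)  # None on a missing transition
        if state is None:
            return False
    return state == "end"

walk_states("cadar", MACHINE)  # → True
walk_states("cax", MACHINE)    # → False
```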

btw
here's how to activate it

"cadar" final-machine

or
"cadar" c[ad]+r walk-states

By Daniel Ehrenberg at Thu, 2005-08-11 20:18 | login or register to post comments

too much code


dan, that's way too much code. so is krishnamurthi's scheme implementation.

a state machine is a matrix. rows are states and columns are inputs. in this case, the machine has four rows corresponding to the four states: error, init, more, end; and 256 columns, corresponding to possible ascii inputs. if the machine is in state i and sees input j, then it should enter state k. using two dimensional scalar indexing, k = machine[i;j]. the algorithm should consume inputs and index out the next state, leaving the sequence of states visited as its result (a vector of indices). repeated indexing in k is implemented as:
result:index matrix\indices

i.e.
k0:matrix[index;indices 0]
k1:matrix[k0;indices 1]
k2:matrix[k1;indices 2]
:

in k, the algorithm is realized as:


s:@'[4 256#0;_ic("";"c";"adr";"r");:;(!0;2;2 2 3;3)]
m:{1 s\_ic x}

e.g.
  m'("car";"cdr";"cadr";"cddr";"cdar";"caddr")
(1 2 2 3
 1 2 2 3
 1 2 2 2 3
 1 2 2 2 3
 1 2 2 2 3
 1 2 2 2 2 3)

a 0 in the return vector indicates that m has entered the error state. on my pentium 4, 3 ghz machine timings are:
10k elements: 0 ms
100k elements: 15 ms
1m elements: 187 ms

state-machine is just a special case of index-scan, which in turn is a special case of scanning a function on its input. if you understand +, and you understand summation as repeated application of +, and you understand indexing, then it's a small step to knowing how to code up a state-machine. By sa at Fri, 2005-08-12 15:52 | login or register to post comments
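The matrix formulation above translates line for line into other languages: a states x inputs table, and recognition as repeated indexing that records every state visited. A Python sketch of the same idea (list-of-lists in place of the K matrix; state 0 = error, 1 = init, 2 = more, 3 = end):

```python
def make_table():
    # 4 states x 256 inputs; every unlisted transition is the error state 0.
    t = [[0] * 256 for _ in range(4)]
    t[1][ord("c")] = 2   # init --c--> more
    t[2][ord("a")] = 2   # more --a--> more
    t[2][ord("d")] = 2   # more --d--> more
    t[2][ord("r")] = 3   # more --r--> end
    t[3][ord("r")] = 3   # end  --r--> end
    return t

def scan(table, s):
    # The "1 table\input" scan: start in state 1, index repeatedly,
    # and keep the whole trajectory of states visited.
    states = [1]
    for ch in s:
        states.append(table[states[-1]][ord(ch)])
    return states

scan(make_table(), "cadr")  # → [1, 2, 2, 2, 3]
```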

a state machine is a matrix?


a state machine is a matrix? By Michael Walter at Fri, 2005-08-12 18:19 | login or register to post comments

FSMs are graphs, which have m


FSMs are graphs, which have matrix representations. Sloppy language, but hey. By Philippa Cowderoy at Sat, 2005-08-13 11:29 | login or register to post comments

but what are they really?


hey indeed. my point was that recursive 2d indexing captures the behavior of fsms, no matter what they "really" are, if, indeed, they are anything at all beyond how they are represented in different programming languages. no need to follow that long and winding road into the metaphysics of abstract objects. obligatory reading: paul benacerraf, "what numbers could not be." By sa at Sat, 2005-08-13 22:30 | login or register to post comments

Repent, sinners
#include <stdio.h>
int main(int argc, char **argv)
{
    char S[4][256] = {{0}, {['c']=2}, {['a']=2, ['d']=2, ['r']=3}, {['r']=3}}, s = 1;
    if (argc > 1) {
        char *p = argv[1];
        while (*p) s = S[(int)s][(unsigned char)*p++];
    }
    puts(s == 3 ? "yes" : "no");
    return 0;
}

Simplicity > Abstraction.

By Mackenzie Straight at Fri, 2005-08-12 17:16 | login or register to post comments

Understandable? Flexible?

That's great guys; unfortunately, it seems to miss the point fairly completely. Recall, Anton's pointer to the talk said (emphasis mine): Shriram Krishnamurthi's talk The Swine Before Perl provides a nice example of how powerful abstraction capabilities can help make code understandable.

Neither the C, K, nor Factor code is quite as understandable as that presented in slide 38 of Shriram's talk. What's more (and probably more important), neither is as flexible as what he presents either. What he's shown[1] is a simple syntax extension for generating (fairly efficient) automata from these simple, clear descriptions via macros. Given the spec for another automaton, your solutions would have you hard-coding your new state tables, while Shriram's would have you quickly done with the code and on to some other problem.

[1] At least that's what I believe he's shown... I'm only working with the slides, and I know just enough Scheme to get myself in trouble -- feel free to taunt me if I'm wrong. One of these days I'll have to get over the Lisp allergy I picked up when I was a wee lad and really give Scheme a good look.

-30-

By William D. Neumann at Fri, 2005-08-12 17:50 | login or register to post comments

Flexible?
The Factor code that Dan showed is equivalent to the Scheme code. I'd like you to explain why you think that this:
(init : (c -> more))
(more : (a -> more)
        (d -> more)
        (r -> end))
(end : (r -> end))

is any more flexible than this:


: c[ad]+r
    {{
        [[ init {{ [[ CHAR: c loop ]] }} ]]
        [[ loop {{
            [[ CHAR: a loop ]]
            [[ CHAR: d loop ]]
            [[ CHAR: r end ]]
        }} ]]
        [[ end {{ [[ CHAR: r end ]] }} ]]
    }} ;

The Factor version has a bit more punctuation, because Dan did not extend the syntax; he just used Factor's hashtable literals. Also, he uses character literals instead of symbols. And for what it's worth, I consider the omission of hashtables from the Scheme standard to be an unacceptable limitation. I don't like the way typical Scheme code is written, using lists (including alists) and symbols for everything. It might look pretty, but it just doesn't scale as well as other data structures for most tasks.

By slava at Fri, 2005-08-12 18:00 | login or register to post comments

Faster than I am...


I was just about to leave a note that the Factor version may be just as flexible; because I am completely unfamiliar with it, I couldn't figure out exactly what was going on. I more or less figured out what he was doing after I posted the message. I stand corrected there...

-30-

By William D. Neumann at Fri, 2005-08-12 18:08 | login or register to post comments

More confusion?
Unless I'm confused (which happens on a regular basis), I believe you're wrong. The Factor equivalent of this (from pg. 38)...
(automaton init
  (init : (c -> more))
  (more : (a -> more)
          (d -> more)
          (r -> end))
  (end : (r -> end)))

...looks like...
: final-machine
    {| init :: CHAR: c -> loop ;;
       loop :: CHAR: a -> loop ||
               CHAR: d -> loop ||
               CHAR: r -> end ;;
       end :: CHAR: r -> end |} ;

...not a lot of difference to my eyes. By Greg Buchholz at Fri, 2005-08-12 18:00 | login or register to post comments

understandable. flexible.
i don't think i missed either point. for a machine with n states, use an n x 256 matrix m. for each (state : input -> nextstate), assign
m[state;input]:nextstate

to run on input i:
r:1 m\i

setting up a particular machine can be done line-by-line, as in the scheme example, or it can be done all at once, as my example shows. no macros, no special syntax. if state-machines are matrices, and recognition is recursive two-dimensional indexing, then implement it as such, as directly as possible. By sa at Fri, 2005-08-12 18:55 | login or register to post comments

Simplicity
This particular task does not even call for a state machine at all:
: swine ( sequence -- ? )
    "c" ?head >r "r" ?tail r> and
    [ "ad" contained? ] [ drop f ] ifte ;

By slava at Fri, 2005-08-12 17:52 | login or register to post comments
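The head/middle/tail shortcut above works in any language. A Python sketch written against the c[ad]+r spec the thread is using (it rejects "cr", since [ad]+ demands at least one character; whether the Factor word above does likewise depends on how contained? treats an empty sequence):

```python
def swine(s):
    # Accept c[ad]+r: a leading "c", a trailing "r", and a non-empty
    # middle drawn only from the characters "a" and "d".
    if len(s) < 3 or not (s.startswith("c") and s.endswith("r")):
        return False
    return all(ch in "ad" for ch in s[1:-1])

swine("cadar")  # → True
swine("cr")     # → False
```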

... one could even conclude t


... one could even conclude that applicative languages are totally non-productive, and not even capable of implementing their own optimizing native compilers. This is just trolling.

It's hyperbole to show up the fact that there are different expressivity/productivity/comprehensibility wins with concatenative and applicative languages. Let's not start pointing fingers and crying "troll" at everyone using a modicum of rhetoric, even if this is a phenomenon I might have contributed to.

However, I suspect that the difficulty of using a form of functional programming without named parameters is a significant problem for many people. I certainly find it difficult enough, because it taxes my memory despite my ability to remember API docs, that I have been discouraged from exploring concatenative programming. It might be interesting to know if there are any patterns that can be used to fake named parameters while maintaining concatenability. By Marcin Tustin at Thu, 2005-08-11 14:26 | login or register to post comments

Let's not start pointing fing


Let's not start pointing fingers and crying "troll" at everyone using a modicum of rhetoric, even if this is a phenomenon I might have contributed to.

Making oneself look completely ignorant is rhetoric? By Derek Elkins at Fri, 2005-08-12 17:53 | login or register to post comments

As I said, I interpreted his

As I said, I interpreted his comment that "one might conclude..." as hyperbole. I wasn't commenting on his other posts. By Marcin Tustin at Sat, 2005-08-13 08:50 | login or register to post comments

Wow, an improvement on Java boilerplate?


Factor has an HTTP server with a continuation-based web framework, a GUI toolkit in progress, various development tools, and more. This is a richer level of functionality than offered by most languages discussed on this web site, so ironically, one could even conclude that applicative languages are totally non-productive, and not even capable of implementing their own optimizing native compilers.

If you expect to be taken seriously, you might want to tone down the rhetoric. I don't know what you're thinking of here, but this sounds completely false. Perhaps by "most languages discussed on this web site", you're thinking only of announcements of brand-new or toy languages?

Re: "writing a Turing machine program"; after working with Factor's libraries for sequences and mathematics, going back to a language like Java with annoyingly handcoded boilerplate loops seems more like a Turing machine.

What I take from this is that you find Java so restrictive that you're willing to put up with other serious limitations in concatenative languages to escape it. What about all the much more expressive applicative languages, though? Scheme, Haskell, ML? Should we take your comparison to Java as a tacit acknowledgement that concatenative languages can't really compete with more expressive applicative languages, except perhaps in certain specialized domains? By Anton van Straaten at Wed, 2005-08-10 16:08 | login or register to post comments

What serious limitations?


I'd say Haskell and ML's lack of a live environment with in-image development tools is a serious limitation. I've tried Scheme and far prefer Common Lisp. I have never developed the same application in CL and Factor side by side, except for porting a few trivial things back and forth, so I cannot comment on the merits of CL versus concatenative languages, although I can say Factor's sequences library is cleaner and more generic than CL's. By slava at Wed, 2005-08-10 16:18 | login or register to post comments

Repeat after me...


I'd say Haskell and ML's lack of a live environment with in-image development tools is a serious limitation.

language implementation != language
language != language implementation

Toplevel
I was just going to say that I quite enjoy O'Caml's toplevel(s). :-) By Paul Snively at Thu, 2005-08-11 03:16 | login or register to post comments

Slipping?
Paul, I'm disappointed. You forgot to mention Tuareg mode. ;oP Of course, GHC and SML/NJ also have nice toplevels. I wouldn't be surprised to find IDEs for them, although I've never looked. By Anton van Straaten at Thu, 2005-08-11 03:49 | login or register to post comments

Type systems
language implementation != language

Yeah, but Haskell's and ML's type systems mean that fewer things can be done at runtime, so development tools are more complicated and less useful. PLEEEASE don't get into a type system debate here, I just wanted to say that typing makes things more complicated. By Daniel Ehrenberg at Thu, 2005-08-11 16:13 | login or register to post comments

hs-plugins certainly makes an


hs-plugins certainly makes an interesting tool in that context By Philippa Cowderoy at Thu, 2005-08-11 16:41 | login or register to post comments

Not so much due to types


I'd say it's only secondarily due to their type systems. A more fundamental factor is their higher degree of purity compared to Lisp, Smalltalk and the likes, which makes "interactive update" a more alien concept. By Andreas Rossberg at Sat, 2005-08-13 12:09 | login or register to post comments

Foreign?

I can't imagine writing any nontrivial program in a concatenative language; it seems like a tour de force comparable to writing a Turing machine program, or directly in the lambda calculus using de Bruijn indices.

Borrowing an example from another thread...
class.method().attribute.method( obj.attribute, obj.method() ).getSomething();

...Is it just me, or does that bear a striking resemblance to concatenative code? By Greg Buchholz at Wed, 2005-08-10 17:34 | login or register to post comments

names
...Is it just me, or does that bear a striking resemblance to concatenative code? expressions are easy enough. Defining functions without having argument names or let bindings to name your terms gets you into drop swap dup rot shenanigans... That kind of stuff is for compilers, not programmers. Up with that, I will not put. By James McCartney at Wed, 2005-08-10 18:54 | login or register to post comments

Re: names
So why do you troll in this thread? By slava at Wed, 2005-08-10 19:05 | login or register to post comments

Stack scramblers
I like William Tanksley's stack scramblers idea for concatenative languages. They look like this:
swap = ab-ba
drop = a-
dup = a-aa

I can never remember which of rollup or rolldown does which of abc-cba or abc-bca. Stack scramblers make this perfectly clear. Factor uses stack scramblers nowadays also.

--Shae Erisson - ScannedInAvian.com By shapr at Thu, 2005-08-11 11:37 | login or register to post comments
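The scrambler notation above is concrete enough to execute. Here is a minimal sketch (Python; `scrambler` and the word names are illustrative, not taken from any real concatenative system) that turns a spec like "ab-ba" into a stack-rearranging word, with the top of the stack at the end of a list:

```python
def scrambler(spec):
    """Turn a spec like 'abc-cba' into a function on a stack (a list,
    top of stack last). Letters left of '-' name the consumed items
    (deepest first); letters right of '-' say what to push back."""
    inputs, outputs = spec.split("-")

    def word(stack):
        n = len(inputs)
        taken = dict(zip(inputs, stack[-n:] if n else []))
        del stack[len(stack) - n:]          # consume the named items
        stack.extend(taken[name] for name in outputs)
        return stack

    return word

# The examples from the comment above:
swap = scrambler("ab-ba")
drop = scrambler("a-")
dup = scrambler("a-aa")
# One plausible reading of "rollup" (the comment notes the ambiguity):
rollup = scrambler("abc-cab")
```

The point of the notation survives the sketch: "abc-cab" says exactly what happens, whereas the name "rollup" does not.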

Umm... no
Factor uses stack scramblers nowadays also. It does? I'm pretty sure it still uses good old dup, drop, swap, etcetera. I could never remember which ones those stack shufflers were -- until I started actually programming. Then, it was crystal clear. By Daniel Ehrenberg at Thu, 2005-08-11 16:21 | login or register to post comments

The original Java Factor impl


The original Java Factor implementation had these stack shuffle words. The current C version no longer has them IIRC. By doublec at Thu, 2005-08-11 22:14 | login or register to post comments

Re: Foreign?
Compare some typical OOP code in four languages:
Common Lisp: (baz (bar (make-instance 'foo)))
Java: new Foo().bar.baz();
Smalltalk: Foo new bar baz.
Factor: <foo> bar baz

Factor's concatenative code looks very similar to how Smalltalk unary messages are composed. Note that CL code has to be read right to left, which ironically is a common complaint against *postfix* languages. By slava at Wed, 2005-08-10 19:02 | login or register to post comments

"kont"
means "ass" in dutch. By Wouter at Wed, 2005-08-10 21:05 | login or register to post comments

As in "arse" or "donkey"?
As in "arse" or "donkey"? By Marcin Tustin at Thu, 2005-08-11 14:15 | login or register to post comments

Arse
The dutch word "kont" is etymologically related to the english word "cunt". The original meaning of both words is "hole". (Source: The Alternative Dutch Dictionary) By Sjoerd Visscher at Thu, 2005-08-11 15:30 | login or register to post comments

TMI: Too Much Information


Though cuss words might be considered a valid form of study in the field of linguistics, their use in programming languages is rather uncharted. :-) By Chris Rathman at Thu, 2005-08-11 15:44 | login or register to post comments

Well...
we've had Brainfuck for some number of years now. Not to mention Linda, named after a porn star. I'm sure there are a few other (as yet undiscovered) dirty jokes and double entendres out there.... By Scott Johnson at Thu, 2005-08-11 21:45 | login or register to post comments

koders.com could be a start


Crap is the most popular cuss word, of the words I can think of right now. More than half of the occurrences are in C code, but I don't know how much of the code in the database is in C. By Sjoerd Visscher at Fri, 2005-08-12 08:39 | login or register to post comments

Time to get started then...


Perhaps we should get started on the research then... Just think of all the expressivity we could gain. Although as George Carlin has pointed out in one of his routines, at least one of the more popular words is already seriously overloaded... -30-
By William D. Neumann at Fri, 2005-08-12 14:19 | login or register to post comments

Stack-based is point-free?
If you represent a stack as nested pairs, you can treat Forth words as unary functions of the form stack -> new_stack. Integer constants would have the type forall a. a -> (a, Int), and the arithmetic functions like +, -, etc., would be typed forall a. ((a, Int), Int) -> (a, Int). With a type system, you can appeal to polymorphism to guarantee that functions won't interfere with elements beyond a certain depth in the stack. It's pretty easy to model this in a functional language (note: this is not a claim of superiority/inferiority). Here's a GHCi session:
Prelude> let k = flip (,)
Prelude> let lift2 f ((s,a),b) = (s, f a b)
Prelude> let plus = lift2 (+)
Prelude> :t plus
plus :: ((a, Integer), Integer) -> (a, Integer)
Prelude> :t plus . plus
plus . plus :: (((a, Integer), Integer), Integer) -> (a, Integer)
Prelude> :t plus . k 4
plus . k 4 :: (a, Integer) -> (a, Integer)
Prelude> :t plus . k 4 . k 2
plus . k 4 . k 2 :: a -> (a, Integer)
Prelude> snd . plus . k 4 . k 2 $ ()
6

What's interesting is that arrows in Haskell are defined in a point-free style, along with a special syntax involving local names which is translated into the point-free style. Thus,
proc x -> do
    y <- f -< x
    z <- g -< x
    returnA -< y + z

becomes
arr (\ x -> (x, x)) >>> first f >>> arr (\ (y, x) -> (x, y)) >>> first g >>> arr (\ (z, y) -> y + z)

which consists primarily of composition and "stack" manipulation. (Arrows pass multiple values around with nested tuples, but they don't prefer one side or the other, so can be more tree-like than stack-like.) By Dave Menendez at Thu, 2005-08-11 23:55 | login or register to post comments
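The nested-pair encoding above transcribes almost mechanically into other languages. This is a sketch of the same session in Python (the names `k`, `lift2`, and `plus` follow the Haskell; `compose` is an assumed helper standing in for Haskell's `.`):

```python
# A word is a unary function stack -> new_stack, where a stack is
# either () or (rest, top), mirroring the Haskell nested-pair model.
def k(n):                       # like Haskell's  k = flip (,)
    return lambda s: (s, n)     # push a constant onto the stack

def lift2(f):                   # lift a binary operation to a stack word
    return lambda s: (s[0][0], f(s[0][1], s[1]))

def compose(*words):            # right-to-left composition, like (.)
    def run(s):
        for w in reversed(words):
            s = w(s)
        return s
    return run

plus = lift2(lambda a, b: a + b)

# snd . plus . k 4 . k 2 $ ()  from the GHCi session:
prog = compose(plus, k(4), k(2))
result = prog(())[1]            # the Haskell session shows this is 6
```

Python has no type inference to enforce the "won't touch the stack below depth n" guarantee, but the shape of each function makes the same stack discipline visible.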

Joy in Haskell
Things get a little interesting when you get to the recursion combinators, but with a dash of type-system wizardry, you can write most any Joy combinator in Haskell. By Greg Buchholz at Fri, 2005-08-12 01:24 | login or register to post comments

The lambda calculus is the "i


The lambda calculus is the "internal language" of a cartesian closed category. This means that every type in the lambda calculus corresponds to an object of a CCC, and that every term corresponds to a morphism. In turn, this means that you can convert any program in the lambda calculus into an equivalent program consisting entirely of a composition of combinators (and you can do it in O(1) in either direction). If you like, you can think of the types of the set of free variables of a term as the type of the "input stack", and the return type as the type of the "output stack", which means that there is no fundamental expressivity gain to be had in shifting between a combinatory language and the lambda calculus. I think that being able to name variables explicitly is a genuine advantage, but obviously, mileage varies. By neelk at Fri, 2005-08-12 19:32 | login or register to post comments
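The lambda-to-combinators direction can be illustrated with classic SK bracket abstraction. Note this is a simpler relative of the categorical translation described above, and it does not share that translation's O(1) size behavior; all the representation choices here are the sketch's own:

```python
# Combinators as curried Python functions.
S = lambda f: lambda g: lambda x: f(x)(g(x))
K = lambda c: lambda _ignored: c
I = lambda x: x

def bracket(x, term):
    """[x]term: eliminate variable x using the classic S, K, I rules."""
    if term == x:
        return 'I'
    if isinstance(term, tuple) and term[0] == 'app':
        return ('app', ('app', 'S', bracket(x, term[1])),
                       bracket(x, term[2]))
    return ('app', 'K', term)           # term does not mention x

def compile_term(term):
    """term: a str variable, ('lam', var, body), or ('app', f, a)."""
    tag = term[0] if isinstance(term, tuple) else None
    if tag == 'lam':
        return bracket(term[1], compile_term(term[2]))
    if tag == 'app':
        return ('app', compile_term(term[1]), compile_term(term[2]))
    return term

def evaluate(term):
    """Run a variable-free combinator term."""
    prims = {'S': S, 'K': K, 'I': I}
    if isinstance(term, tuple):         # ('app', f, a)
        return evaluate(term[1])(evaluate(term[2]))
    return prims[term]

# (lam x. lam y. x) compiles to a closed term behaving like K:
const = evaluate(compile_term(('lam', 'x', ('lam', 'y', 'x'))))
```

The output of `compile_term` is a pure composition of S, K, and I: a "program" with no variable names at all, which is exactly the tradeoff the comment describes.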

Lots of factoring
The trick is in small, easily testable words. Any time you're doing lots of stack manipulation, it's a sign you need to refactor. 'Thinking Forth' has lots of good advice in this area: http://thinking-forth.sourceforge.net Manfred von Thun's Joy pages also deal with this topic in some of the papers. His one on the quadratic formula, for example: http://www.latrobe.edu.au/philosophy/phimvt/joy/jp-quadratic.html
By doublec at Tue, 2005-08-09 23:28 | login or register to post comments

The quadratic example at that last URL is a good example...
...but I think it just supports my point.

-b +/- sqrt(b^2 - 4*a*c)
------------------------
          2*a

(define quadratic-formula
  (lambda (a b c)
    (let ([minusb (- 0 b)]
          [radical (sqrt (- (* b b) (* 4 (* a c))))]
          [divisor (* 2 a)])
      (let ([root1 (/ (+ minusb radical) divisor)]
            [root2 (/ (- minusb radical) divisor)])
        (cons root1 root2)))))

The structure of the Scheme code directly mirrors the structure of the formula. In fact, there are only trivial notational differences between the formula and the code, as we can see by writing the formula with the same auxiliary variables and in prefix notation:

quadratic-formula(a, b, c) def= (root1, root2)
  where minusb  = - 0 b
        radical = sqrt (- (* b b) (* 4 (* a c)))
        divisor = * 2 a
        root1   = / (+ minusb radical) divisor
        root2   = / (- minusb radical) divisor

Compare that with the 2nd Joy version (the one suggested to be "the best of the three in this note"):

quadratic-2 ==                          # a b c => [root1 root2]
  [[[pop pop 2 *]                       # divisor
    [pop 0 swap -]                      # minusb
    [swap dup * rotate 4 * * - sqrt]]   # radical
   [i] map] ternary i
  [[[+ swap /]                          # root1
    [- swap /]]                         # root2
   [i] map] ternary.

Huh? I don't think it's just my unfamiliarity with Joy that's at play here: the structure of the code is simply very different from that of the formula. It's impossible to understand what this code is doing without thinking about how it uses a stack which isn't present in the original problem (and which can correspond to the problem in many different ways). The difference that stands out most is that none of the variables are mentioned explicitly; the names on the right are just comments. Those comments aren't equivalent to the use of names in the code: what if they were wrong? The variables have to be referred to by keeping track of their position on the stack (which is what I was getting at in the reference to de Bruijn indices above). OK, it's possible to mitigate this problem by minimizing the size of the stack at any given point, but the risk of making a mistake is going to be greater for any given number of variables than in an equivalent program that uses names. Some proportion of those mistakes won't be found by testing. For a large program, that's likely to add up to an unacceptable risk. [Edit: corrected "0 - b" to "- 0 b"]
By dhopwood at Wed, 2005-08-10 03:51 | login or register to post comments
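The stack-tracking burden described above can be made concrete. Below is a sketch (Python, not Joy; the `run` interpreter and its word set are invented for illustration, with Joy-style `rotate` taking X Y Z to Z Y X) that executes the radical computation, sqrt(b*b - 4*a*c), word by word from a stack holding a, b, c:

```python
import math

def run(words, stack):
    """Execute a list of stack words; non-word tokens push themselves."""
    ops = {
        'dup':    lambda s: s.append(s[-1]),
        'pop':    lambda s: s.pop(),
        'swap':   lambda s: s.__setitem__(slice(-2, None), [s[-1], s[-2]]),
        'rotate': lambda s: s.__setitem__(slice(-3, None),
                                          [s[-1], s[-2], s[-3]]),
        '*':      lambda s: s.append(s.pop() * s.pop()),
        '-':      lambda s: (lambda top, under:
                             s.append(under - top))(s.pop(), s.pop()),
        'sqrt':   lambda s: s.append(math.sqrt(s.pop())),
    }
    for w in words:
        if w in ops:
            ops[w](stack)
        else:
            stack.append(w)     # literals like 4 push themselves
    return stack

# b*b - 4*a*c then sqrt, starting from stack [a, b, c]:
RADICAL = ['swap', 'dup', '*', 'rotate', 4, '*', '*', '-', 'sqrt']
```

To verify even this nine-word sequence you have to simulate the stack in your head at every step; none of the intermediate values has a name, which is exactly the comment's complaint.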

Re: quadratic formula
In Factor, the quadratic formula is a non-problem. Just embed a domain-specific language for infix arithmetic and be done with it. It is very easy to write a language that compiles to concatenative code which then goes through Factor's native compiler. One already exists; see contrib/algebra/ in the distribution. As for error rates being higher, my experience is inconclusive either way. I've found that the time from noticing a bug, to working it out, to fixing it is very short in Factor. Unit tests flow easily, and debugging is pure joy with tools like our GUI inspector, single stepping, various tracing tools, and of course the grand daddy of them all, the REPL. Many stack errors can be caught statically, using the stack effect inferencer. I must admit I've had a few multi-day debugging marathons; however, these had nothing to do with the stack, but with issues like relocation failures in the compiler's code generator leading to overwritten code, or incorrect rewriting rules in the optimizer, and so on. On the other hand, the other big piece of software I maintain, jEdit, is written in Java. Like all Java software, it's a pain to maintain, and I keep finding mysterious new bugs that are almost impossible to work out, due to Java's non-interactive, static nature and laborious edit/compile/run cycle.
By slava at Wed, 2005-08-10 06:39 | login or register to post comments

Not the point
Just embed a domain-specific language for infix arithmetic and be done with it. And then you're not writing the arithmetic in a concatenative language. The example just happened to use arithmetic; the criticisms (for example, about the potential error-proneness of the stack implicitly introducing variables without names) aren't specific to that case. It is very easy to write a language that compiles to concatenative code which then goes through Factor's native compiler.

Right; I didn't say that concatenative languages aren't suitable as intermediate languages (although I wonder why you'd use them rather than a register-based IL, if you're targeting a register-based machine). I've found that the time from noticing a bug, to working it out, to fixing it is very short in Factor. And in applicative languages as well, usually, if you notice the bug. ... jEdit is written in Java. Like all Java software, it's a pain to maintain... Am I unintentionally defending Java?
By dhopwood at Wed, 2005-08-10 14:07 | login or register to post comments

"Higher-order" intermediate language?
Maybe it could work that way, as something easier to reason about than a register-based machine? It seems there are a few orthogonal ways languages like Forth differ from Scheme, e.g.:
* postfix vs. prefix
* stack manipulation vs. named variables
* leaving out the parentheses (even though they might make for better reading)
Scheme and other languages (C) can of course use a stack, they just treat it differently. [Anyone use a language that uses postfix with named variables?]
By dbfaken at Wed, 2005-08-10 15:31 | login or register to post comments

Monads
Just embed a domain-specific language for infix arithmetic and be done with it. And then you're not writing the arithmetic in a concatenative language. Let's try rewriting that a little differently... Just use monads (as an embedded domain-specific language) for sequencing side effects and be done with it. And then you're not writing functions in a pure language.
By Greg Buchholz at Wed, 2005-08-10 18:02 | login or register to post comments

I agree
I agree. The corollary to your point about names is the presence of all the names in the Joy code that have nothing to do with the problem domain, like pop, swap, dup, and rot. That's a classic sign of a low-level language (think assembler). Furthermore, in concatenative languages, the programmer depends heavily on abstractions which the language doesn't enforce or even know about: every time you use pop, you're making an assumption about what's on the stack, but that assumption could be violated by unrelated code, so that an expression like:

(let ((x 33))
  (foo)
  x)

...could return a value other than 33, if foo has a bug. This sort of side effect which isn't local to the bug is reminiscent of something like a pointer error. In other contexts, this kind of behavior is called "unsafe": an abstraction isn't protected from unauthorized external interference. In the concatenative context, one can play language lawyer and claim that since the language has no notion of a local binding, none is being violated. However, that would only be a good argument if programmers didn't need to depend heavily on faking local bindings in concatenative languages.
By Anton van Straaten at Wed, 2005-08-10 15:48 | login or register to post comments
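The hazard described in the let/foo example can be sketched directly. In this Python sketch (the names `with_x`, `foo_ok`, and `foo_buggy` are invented for illustration), a word with an unbalanced stack effect silently replaces the value an unrelated caller was relying on:

```python
def foo_ok(stack):
    stack.append(99)
    stack.pop()          # balanced: net stack effect is zero

def foo_buggy(stack):
    stack.pop()          # bug: consumes a value it doesn't own...
    stack.append(-1)     # ...and leaves garbage in its place

def with_x(foo):
    """Fake a local binding 'let x = 33' by pushing onto a shared stack."""
    stack = []
    stack.append(33)     # "x" exists only as a stack position
    foo(stack)           # call something nominally unrelated
    return stack.pop()   # hope the top is still our 33
```

With `foo_ok` the caller gets its 33 back; with `foo_buggy` it gets -1, and nothing in the language flagged the violated assumption, which is the "unsafe, non-local side effect" point made above.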

Wrong
The corollary to your point about names is the presence of all the names in the Joy code that have nothing to do with the problem domain, like pop, swap, dup, and rot. What about names like let, labels, multiple-value-bind and setf in Common Lisp programs? They also have nothing to do with the problem domain. Is CL low-level, "like assembler"? This sort of side effect which isn't local to the bug is reminiscent of something like a pointer error. In other contexts, this kind of behavior is called "unsafe": an abstraction isn't protected from unauthorized external interference. Factor has a stack effect inferencer which could be used to check that foo does, in fact, return the right values. Suppose you write a function that takes a mutable object as input, and it violates its contract by mutating the object into an incorrect state; how is this different from a function with an invalid stack effect? Are you going to argue in favor of purely functional languages, which are exclusively the domain of academics?
By slava at Wed, 2005-08-10 16:11 | login or register to post comments

Wrong, right, push, pop
What about names like let, labels, multiple-value-bind and setf in Common Lisp programs? They also have nothing to do with the problem domain. Is CL low-level, "like assembler"? First, I was referring partly to frequency of occurrence. When you look at a piece of code, what stands out most? If you see very few names that relate to the problem domain, and many names that relate to an abstraction in the programming language, that's a problem for comprehension of the code. Second, many of the names you mention are used to introduce names which are relevant to the problem domain. So they're still more meaningful to the problem being solved than the stack operators are. Even the concatenative languages acknowledge that names are important, since they all support global names. What logic is used to justify the notion that you can only have global names? It comes down to something like "because it's easy to implement a language that way, and you can get away with it and work around it". That's not a good argument from the perspective of the language's user. It might explain why, as a language implementor, you're enthusiastic about them, though.

Factor has a stack effect inferencer which could be used to check that foo does, in fact, return the right values. Could be used? This implies that it isn't something which happens automatically, to transparently guarantee the safety of "bindings" (or whatever you want to call them in a concatenative language). If that's the case, then this creates an area the user has to worry about, which they otherwise wouldn't have to worry about. It's hard to see how that would be a benefit. Suppose you write a function that takes a mutable object as input, and it violates its contract by mutating the object into an incorrect state; how is this different from a function with an invalid stack effect? It's different in that the caller knows it has passed a mutable object to that function; it's not a case in which an essentially unrelated function can destroy an abstraction it has no connection to. However, it's true that side effects are a major source of problems, which is why the most powerful languages provide ways to control side effects. Having every program depend implicitly on a stack which can be directly side-effected by any function is a step backwards in this respect. Are you going to argue in favor of purely functional languages, which are exclusively the domain of academics? First, it's demonstrably false that they are "exclusively the domain of academics". Simple refutation: Erlang is used to run telecom switches, and it is a purely functional language. There are plenty more examples where that came from. Second, while I don't use purely functional languages in all contexts, I do use languages which encourage and support controlling side effects, and I do write code that avoids side effects for the most part. This has numerous demonstrable advantages; if you want to start arguing against them, I suggest doing so here. Lest this be taken as totally negative about concatenative languages, I should mention that I've written systems in Forth, experimented with Joy, and written a toy Joy interpreter. I think they're very interesting languages, and can be fun and useful in certain kinds of applications, but I don't think the languages can support the sort of claims you seem to want to make. If Factor can support such claims, that would be interesting, but you won't convince many people by bragging about it and making over-the-top criticisms of other, proven languages.
By Anton van Straaten at Wed, 2005-08-10 17:02 | login or register to post comments

More incorrect claims
What logic is used to justify the notion that you can only have global names? I'm not sure what logic is used, because I don't follow it. Factor has variables, and nested namespaces, and scoping. Did you even bother to look at it? Could be used? This implies that it isn't something which happens automatically, to transparently guarantee the safety of "bindings" (or whatever you want to call them in a concatenative language). If that's the case, then this creates an area the user has to worry about, which they otherwise wouldn't have to worry about. It's hard to see how that would be a benefit.

I could add some tools to do this automatically for every new word definition. Lest this be taken as totally negative about concatenative languages, I should mention that I've written systems in Forth, experimented with Joy, and written a toy Joy interpreter. So your only experience with concatenative languages is having written a toy Joy interpreter? If I write a toy language interpreter for a simple applicative language, say Scheme, then observe that this toy interpreter is hard to work with, should I conclude that applicative languages as a whole are useless? If Factor can support such claims, that would be interesting, but you won't convince many people by bragging about it and making over-the-top criticisms of other, proven languages. Show where I've been "bragging" and making over-the-top criticisms.
By slava at Wed, 2005-08-10 17:09 | login or register to post comments

Is not / is too... titles are boring
I'm not sure what logic is used, because I don't follow it. Factor has variables, and nested namespaces, and scoping. Did you even bother to look at it? Well, we need to be clear about whether we're talking about concatenative languages in general, or Factor in particular. The discussion started out being more general. I did look briefly at Factor, a few months ago, but I didn't find that it sufficiently addressed the sorts of issues I've mentioned. Since it's still under development, it'll be interesting to see how it turns out over time. As for my point about global names, I didn't state it clearly. What I meant is that these languages demonstrate that they recognize that supporting safe bindings between names and values is important, but within individual procedures they don't offer that feature, i.e. temporary name/value bindings which can be relied on for the life of a procedure. Does Factor offer that feature? I could add some tools to do this automatically for every new word definition. That sounds as though it would be useful. One question is, as you add features like that, might you reach a point where it becomes obvious that just addressing the issue at the language level is a good idea? Lest this be taken as totally negative about concatenative languages, I should mention that I've written systems in Forth, experimented with Joy, and written a toy Joy interpreter. So your only experience with concatenative languages is having written a toy Joy interpreter? Is it your intention to get into a definitional argument now, and say that Forth isn't concatenative? If so, then I suggest you abandon the ill-defined term "concatenative" and switch to something like "direct stack-manipulation languages", which is more meaningful in any case. I maintain that the combination of my experiences with Forth and Joy, together with knowledge of programming language semantics and experience with a variety of other languages, makes it possible for me to make comparisons and draw conclusions related to certain properties of these languages. Show where I've been "bragging" and making over-the-top criticisms. I've already responded to specific instances, but some of the quotes in question are these:

This is a richer level of functionality than that offered by most languages discussed on this website. One might even conclude that applicative languages are thoroughly unproductive, not even capable of implementing their own optimizing native compilers. Are you going to argue in favor of purely functional languages, which are exclusively the domain of academics?
By Anton van Straaten at Wed, 2005-08-10 19:55 | login or register to post comments

You're just trolling
I've already responded to specific instances, but some of the quotes in question are these: You see, the first post in this thread was your disparaging claim that concatenative languages aren't suitable for application development. Instead of discussing the Kont language, you decided to troll instead. You don't even bother to refute my points, and instead draw sweeping conclusions and generalizations based on your limited experience with Joy and Forth. I know you wouldn't be very happy if I went and barged into threads where discussion of ML and Haskell was taking place, claiming that static typing is too restrictive. So why do you do the equivalent here?
By slava at Wed, 2005-08-10 22:29 | login or register to post comments

Substance, please
You see, the first post in this thread was your disparaging claim that concatenative languages aren't suitable for application development. I was responding to another post which made a point I agreed with. I said that concatenative languages exhibit a particular sign of being low-level languages. I think that's true, and I've defended it in the exchanges with others. I'm not even sure whether you disagree, since you haven't addressed my points, but I'm guessing from the general reaction that you do. Instead of discussing the Kont language, you decided to troll instead. I'm not trolling. I notice, though, that instead of discussing the Kont language yourself, you decided to discuss Factor. I'm having difficulty discerning the rules you think we should be following here. You don't even bother to refute my points Which points? AFAICT, I've responded to each of your substantive points. I suggest you reread the thread more carefully. and instead draw sweeping conclusions and generalizations based on your limited experience with Joy and Forth. You're free to dispute my points with rational argument, but attacking academia, applicative languages, "most languages discussed on this website", and research languages doesn't qualify. It's particularly ironic considering that your own language, by some definitions, qualifies as a research language. I know you wouldn't be very happy if I went and barged into threads where discussion of ML and Haskell was taking place, claiming that static typing is too restrictive. Heh. Your assumption is incorrect, assuming that what you had to say had more basis in rational argument than your comments about those languages so far.

So why do you do the equivalent here? This is a forum for discussion of programming languages. I'm interested in the semantics of programming languages, and among other things, in how those relate to the semantics of the programming user experience. My original post in this thread wasn't intended as a criticism of Kont specifically (and besides, Brian Mastenbrook has already heard my opinion on that subject). Rather, it was an attempt to characterize one of the semantic issues that distinguish concatenative languages from name-based languages with procedure-local abstractions (also known as local variables). I believe this issue is one of the factors underlying a common negative user reaction to these languages. As someone with an interest in concatenative languages, I thought you might have some interesting insights on the matter, but for some reason you haven't chosen to offer those yet. Other people have, though, which is why I'm here.
By Anton van Straaten at Thu, 2005-08-11 00:20 | login or register to post comments

Permtanme reiterar las subestaciones


Permtanme reiterar los puntos substatiative, si usted no entiende Slava (a pesar de que l est siendo muy claro). En primer lugar, me gustara recordarles que no convencer a Slava o me deje de usar lenguas concatenativa porque somos perfectamente cmodo con ellos, no importa que tan bajo nivel que demuestran que ellos sean.

I said concatenative languages exhibit a particular sign of being a low-level language. I don't think concatenative languages are any lower-level. They just handle things differently, in a way that other languages happen to use as an intermediate form. That doesn't mean it's worse at all, just different. If a C compiler uses an intermediate form that is single-assignment, does that mean C is higher-level than any other kind of single-assignment form? Of course not.

Rather, it was an attempt to characterize one of the semantic issues that distinguish concatenative languages from name-based languages with procedure-local abstractions (aka local variables). I think this issue is one of the factors underlying a common negative user reaction to these languages. How are local variables abstractions? Could you explain that? In what way are they more abstract and flexible than stack shuffling operators?
By Daniel Ehrenberg at Thu, 2005-08-11 17:33 | login or register to post comments

Reference and sense


First, I'd like to remind you that you won't convince Slava or me to stop using concatenative languages, because we're perfectly comfortable with them, no matter how low-level you show them to be. That sounds almost as if you're nervous that I might succeed! ;) But I really have no interest in changing your choice of programming language. No, the point is to understand more about programming language features and their consequences. Of course, this may lead to changes in preferences, programming style, etc. (mine have changed a great deal over the years). But more importantly, a good understanding of PL features and how they affect programs and programming can make you a better programmer in any language. It can also help in evaluating the suitability of a language in a given situation, and helps with cutting through hype about specific programming languages. I don't think concatenative languages are any lower-level. They just handle things differently, in a way that other languages happen to use as an intermediate form. That doesn't mean it's worse at all, just different. This particular issue is different in a way that is lower-level. This has been covered in a few posts, and connects to Alan Perlis' observation that low-level means "programs require attention to the irrelevant". The irrelevant in this case is direct stack manipulation, which you've acknowledged: "there are only a few cases where the stack really matters". You've suggested elsewhere that the same can be said of lexical names, but I've addressed that in my response to your last point below. If a C compiler uses an intermediate form that is single-assignment, does that mean C is higher-level than any other kind of single-assignment form? Of course not. C code is certainly usually higher-level than any internal single-assignment form it's likely to be compiled to. The way you can tell is to look at the additional operations being performed in the single-assignment code to deal with the constraints of single assignment; the fact that those additional operations aren't needed in the C source indicates that they're irrelevant to the expression of the original problem, and that the single-assignment form is lower-level. If your concern is with the generalization, i.e. drawing conclusions about "...any other kind of single-assignment form", that's not what I'm doing.
I'm looking at a particular language that routinely embeds direct stack-manipulation operators in its code, and concluding that this is a lower-level way of dealing with abstraction over values than using names, in most cases. How are local variables abstractions? Could you explain that? In what way are they more abstract and flexible than stack shuffling operators? A local variable is an abstraction in the same way that any name is an abstraction. The name refers to something else, and we can use the name and manipulate what it refers to without knowing what specific thing it refers to. That's the abstraction. When you push a value onto a stack and later apply some operation to that value, that's also a kind of abstraction. However, not all abstractions are equal. Direct stack manipulation by itself doesn't scale very well, as all concatenative programmers already know. As procedures get bigger, keeping track of values via stack manipulation becomes more complex. Earlier in the thread, we were reminded that "any time you do a lot of stack manipulation it's a sign that you need to refactor". And what do you do when you refactor? You create names, which (properly implemented) are more scalable, safer abstractions than storing values on a stack. So one question is, if names are so useful and important when code scales beyond a single procedure, might they not be useful locally, within a procedure? And of course, many languages demonstrate that they are. One of the fundamental characteristics of names is that they can effectively serve two purposes:

1. A name acts as a way of referring to a value, i.e. a name is a "reference", or can be said to "denote" some value.

2. A name can provide information to programmers about the value, or its role in the computation, i.e. a name can also have a "sense" or "connotation".

This is why we try to choose informative names, and why the expression given elsewhere in this thread was compiling dataflow optimize linearize simplify generate , rather than a b c d e f . Appropriately defined, both expressions could have the exact same computational effect, but one is much more meaningful to us than the other. When we omit names from our code, we're omitting one of the few things that connects the code to the problem it's solving, and that's something that should be done carefully. If a language routinely forces you to omit names, that's something worth looking at closely. Using a stack to abstract values almost never serves the second purpose above. However, naming all values isn't necessary or appropriate either. In the expression quoted above, there's no need to name the intermediate values, and most languages support this kind of name omission in one form or another. However, omitting names is one thing; replacing them with operators that directly manipulate an abstraction (the stack) which is otherwise unrelated to the computation is something else entirely. Using stack shuffling operators is a low-level detail no matter how you look at it, and wherever it replaces the use of names that could have been informative, it's a double loss in the program's ability to express its algorithm to the human reader. In general, it's difficult to argue that expressing most problems in terms of a LIFO stack is an approach that essentially makes sense in itself. The most convincing arguments, I think, have to do with the benefits of that approach to the language implementation: by pushing some of the work of arranging the abstraction mechanism onto the user, the job of the language implementation is simplified, which can have various benefits.
I'm sure there's plenty in what I've written above for a concatenative fan to object to. Keep in mind that I've programmed in these languages, and I'm familiar with the arguments. I could make them myself: local names aren't needed because factoring keeps procedures small enough to be manageable; non-local names used in conjunction with the stack locally work fine most of the time; languages that force you to name e.g. function return values, just to be able to shift their position to pass them to another function, similarly obscure the solution to the problem; etc., etc. My main interest is in characterizing the differences in a more specific way than just saying something along the lines of "I don't like this", or "I think this is a fun way to program". Obviously there are differences, but if we can't clearly describe their effect other than by simply listing technical features, that implies a gap in our understanding.
By Anton van Straaten at Fri, 2005-08-12 03:55 | login or register to post comments
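The contrast drawn above, anonymous stack slots versus names that both denote a value and describe its role, can be sketched in Python. This is a toy illustration; the shuffling steps mirror Joy's over and rot words, not any real implementation:

```python
def product_of_sum_and_difference_stack(a, b):
    """(a + b) * (a - b) computed through anonymous stack slots,
    in the style of Joy's over/rot shuffling words."""
    s = [a, b]
    s.append(s[-2])                              # over -> a b a
    s.append(s[-2])                              # over -> a b a b
    y, x = s.pop(), s.pop()
    s.append(x + y)                              # +    -> a b (a+b)
    s[-3], s[-2], s[-1] = s[-1], s[-3], s[-2]    # rot  -> (a+b) a b
    y, x = s.pop(), s.pop()
    s.append(x - y)                              # -    -> (a+b) (a-b)
    y, x = s.pop(), s.pop()
    return x * y                                 # *

def product_of_sum_and_difference_named(a, b):
    """The same computation with named locals: each name both denotes
    a value and tells the reader what role it plays."""
    total = a + b
    difference = a - b
    return total * difference

print(product_of_sum_and_difference_stack(5, 3))   # 16
print(product_of_sum_and_difference_named(5, 3))   # 16
```

Both functions compute the same value; in the first, the reader must mentally track stack positions through each shuffle, while in the second, the names carry the roles.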

2Variable
In #comment-8723, Anton wrote:

So one question is, if names are so useful and important when code scales beyond a single procedure, might they not be useful locally, within a procedure? And of course, many languages demonstrate that they are. I'm not sure if the following quote supports your contention. I ran into it while skimming through Forth Sorts Fruit Down Under by Paul Frenger in ACM SIGPLAN Notices , vol. 40, no. 8, Aug 2005 (not yet online ): The CF [ColdForth] manual has other interesting chapters to describe its features: numbers/parameters (CF supports local variables via stack frames), the memory manager, I/O handling (mostly for accessing files), strings, object-oriented extensions, TCP/IP capability and the multi-tasker, for example.
By el-vadimo at Wed, 2005-09-14 12:11 | login or register to post comments

2drop
Even concatenative languages acknowledge that names are important, since they all support global names. What logic is used to justify the idea that you can only have global names? It boils down to something like "because it's easy to implement a language that way, and possible to get away with it and work around it". That's not a good argument from the language user's perspective. It might explain why you, as an implementor of such a language, are enthusiastic about them, though. Aside from the fact that several concatenative languages do actually support local variables, the reason they're frowned upon is because in most cases they are an impediment to proper factoring. In a great deal of applicative code, the variables are simply assigned useless placeholder names. This should indicate something to you: most of the time, those names are unnecessary. However, it's true that side effects are a major cause of problems, which is why the most powerful languages provide ways to control side effects. Believe it or not, not everyone considers Haskell and friends to be the pinnacle of language design or expressive power. It's worrying to me that this claim of "most powerful" is accepted here essentially on faith. How do you define power? The APL dialects and derivatives, for example, are all much more expressive than any purely functional language. What do you think of tacit programming in J? Simple refutation: Erlang is used to run telecom switches, and it's a purely functional language. Erlang is not purely functional (ets, dets, mnesia, process dictionary, the list goes on). If Factor can support such claims, that would be interesting, but you're not going to convince many people by blustering about it and making over-the-top criticisms of other, proven languages. The original poster likened concatenative programming to "writing a Turing machine program, or directly in the lambda calculus using de Bruijn indices." 
Perhaps you should reconsider whose claims are 'over the top'.
By Mackenzie Straight at Wed, 2005-08-10 23:38 | login or register to post comments

Stacks of responses
Aside from the fact that several concatenative languages do actually support local variables Do those local variables protect against the issue I've raised, ie are they safe from being changed by unrelated stack operations? the reason they're frowned upon is because in most cases they are an impediment to proper factoring. Is that an issue specific to the concatenative languages? Otherwise, I find it hard to see how names are an impediment, in languages with traditional local variables. Can you give an example?

In a great deal of applicative code, the variables are simply assigned useless placeholder names. This should indicate something to you: most of the time, those names are unnecessary. I don't see how using "push" and "pop" to abstract values helps address that, in most cases. There are cases where a stack is certainly the right abstraction to use, of course. Believe it or not, not everyone considers Haskell and friends to be the pinnacle of language design or expressive power. I didn't say that. I said that "it's true that side effects are a major cause of problems, which is why the most powerful languages provide ways to control side effects." I would include many imperative languages in that list, including eg C++, Java, Ada languages which provide qualifiers like "const" that allow the programmer to guard in a meaningful way against unwanted side effects, not just on individual variables but also over larger scopes, such as declaring that a method cannot side-effect the object it's associated with. Controlling of side effects is a major theme in very demanding systems, not just at the language level but in the design of systems. Functional languages just implement this at the language level on a more comprehensive basis. It's worrying to me that this claim of "most powerful" is accepted here essentially on faith. How do you define power? Hopefully my response above gives a better idea of what I was getting at. Obviously, "power" is one of those things, like "expressiveness", which is difficult to formalize satisfactorily, but it could perhaps be characterized as the ability to handle very demanding tasks, preferably demonstrated in practice. The APL dialects and derivatives, for example, are all much more expressive than any purely functional language. What do you think of tacit programming in J? 
I consider tacit programming (which has strong similarities to point-free style) to be higher-level than direct stack manipulation, for the reasons stated elsewhere in this thread. However, all of these things have pros and cons. The cons tend to have to do with being more difficult to reason about, especially as expressions scale up. Ideally, a language should be able to support a variety of styles, so that the appropriate one can be picked in a given situation. Of all of these, I don't see direct stack manipulation as one that's likely to be the best choice in most situations. On the subject of scaling up expressions, the traditional concatenative answer to that is that you should factor larger expressions into multiple words. However, that means you're relying on names for abstraction again, just on a scale greater than the local procedure. How does this reconcile with the idea that using local names is an impediment to refactoring? Erlang is not purely functional (ets, dets, mnesia, process dictionary, the list goes on). Define it however you like: "very functional"? Do you consider Haskell purely functional, even with the I/O monad and the things it can talk to? My real point was that a strong emphasis on avoidance of side effects is not something that's solely the domain of academics. I consider the definition of "purely" here to be a quibble, eg anything that interacts with a traditional database is ultimately impure, when considered as a whole system. The original poster likened concatenative programming to "writing a Turing machine program, or directly in the lambda calculus using de Bruijn indices." Perhaps you should reconsider whose claims are 'over the top'. That's a fair point, although if you focus purely on how local value abstraction works, I can see his point. Having the option to use de Bruijn indices might be an improvement over having no alternative to direct stack manipulation, and I don't think it's over the top to say that. 
Also, the original poster did also make some other real points.
By Anton van Straaten at Thu, 2005-08-11 01:18 | login or register to post comments

What's in a name?
Aside from the fact that several concatenative languages do actually support local variables, the reason they're frowned upon is because in most cases they are an impediment to proper factoring.

In a great deal of applicative code, the variables are simply assigned useless placeholder names. This should indicate something to you: most of the time, those names are unnecessary. Programmers, and typical programming styles in various languages, differ considerably in how many named intermediate variables they use (personally I find the number of intermediates in the above Scheme code a bit excessive). But it's generally agreed, I think, that introducing variables with useless names is bad style. In functional-applicative languages, it's almost always possible to avoid this. The need for temporaries with useless names, eg loop indices, in imperative-applicative languages (some more than others) is indeed a valid criticism of them.
By dhopwood at Thu, 2005-08-11 01:26 | login or register to post comments

A temporary by any other name...


The need for temporaries with useless names, eg loop indices, in imperative-applicative languages (some more than others) is indeed a valid criticism of them. I disagree that temporaries have "useless names". Loop indices may not be referenced other than as a control structure, but part of the essence of reasoning within an imperative paradigm is knowing the current state. Not being able to name part of that state is what I would consider to be a weakness. The only useless names I can think of are for function arguments that aren't used. And in better languages, you don't actually have to name those. ;>
By David B. Held at Thu, 2005-08-11 03:52 | login or register to post comments

Functional languages can "nam


Functional languages can "name part of the state". Do you consider not being able to not name part of the state a weakness? The only way to avoid it in many imperative languages would require programming in a dramatically different style than the language supports.
By Derek Elkins at Fri, 2005-08-12 17:36 | login or register to post comments

Types?
I agree. The corollary to your point about names is the presence of all the names in the Joy code which have nothing to do with the problem domain, like pop, swap, dup, and rollup. That's a classic sign of a low-level language (think of assembler). Are the stack manipulation functions that you mention (pop, swap, dup, etc.) a sign of a low-level language or a linear language (like Linear Lisp )? Or are all linear languages low-level by definition? Also, in concatenative languages, the programmer depends heavily on abstractions which the language doesn't enforce or even know about: every time you use pop, you're making an assumption about what's on the stack, but that assumption could be violated by unrelated code That sounds like a plea for static typing. ...every time you use pop, you're making an assumption about what's on the stack, but that assumption could be violated by unrelated code Is it really that much different from the applicative language example? If your function takes three integers, how do you know you're calling it with the arguments in the correct order? Especially in the presence of macros? (apply quadratic-formula '(1 0 -4)) (apply quadratic-formula (reverse '(1 0 -4)))

By Greg Buchholz at Wed, 2005-08-10 17:08 | login or register to post comments
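The argument-order hazard in the applicative example above translates directly into any language with positional arguments; a minimal Python sketch (the function name and coefficient convention are made up for illustration):

```python
import math

def quadratic_roots(a, b, c):
    """Roots of a*x^2 + b*x + c = 0 (real-root case only)."""
    d = math.sqrt(b * b - 4 * a * c)
    return ((-b + d) / (2 * a), (-b - d) / (2 * a))

# x^2 - 4 = 0 has roots 2 and -2:
print(quadratic_roots(1, 0, -4))    # (2.0, -2.0)

# Reversing the argument order, as in the (reverse '(1 0 -4)) example,
# silently solves a different equation, -4*x^2 + 1 = 0:
print(quadratic_roots(-4, 0, 1))    # (-0.5, 0.5)
```

Nothing in the language catches the reversal; as with a mis-shuffled stack, the error is only visible in the wrong answer, though here it stays confined to the caller/callee interface.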

More Linear
For a better reference to linear languages (which also mentions the quadratic formula!) see Linear Logic and Permutation Stacks--The Forth Shall Be First
By Greg Buchholz at Wed, 2005-08-10 17:44 | login or register to post comments

Las diferencias
Are the stack manipulation functions that you mention (pop, swap, dup, etc.) a sign of a low-level language or a linear language (like Linear Lisp)? Or are all linear languages low-level by definition? I would say that Linear Lisp is certainly lower-level than ordinary Lisp, yes. I'll admit I can't give a formal definition of "low-level" off the top of my head, but I'd be interested to hear (reasoned) arguments about how something like Linear Lisp or the concatenative languages are not lower-level. That sounds like a plea for static typing. Yes, it is. The question is what the costs of introducing it are. The costs of introducing a safe name abstraction are negative. The costs of introducing static typing are positive, and arguably not insignificant. In the static typing case, I find the costs high enough that I can easily justify using it in some cases, and avoiding it in others. In the case of relying primarily on direct interaction with a stack to store values, I can easily justify avoiding that in most cases. Is it really that much different from the applicative language example? If your function takes three integers, how do you know you're calling it with the arguments in the correct order? It's different at least in the sense that that's a local effect: the error is confined to the interface between the caller and the callee, and doesn't compromise unrelated abstractions elsewhere in the program. I'm not saying that any language with named bindings solves all such problems, I'm just saying that there's a spectrum of safety and levels of abstraction, and that a language with named bindings which can make some guarantees about those bindings is both safer and higher-level than a language that doesn't have those features.
By Anton van Straaten at Wed, 2005-08-10 18:10 | login or register to post comments

low vs. lower


I would say that Linear Lisp is certainly lower-level than ordinary Lisp, yes. I'll admit I can't give a formal definition of "low-level" off the top of my head, but I'd be interested to hear (reasoned) arguments about how something like Linear Lisp or the concatenative languages are not lower-level. Well, I don't know if I can define "low-level" either, but off the cuff, I'd point to higher order functions and a clean algebra of programs as indicators of higher-levelness. Maybe the assembly language remark was aimed more at Forth and less at something like Joy ?
By Greg Buchholz at Wed, 2005-08-10 18:37 | login or register to post comments

The Joy of obfuscated abstractions


Well I don't know if I can define "low-level" either, but off the cuff, I'd point to higher order functions and a clean algebra of programs as indicators of higher-levelness. Maybe the assembly language remark was aimed more at Forth and less at something like Joy? I agree that HOFs and algebraic composability are high-level indicators. That moves a language like Joy up the spectrum I mentioned, but it doesn't necessarily make up for all of its problems. The Joy code sample near the beginning of this thread provides an example of low-level code, by my definition and Alan Perlis'.
By Anton van Straaten at Wed, 2005-08-10 20:04 | login or register to post comments

Not low level


Though I don't particularly like that Joy code, concatenative languages do provide both of those things in your definition of high level languages. HOFs are ubiquitous in Joy, Factor and Kont, renamed combinators. They are used more frequently than HOFs in any applicative language I've seen. As for theory, there's tons of concatenative theory. Check out the Joy and Factor websites for more details, but really, there's tons of this stuff.
By Daniel Ehrenberg at Thu, 2005-08-11 21:43 | login or register to post comments

"Low level"
I'll admit I can't give a formal definition of "low-level" off the top of my head. There's always Alan Perlis's definition: A programming language is low level when its programs require attention to the irrelevant. Although I don't think it's really completely correct to label languages high or low level. At most you can say that writing in a given language generally results in high or low level code. Anyway, by that definition, concatenative code is low-level because it forces you to write your solution in terms of a stack, despite the fact that there's no stack in your problem specification. Likewise, monads result in low-level code, because you're forced to think about category theory just to print "hello" :-)
By Dan Winship at Wed, 2005-08-10 19:30 | login or register to post comments

I agree with everything you w


I agree with everything you wrote. :)
By Anton van Straaten at Wed, 2005-08-10 20:05 | login or register to post comments

Analogy...
In the concatenative context, you can play language lawyer and claim that since the language doesn't have the notion of a local binding, nothing is being violated. However, that would only be a good argument if programmers didn't need to rely heavily on faking local bindings in concatenative languages. Sounds a lot like type errors in dynamically checked languages... But now I guess I'm the one trolling... ;)
By Matt Hellige at Wed, 2005-08-10 17:31 | login or register to post comments

The semantics of Joy


Indeed, (let ((x 33)) (foo) x) can return something other than 33; specifically, it can return ⊥ if foo does not terminate. Likewise, executing pop on an empty stack likewise produces ⊥. This interpretation depends on the view that the words of concatenative languages are functions from sequences to other sequences, which is Manfred's fundamental insight. Joy is different from other concatenative languages because it is functional: it is impossible to make any changes of state except through the file functions (the fact that read, write, etc. do not pop the file object off the stack constitutes the concatenative analogue of the I/O monad). Global words can be defined at read time, but cannot be changed at run time; thus they are analogous to function definitions rather than global variables.

By johnwcowan at Wed, 2005-08-10 20:40 | login or register to post comments
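Manfred's insight, that a word denotes a function from stacks to stacks and that concatenating words is function composition, can be modeled in a few lines of Python. A toy model, not Joy's actual machinery:

```python
# Each word maps a stack (a Python list, top at the right) to a new stack.
def dup(s):   return s + [s[-1]]                 # duplicate top
def swap(s):  return s[:-2] + [s[-1], s[-2]]     # exchange top two
def add(s):   return s[:-2] + [s[-2] + s[-1]]    # pop two, push sum

def concat(*words):
    """Concatenating words is just composing the stack functions."""
    def composed(stack):
        for w in words:
            stack = w(stack)
        return stack
    return composed

double = concat(dup, add)      # x -> x + x
print(double([3]))             # [6]

# Composition is associative, so factoring a program into named words
# cannot change its meaning:
assert concat(concat(dup), add)([3]) == double([3])
```

The associativity shown in the last line is what makes "concatenation is composition" more than a slogan: any contiguous run of words can be factored out and named without changing the program.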

The Joy of the next 700 titles


Indeed, (let ((x 33)) (foo) x) can return something other than 33; specifically, it can return ⊥ if foo does not terminate. True, but irrelevant, since this problem applies to all useful general-purpose languages, so doesn't affect a relative comparison of features between languages. Likewise, executing pop on an empty stack likewise produces ⊥. Empty stack is a special case that's not as problematic, because it can be detected and flagged when it happens. The cases that are problematic are when the stack is not empty, but is out of sync with the program's expectations. That's when the programmer's implicit abstractions are violated.
By Anton van Straaten at Wed, 2005-08-10 20:50 | login or register to post comments

Static vs. Dynamic


The cases that are problematic are when the stack is not empty, but is out of sync with the program's expectations. Is that a problem caused by concatenativityness (!), or by dynamicism? If your language allows you to push 1 or 2 things on the stack, depending on a conditional, then I'll agree it has the potential to become a mess. But it seems like you should be able to prevent getting "out-of-sync" with static analysis. Like requiring both branches of an "if" statement have the same stack effect. Am I missing something?
By Greg Buchholz at Wed, 2005-08-10 21:05 | login or register to post comments
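The check described here (both branches of an "if" must have the same stack effect) can be modeled in a few lines of Python. The effect table and inference rule are invented for illustration and are far simpler than Factor's real stack effect inference:

```python
# Every word declares a stack effect: (values consumed, values produced).
EFFECTS = {"dup": (1, 2), "drop": (1, 0), "swap": (2, 2), "+": (2, 1)}

def effect(words):
    """Net stack effect of a word sequence, as (consumed, produced)."""
    depth = low = 0
    for w in words:
        ins, outs = EFFECTS[w]
        depth -= ins
        low = min(low, depth)   # deepest reach below the starting stack
        depth += outs
    return (-low, depth - low)

def check_if(then_branch, else_branch):
    """Reject conditionals whose branches disagree on stack effect."""
    if effect(then_branch) != effect(else_branch):
        raise TypeError("branches of `if` disagree on stack effect")
    return effect(then_branch)

print(check_if(["swap", "+"], ["+"]))   # (2, 1): both branches agree
try:
    check_if(["dup", "+"], ["drop"])    # (1, 1) vs (1, 0): rejected
except TypeError as e:
    print("rejected:", e)
```

Note that tracking only the net height isn't enough; the checker also records how deep each branch reaches, since a branch that peeks further down the stack has a different interface.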

Static analysis
The Factor language apparently includes a stack effect inference tool which performs such analysis. The documentation mentions combining unit testing with stack effect inferencing to check the stack effect of procedures (words). This seems like a lot of trouble to go to to achieve a kind of safety that other languages get for free. I'm not saying that this issue alone makes concatenative languages unfit for human consumption. However, I think the negative reaction to the tractability of these languages that's been expressed by some posters in this thread is not an uncommon one, and I've attempted to identify a couple of the semantic issues which underly this reaction. In this particular case, I'm saying that even if you get past the lack of procedure-local name/value bindings, that the approach these languages take to local value abstraction is less safe and requires more care than languages with name-based local abstraction.
By Anton van Straaten at Wed, 2005-08-10 22:35 | login or register to post comments

empty stacks
Likewise, executing pop on an empty stack likewise produces ⊥

Empty stack is a special case that's not as problematic, because it can be detected and flagged when it happens. The cases that are problematic are when the stack is not empty, but is out of sync with the program's expectations. That's when the programmer's implicit abstractions are violated.

There are no problems at all about empty stacks. Factor solves the stack underflow problem by mprotect-ing (or something like that) the memory directly underneath the stack, triggering a segfault if there's underflow. Should this happen, the segfault can be caught like any other exception, and at the toplevel of the repl, it's caught automatically and printed out, also like any other exception. All this stuff about ⊥ is useless outside of research. An infinite loop has a very different outcome than stack underflow.
By Daniel Ehrenberg at Thu, 2005-08-11 17:50 | login or register to post comments

Not just low-level and unsafe, but complex


The corollary to your point about names is the presence of all the names in the Joy code which have nothing to do with the problem domain, like pop, swap, dup, and rollup. That's a classic sign of a low-level language (think of assembler). Actually I think the more important point is why you should have to use six of these (pop, swap, dup, rollup, map, ternary) in such a simple example.
By dhopwood at Thu, 2005-08-11 00:48 | login or register to post comments

The original thread


As long as we're already discussing it, we might as well mention the original thread that spawned the quadratic formula article. Also note a previous discussion on point-free programming on LtU. And some might be interested in Pointless Haskell .
By Greg Buchholz at Thu, 2005-08-11 01:45 | login or register to post comments

Effects&type system for Scheme? :-)


I guess you meant that though it's absolutely possible to have a PL in which the value of (let ((x 33)) (foo) x) is not 33 (eg, because of effects - exceptions, nontermination, continuations, etc.), such a PL is less safe than reasonable? Here is another example of non-local bugs: a badly-formed HTML post in a naive weblog system can break the whole page.
By Andris Birkmanis at Thu, 2005-08-11 17:03 | login or register to post comments

No special machinery required


I guess you meant that though it's absolutely possible to have a PL in which the value of (let ((x 33)) (foo) x) is not 33 (eg, because of effects - exceptions, nontermination, continuations, etc.), such a PL is less safe than reasonable? No, I meant that assuming normal completion of foo, one can rely on a value of 33 being returned from that expression. If foo has an exception, that's fine, and presumably happens for a reason. The case I'm pointing to as undesirable is if the expression ends up returning 42 or the string "gotcha" instead, ie the program produces an incorrect result. Here is another example of non-local bugs: a badly-formed HTML post in a naive weblog system can break the whole page. Yes. No-one in their right mind would defend such a system. ;)
By Anton van Straaten at Thu, 2005-08-11 17:17 | login or register to post comments

there's nothing to stop you (


there's nothing to stop you (or rather, a language implementer) from labelling items on the stack. in otuto you'd have something like

{ quadratic | abc } => { roots ~ b sqrt * 4 * ac * bb * 2 a }
where
{ roots | xyz } => { / + xyz / - xyz }

which seems clear enough, apart from the RPN.

disclaimer - i can't remember the exact details of otuto v1, and v2 is stalled until i gain enough patience to return to the uml tool (i'm trying to implement it in c++ for various (vaguely financial) reasons).
By andrew cooke at Wed, 2005-08-10 16:50 | login or register to post comments

it can't be done. scopes are not concatenative


Once you add names then the code is no longer concatenative. For example if you were to add names for items on the stack as an extension of the Joy language, then each quotation would need a pointer to its lexical scope. You can no longer compose functions by concatenating lists of words because the two quotations may come from different lexical environments. [edit: otuto is very interesting, by the way. I hadn't realized what language your example was from.]
By James McCartney at Wed, 2005-08-10 19:08 | login or register to post comments

Scopes are concatenative


If quotations were first-class types in Joy, rather than lists of words, they could hold a pointer to their lexical context. Appending two quotations could be implemented with a 'cord' data structure, that simply holds references to both quotations, and when evaluated, iterates over the first one, followed by the second one, in each case using the correct lexical context. Concatenative languages (and Lisp, where lambdas are first-class too) are not all about cons cells and lists.
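To make the idea concrete, here is a minimal sketch in Python (the `Quot`/`Cord` names and the environment representation are my assumptions for illustration, not Joy or Factor internals): each quotation carries a pointer to its own environment, and appending builds a cord that iterates over both halves, each with the correct lexical context.

```python
# Sketch of a 'cord': appending quotations without splicing token lists.
# Each quotation keeps its own lexical environment.

class Quot:
    def __init__(self, body, env):
        self.body, self.env = body, env
    def walk(self):
        # yield each token paired with this quotation's own environment
        for tok in self.body:
            yield tok, self.env

class Cord:
    """Append of two quotations: evaluation iterates over the first,
    then the second, each with its own lexical context."""
    def __init__(self, left, right):
        self.left, self.right = left, right
    def walk(self):
        yield from self.left.walk()
        yield from self.right.walk()

def run(quot, stack):
    for tok, env in quot.walk():
        if tok == '+':
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif tok in env:
            stack.append(env[tok])   # name resolved in the right context
        else:
            stack.append(tok)        # literal

# [x] where x=1, appended to [x +] where x=2: each x keeps its binding.
q = Cord(Quot(['x'], {'x': 1}), Quot(['x', '+'], {'x': 2}))
s = []
run(q, s)
print(s)   # [3]
```

The point of the cord is that appending is cheap and scope-preserving, at the cost of quotations no longer being plain lists of words.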
By slava at Wed, 2005-08-10 20:01 | login or register to post comments

concrete example
OK, perhaps you can have hidden lexical scope pointers and use a cord. But this seems to create a pretty confusing language. For a simple example, suppose we have a Joy-like language plus an operator 'let' that takes a value, a quoted symbol, and another quotation which is evaluated in an environment where that symbol is bound to the value:

2 [x] [x 1 +] let

so that the above returns 3. Now:

1 [x] [[x]] let
2 [x] [[x +]] let
concat

this returns the concatenated function:

[x x +]

however each x is bound to a different value.

[x x +] i

returns 3
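The breakage is easy to reproduce in a toy interpreter. In this Python sketch (the `run`/`let_` names and the token encoding are mine, purely illustrative), 'let' substitutes the name eagerly, which works per quotation; but the concatenated body [x x +] has lost the information that the two x's came from different scopes.

```python
# Toy stack interpreter: quotations are Python lists of tokens.

def run(quot, stack):
    for tok in quot:
        if tok == '+':
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        else:
            stack.append(tok)        # literals push themselves

def let_(value, name, quot):
    """Naive 'let': substitute `name` by `value` throughout `quot`."""
    return [let_(value, name, t) if isinstance(t, list)
            else (value if t == name else t)
            for t in quot]

# 2 [x] [x 1 +] let  =>  3
s = []
run(let_(2, 'x', ['x', 1, '+']), s)
print(s)                             # [3]

# Substituting before concatenating still works: [1] + [2 +] = [1 2 +]
q1 = let_(1, 'x', ['x'])             # [1]
q2 = let_(2, 'x', ['x', '+'])        # [2, '+']
s = []
run(q1 + q2, s)
print(s)                             # [3]

# But concatenating the raw bodies first gives ['x', 'x', '+'], and no
# single binding for 'x' can recover the intended answer - exactly the
# point that named scopes break naive concatenation.
```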
By James McCartney at Thu, 2005-08-11 19:13 | login or register to post comments

there is no lexical scope dur


there is no lexical scope during evaluation. that syntax is a very simple "template" for rewriting the stack (actually, stacks). so you'd define dup as:

{ dup | x } => { | xx }

there's some syntactic sugar involving implicit quoting that means you can write that as

{ dup | x } => { xx }

but that's beside the point. all that the otuto definitions describe is how you rearrange the stack(s). you could implement them by cross-compiling to an intermediate representation without variable names (using swap, roll etc etc) (i believe - i've never bothered to prove it, but i would be amazed if it were not true). i'm not 100% convinced i understand what concatenative is, so i can't say otuto is concatenative, but i'm pretty sure that those variable names are just sugar. a long time back, when i mentioned this on the joy email list, someone added something similar to another language there.
By andrew cooke at Fri, 2005-08-12 22:01 | login or register to post comments

otuto
As I understand it, if a language is concatenative then functions are sequences of operators where you can take any function and split it at any point and you have two valid functions, or you can join two functions and have a valid function. I'm not sure what it would mean to split or join functions in otuto, since they are rules with 4 separate sections, not simple sequences of operations.
By James McCartney at Sat, 2005-08-13 16:32 | login or register to post comments

well, the rules are restricte


well, the rules are restricted - they always match a single value on the left hand stack. so the top value on that stack can be interpreted as the current instruction. if you then take the right hand stack as "values" then you can think of a rule as saying how to apply the instruction (top of the left stack) to the values.

ok, now if you continue with that, a definition like:

{ foo | a } => { bar baz | aa }

is equivalent to defining foo as bar baz dup in a concatenative language. now if you accept, as i argued above, that the variables are just syntactic sugar, then that implies, i think, that there is a simple mapping from otuto to concatenative languages. (this breaks down with meta-programming, though, since you can change the assumptions made above.)
By andrew cooke at Sun, 2005-08-14 01:43 | login or register to post comments

concatenative is ..
A language in which the concatenation of programs denotes the composition of functions has been called a concatenative language. I first heard this terminology from Billy Tanksley. -- Manfred von Thun

otuto is concatenative. so is XY. Here's one version of quadratic in that language, which uses names to similar effect (that is, to map items from the stack to multiple locations in the body of the code):

; quadratic { [abc] b -: ac * 4 * b 2 ^ -. .5 ^ [1 -1] * + a 2 * % } ;

regarding your proposal for cross-compilation, i think it would be sufficient to compile programs like quadratic, in which names are used to map items from the stack to multiple locations, to programs which consist of an initial reordering of items on the stack, followed by code which is name-free. Billy Tanksley has suggested a notation (supported in XY) in which an initial "shuffle-pattern" is followed by name-free code. For example, compile

{ [abc] ab * ac * + }

to

abc--abac * [*] dip +

[edited to show an example of compilation from pattern-notation to shuffle-notation]
By sa at Sat, 2005-08-13 23:37 | login or register to post comments

Monads as concatenative PLs?


If we squint our eyes enough to see bind as concatenation, then how are monadic expressions not concatenative? Say, in imperative PLs one can concatenate statements (usually using ';'), and quite often "imperative" means working in some compound monad. Well, I might understand "composition of functions" a bit wrongly...
By Andris Birkmanis at Mon, 2005-08-15 06:03 | login or register to post comments

Squinting
I don't think squinting is necessary at all. In fact, lately I've been thinking that there might be a benefit for newbies in writing a monad tutorial using a language like Joy. Subroutines (quotations) in Joy are lists of functions, and you interpret them by folding in a 'compose' and applying to a stack...

fold compose id [1 4 dup * +]

Replace 'compose' with 'bind' for your favorite monad, and I think you might be on your way to removing much of the mystery behind monads (or at least you've given them an opportunity to see monads that aren't tied up in Haskell syntax and type-classes).
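Greg's recipe can be sketched in a few lines of Python (the word set, `lit` helper, and `safe_div` are my assumptions for illustration): interpret [1 4 dup * +] by folding compose over stack-to-stack functions, then swap compose for a Maybe-style bind and failure propagation falls out for free.

```python
from functools import reduce

# Each word denotes a function from stack to stack.
words = {
    'dup': lambda s: s + [s[-1]],
    '*':   lambda s: s[:-2] + [s[-2] * s[-1]],
    '+':   lambda s: s[:-2] + [s[-2] + s[-1]],
}
def lit(n):            # literals push themselves
    return lambda s: s + [n]

def compose(f, g):     # plain left-to-right function composition
    return lambda s: g(f(s))

# fold compose id [1 4 dup * +]
prog = reduce(compose, [lit(1), lit(4), words['dup'], words['*'], words['+']])
print(prog([]))        # [17]

# Replace 'compose' with 'bind' for a Maybe-like monad: words may now
# return None, and bind short-circuits the rest of the program.
def bind(f, g):
    return lambda s: None if (t := f(s)) is None else g(t)

def safe_div(s):       # fails (returns None) on division by zero
    return None if s[-1] == 0 else s[:-2] + [s[-2] / s[-1]]

print(reduce(bind, [lit(1), lit(0), safe_div])([]))   # None
print(reduce(bind, [lit(8), lit(2), safe_div])([]))   # [4.0]
```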
By Greg Buchholz at Mon, 2005-08-15 14:25 | login or register to post comments

They recently made much more


They recently made much more sense to me when I combined the two pieces of information that monads are (1) an algebraic structure that (2) forces a particular evaluation order. None of the tutorials on the web I have read (even, I think, some of Simon Peyton-Jones's material) really mention both of those facts and then illustrate them with familiar examples like mapping over lists, or factorial, whose common recursive definitions already have a defined evaluation order (ignoring associativity).
By Marcin Tustin at Mon, 2005-08-15 22:57 | login or register to post comments

Looks applicative to me
I don't know whether otuto as a whole is applicative or a hybrid, but the above code is definitely applicative. (RPN vs prefix vs infix is a separate, and much less significant, issue.)
By dhopwood at Thu, 2005-08-11 01:48 | login or register to post comments

The code is not applicative,


The code is not applicative; it's just reversed RPN (that is, plain Polish notation), and it uses named variables. There is still a stack.
By Daniel Ehrenberg at Thu, 2005-08-11 16:14 | login or register to post comments

Re: A comment about concatenative languages in general


I consider concatenative languages suitable for high-level programming; in fact I am the lead developer of the Factor programming language. As always, the burden of bridging the gap between the programmer and the language lies squarely on the development tools at one's disposal, and the simplicity of concatenative languages allows one to develop such tools more easily than with other languages.

A meta-circular interpreter is a page of code in Factor, and it is not an academic exercise; it is used to implement a single-stepper. I'd like to see a single-stepper in another language that is equally clear. There is a static stack effect inferencer tool, also used by the compiler. The compiler takes maximal advantage of the simple semantics to implement the concatenative equivalent of "lambda lifting", and various other optimizations such as partial evaluation.

I'm not sure what you mean by "How can you build reusable encapsulated abstractions?". If you look at Factor's library, you will see a very high degree of abstraction and code reuse. There is no "boilerplate" code. Re: "writing a Turing machine program"; after working with Factor's libraries for sequences and mathematics, going back to a language like Java with annoyingly hand-coded boilerplate loops seems more like a Turing machine.

Your emphasis on the word "seem" suggests that you don't believe that concatenative languages are used to develop non-trivial applications. Well, notwithstanding the clear fact that there is enough commercial demand to keep several Forth vendors in business, Factor has an HTTP server with a continuation-based web framework, a GUI toolkit-in-progress, various development tools, and more. This is a richer level of functionality than offered by most languages discussed on this web site, so ironically, one could even conclude that applicative languages are totally non-productive, and not even capable of implementing their own optimizing native compilers.
By slava at Wed, 2005-08-10 06:30 | login or register to post comments

Re: A comment about concatenative languages in general


... the simplicity of concatenative languages allows one to develop such tools more easily than with other languages.

While it's true that there are few complex concatenative languages, there are plenty of equally simple applicative ones. I'm not particularly interested in comparing, say, Factor with Java; I'm interested in the effects of concatenative vs. applicative semantics for otherwise comparable languages.

[...] an HTTP server with a continuation-based web framework, a GUI toolkit-in-progress, various development tools, and more [...] is a richer level of functionality than offered by most languages discussed on this web site

Even though this web site disproportionately discusses research languages and prototype language implementations, still I think that assertion is very likely false. And also beside the point, since the maturity of language implementations is not the topic of discussion.

... one could even conclude that applicative languages are totally non-productive, and not even capable of implementing their own optimizing native compilers.

This is just trolling.
By dhopwood at Wed, 2005-08-10 14:31 | login or register to post comments

Irrelevant

Even though this web site disproportionately discusses research languages and prototype language implementations, still I think that assertion is very likely false. And also beside the point, since the maturity of language implementations is not the topic of discussion.

"Research language" is a euphemism for "useless language". I'm really only interested in practical concerns. I write code, and I like to write stable code quickly, and I find that concatenative languages let me do this just as easily as applicative ones, with the additional advantage that writing tools is easier. You can discuss semantics, static type systems, weakly final coalgebras and other academic concerns until you are blue in the face, but it will not change the fact that for me, thinking in terms of a stack is not a problem.
By slava at Wed, 2005-08-10 16:08 | login or register to post comments

For me...
You can discuss semantics, static type systems, weakly final coalgebras and other academic concerns until you are blue in the face, but it will not change the fact that for me, thinking in terms of a stack is not a problem.

Perhaps. However, the next person who has to deal with your code may indeed have a problem. I could truthfully say "for me, dealing with unrestricted pointers in C is not a problem", but the broader consequences of everyone holding that opinion are not pretty.
By Anton van Straaten at Wed, 2005-08-10 16:13 | login or register to post comments

Research is a Good Thing


"Research language" is a euphemism for "useless language". And where do you think the ideas for features in 'mainstream' languages come from?
By dhopwood at Thu, 2005-08-11 02:07 | login or register to post comments

things Factor can do


While it's true that there are few complex concatenative languages, there are plenty of equally simple applicative ones. I'm not particularly interested in comparing, say, Factor with Java; I'm interested in the effects of concatenative vs. applicative semantics for otherwise comparable languages.

The point is, you can't do a lot of the stuff Factor does in an applicative language. I can't imagine Factor being applicative. The closest parallel I can find is Common Lisp, but there are many things about Factor that make it much better. One example that comes to mind is the word watch. The implementation isn't more than 3 or 4 lines, and what it does is print out the name of the word and the contents of the datastack when the word is called. It's been really useful for me and I've been able to get rid of all the debugging prints in my code. One time, I needed to see what was on the stack when leaving a word, so I made a new watch-after word. It also only took 4 lines and not much thought. In Common Lisp, this would probably take the form of a new macro for defining a function or macro, but that technique is less flexible. In Haskell or ML, making a debugging tool like this is even harder, involving complicated templating libraries.

Even though this web site disproportionately discusses research languages and prototype language implementations, still I think that assertion is very likely false. And also beside the point, since the maturity of language implementations is not the topic of discussion.

I, unlike Slava, am OK with research languages. They're good for research. As loath as Slava is to admit it, Factor isn't really suitable for most applications yet. Of course, it is intended for use, but Factor's libraries aren't mature enough, especially crucial ones like the GUI library. Factor has a pretty good FFI, but that's not because it's concatenative. We really shouldn't discuss this, though; you're right.

This is just trolling. [about applicative languages being nonproductive]

Yes.


By Daniel Ehrenberg at Wed, 2005-08-10 19:38 | login or register to post comments

Fun with stacks


One example that comes to mind is the word watch. The implementation isn't more than 3 or 4 lines, and what it does is it prints out the name of the word and the contents of the datastack when the word is called. Obviously, anything which involves dealing directly with the stack will be easiest in a language which gives you direct access to the stack. The question is how many things fall into this category, and whether they're worth the consequences of allowing direct manipulation of the stack, and in what situations. For example, Forth-like languages are often used in constrained embedded environments, in which case some of these concerns may be balanced by other advantages of these languages. BTW, Common Lisp and many Schemes have something like a "trace" macro that provides similar functionality to the word watch, although the implementation of that is more than 3 or 4 lines. They usually print the incoming arguments and the return value, ie both the entry and exit from a traced function.
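For comparison's sake, in an applicative language with higher-order functions this facility is also only a few lines; a sketch as a Python decorator (the names here are mine, and this is not CL's actual trace machinery):

```python
import functools

def trace(fn):
    """Print arguments on entry and the result on exit, like a Lisp
    'trace' (Factor's watch prints the datastack instead, since the
    arguments live there)."""
    @functools.wraps(fn)
    def wrapper(*args):
        print(f"-> {fn.__name__}{args}")
        result = fn(*args)
        print(f"<- {fn.__name__} = {result!r}")
        return result
    return wrapper

@trace
def square(x):
    return x * x

square(7)   # prints the entry and exit, returns 49
```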
By Anton van Straaten at Wed, 2005-08-10 22:48 | login or register to post comments

Re: Fun with stacks


The question is how many things fall into this category, and whether they're worth the consequences of allowing direct manipulation of the stack, and in what situations.

That's actually a good point, but let me turn the question around: how many things fall into the category where you need lexically named variables to do certain things? I used to think many, but you can really do everything without them (though in a few circumstances, dynamically scoped variables help). The concatenative paradigm is not just a way to manipulate the stack directly; it's a whole different paradigm of programming where some things are more elegant and others less. After a while, it tends to feel like most things are more elegant, but from your perspective, everything will look awkward at first. For an example of something which comes out really great, here's the heart of the Factor compiler:

M: compound (compile) compiling dataflow optimize linearize simplify generate ;

Yes, I know that's a terrible argument, but I don't care. It's a reason I use Factor anyway. At this point, rather than feeling that concatenative languages are backwards, I feel like they're in the exact right order: they reflect the order of evaluation, however important or unimportant that is.

For example, Forth-like languages are often used in constrained embedded environments, in which case some of these concerns may be balanced by other advantages of these languages.

Well, Factor isn't really aimed at constrained embedded environments; it's more for general programming, which it can do better because of its abstraction capabilities, some of which rest on its being concatenative.

BTW, Common Lisp and many Schemes have something like a "trace" macro that provides similar functionality to the word watch, although the implementation of that is more than 3 or 4 lines. They usually print the incoming arguments and the return value, ie both the entry and exit from a traced function. Not only does it take more code to do such a simple task but you have to alter the original code to do it. In Factor, at the repl, you can just enter one short line of code to watch a word ( \ wordname watch ). One thing about Factor that's great but unrelated to its concatenativity is that you can mutate stuff like word bodies, though you don't normally do this outside of debugging. There's probably an applicative language that does that.
By Daniel Ehrenberg at Thu, 2005-08-11 02:24 | login or register to post comments

Fun with folds


how many things fall into the category where you need lexically named variables to do certain things?

"Need"? None. However, the number of situations in which having lexically named variables is useful is high, IME.

I used to think many, but you can really do everything without them

Similarly, that you "can" do everything without them is not what I'm concerned with. Turing completeness tells me it's possible; the more important questions are how practical it is, how easy it is to deal with code written by other people, etc.

The concatenative paradigm is not just a way to manipulate the stack directly, it's a whole different paradigm of programming where some things are more elegant and others less. After a while, it tends to feel like most things are more elegant, but from your perspective, everything will look awkward at first.

I implemented and maintained Forth programs for a couple of years. It was fun, but part of the reason I used Forth was that it was in a small, embedded-like control environment, where the alternatives were languages like C and assembler. The points I've raised are not about things being backwards or seeming awkward.

For an example of something which comes out really great, here's the heart of the Factor compiler: M: compound (compile) compiling dataflow optimize linearize simplify generate ;

I agree, it's great when something works out in a way that seems intuitive. However, my experience is that different things can seem intuitive for different problems. If I wanted to write an expression like the above, I could do it quite easily in many higher-order functional languages, using a fold over a list of procedures, which could easily be made to look something like this, eg in Scheme:

(concatenate input compiling dataflow optimize linearize simplify generate)

Shriram Krishnamurthi's talk The Swine Before Perl provides a nice example of how powerful abstraction capabilities can help make code understandable. It'd be interesting to see a Factor (or Kont?) version of that program, for comparison.

Well, Factor isn't really that into constrained embedded environments, it's more for general programming, which can be done better because of its abstraction capabilities, some of which rest on its being concatenative.

Since stacks are irrelevant to most problems, unless you can abstract their use away, the need to use one seems to me like a hit against a language's abstraction capabilities.

Not only does it take more code to do such a simple task but you have to alter the original code to do it. In Factor, at the repl, you can just enter one short line of code to watch a word (\ wordname watch).

The Lisp/Scheme equivalent, at the repl, is just one short line of code: (trace proc-name). No alteration of the original code is required.
By Anton van Straaten at Thu, 2005-08-11 04:04 | login or register to post comments

Concatenate
If I wanted to write an expression like the above, I could do it quite easily in many higher-order functional languages, using a fold over a list of procedures, which could easily be made to look something like, eg in Scheme: (concatenate input compiling dataflow optimize linearize simplify generate)

How is this any different from embedding an infix language in Factor?

The Lisp/Scheme equivalent, at the repl, is just one short line of code, (trace proc-name). No alteration of the original code is required.

But the implementation of Common Lisp's trace is more complicated than the Factor equivalent.
By slava at Thu, 2005-08-11 04:45 | login or register to post comments

I don't care enought to make a good title


"Need"? None. However, the number of situations in which having lexically named variables is useful is high, IME.

Please provide me with a situation where the solution with variables is much better. I know one, math, but that situation seems to be isolated, so I made a minilanguage (which I'm currently redoing) to use infix syntax for math. Do you think this is a more general problem than that? I'm seriously interested in this.

(concatenate input compiling dataflow optimize linearize simplify generate) I don't think that code would work since the stack effects of many of those words are not ( a -- b ). The only way to make it work would be to have each of those functions take a stack and return a stack, but that's just like implementing a lightweight concatenative language on top of Scheme. Incidentally, I believe someone has done that with Common Lisp...

Since stacks are irrelevant to most problems, unless you can abstract their use away, the need to use one seems to me like a hit against a language's abstraction capabilities.

You're right, I guess; there are only a few cases where the stack really matters. It matters like lexically scoped variables matter in normal code, which really is irrelevant to abstraction. But then there are properties of Kont and Factor (but not Forth) that lend them to more abstraction, like the use of combinators to replace macros in languages like Lisp and built-in syntax like C. Combinators to the programmer look like ordinary code that just happens to do something with a quotation, and they do function when run like that by the interpreter. But then the compiler inlines them to make them as fast as a macro. I'm not explaining them very well, though.

The Lisp/Scheme equivalent, at the repl, is just one short line of code, (trace proc-name). No alteration of the original code is required.

Really? Oh, OK, then never mind that point. I don't have much experience with Lisp development tools.
By Daniel Ehrenberg at Thu, 2005-08-11 16:07 | login or register to post comments

This is not a title


Please provide me with a situation where the solution with variables is much better. I know one, math, but that situation seems to be isolated, so I made a minilanguage (which I'm currently redoing) to use infix syntax for math. Do you think this is a more general problem than that? I'm seriously interested in this.

Yes, it's a much more general problem. See my other, longer response, entitled "reference sense pop".

I don't think that code would work since the stack effects of many of those words are not ( a -- b ). The only way to make it work would be to have each of those functions take a stack and return a stack, but that's just like implementing a lightweight concatenative language on top of Scheme. Incidentally, I believe someone has done that with Common Lisp...

Uh-oh, you're showing signs of being infected by stack-oriented thinking, which is one of the problems that can be caused by using a language which forces you to think in terms of its implementation details all the time. It doesn't require that "each of those functions take a stack and return a stack". For a start, procedures in most applicative languages can receive multiple values using ordinary named parameters, and multiple values can typically be returned in various ways, such as lists, tuples, or other kinds of collection. A Scheme version which uses lists applied to ordinary procedures with named arguments is as simple as this:

(define (concatenate input-args . procs) (fold apply input-args procs))

Since stacks are irrelevant to most problems, unless you can abstract their use away, the need to use one seems to me like a hit against a language's abstraction capabilities.

You're right, I guess; there are only a few cases where the stack really matters. It matters like lexically scoped variables matter in normal code, which really is irrelevant to abstraction.

Named variables are certainly not irrelevant to abstraction. Again, see my other reply.

But then there are properties of Kont and Factor (but not Forth) that lend them to more abstraction, like the use of combinators to replace macros in languages like Lisp and built-in syntax like C. Combinators to the programmer look like ordinary code that just happens to do something with a quotation, and they do function when run like that by the interpreter. But then the compiler inlines them to make them as fast as a macro. I'm not explaining them very well, though.

I agree, those kinds of high-level features (borrowed from functional programming! ;) are useful, and certainly help. However, they work just as well in the presence of lexical names. (There's a difference in the way inlining tends to work, but the end result is similar.)
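Anton's one-liner translates almost verbatim; a Python sketch, with made-up stage functions standing in for the compiler passes (tuples play the role of Scheme's multiple values):

```python
from functools import reduce

def concatenate(input_args, *procs):
    """fold apply: each procedure receives the previous one's results
    unpacked as ordinary named parameters."""
    return reduce(lambda args, proc: proc(*args), procs, input_args)

# Hypothetical stages, just to show multiple values flowing through:
def parse(src):          return (src.split(), len(src))
def annotate(toks, n):   return (toks, n, 'annotated')
def emit(toks, n, tag):  return ' '.join(toks) + f' ({n} chars, {tag})'

print(concatenate(('a b c',), parse, annotate, emit))
# a b c (5 chars, annotated)
```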
By Anton van Straaten at Fri, 2005-08-12 04:55 | login or register to post comments

Infected
Uh-oh, you're showing signs of being infected by stack-oriented thinking,

What a great way to refute somebody's point! This is just like putting your hands over your ears, and saying "neener, neener, I'm not listening". Perhaps you've been infected by Scheme-oriented thinking?
By slava at Fri, 2005-08-12 18:15 | login or register to post comments

Don't stop reading yet...


The sentence after the one you quoted contains the refutation. Regarding the "infected" comment, I hope LittleDan didn't take that too personally, it was certainly intended lightheartedly. I should have put a smiley after it. However, it's also a serious point. When you find yourself saying things like "The only way to make it work would be to have each of those functions take a stack and return a stack", talking about such a specific structure as a stack when the problem is in fact a much more general one, that's a clear sign that you're thinking too much in terms of implementation and too little in terms of specification. It's an easy mistake to make, and I suspect we all do it to some degree from time to time. However, there's an additional risk in using a language which forces you to think in terms of a single datastructure everywhere, which is that you start conceiving of all problems in terms of that datastructure. I'd be especially concerned about exposing inexperienced programmers to languages that require this kind of thinking, because you'd be ingraining bad habits in them from the very beginning.
By Anton van Straaten at Fri, 2005-08-12 19:02 | login or register to post comments

The Swine Before Perl


Shriram Krishnamurthi's talk The Swine Before Perl provides a nice example of how powerful abstraction capabilities can help make code understandable. It'd be interesting to see a Factor (or Kont?) version of that program, for comparison.

I started to implement that Swine Before Perl program, but after doing the non-macro part, which came out a bit cleaner than the Scheme program IMHO, I really felt that this was enough. What benefit do you derive from making that a macro? In Factor, syntax extension isn't as common as in Scheme because it's not needed as much, and in Kont, there is no syntax extension. If you're making that macro for a speed boost, that's just showing that you have a bad implementation of the language. I still made a separate syntax for defining the state machine, though. The code is below.

IN: state-machine
USING: kernel hashtables sequences lists ;
SYMBOL: init
SYMBOL: loop
SYMBOL: end

: c[ad]+r
    #! Initial datastructure
    {{ [[ init {{ [[ CHAR: c loop ]] }} ]]
       [[ loop {{ [[ CHAR: a loop ]] [[ CHAR: d loop ]] [[ CHAR: r end ]] }} ]]
       [[ end {{ [[ CHAR: r end ]] }} ]] }} ;

: walk-states ( seq states -- ? )
    init swap rot [ >r ?hash r> swap ?hash ] each-with end = ;

: {| POSTPONE: {{
: :: POSTPONE: {|
: -> ; parsing
: || POSTPONE: ]]
: ;; POSTPONE: ]]
: |} POSTPONE: ]] swons ; parsing
POSTPONE: [[ ; parsing ; parsing POSTPONE: [[ ; parsing POSTPONE: }} POSTPONE: ]] POSTPONE: [[ ; parsing POSTPONE: }} POSTPONE: ]] POSTPONE: }} \ walk-states

: final-machine {| init :: CHAR: c -> loop ;; loop :: CHAR: a -> loop || CHAR: d -> loop || CHAR: r -> end ;; end :: CHAR: r -> end |} ; Is there something wrong with that code? It seems to be a bit shorter than the Scheme code. Edit: as Slava correctly pointed out on the #concatenative channel, those cosmetic macros are completely stupid and the first version is really the one that should preferably be used.
By Daniel Ehrenberg at Thu, 2005-08-11 19:43 | login or register to post comments

btw
here's how to activate it:

"cadar" final-machine

or

"cadar" c[ad]+r walk-states

By Daniel Ehrenberg at Thu, 2005-08-11 20:18 | login or register to post comments

too much code


dan, that's way too much code. so is krishnamurthi's scheme implementation.

a state machine is a matrix. rows are states and columns are inputs. in this case, the machine has four rows corresponding to the four states: error, init, more, end; and 256 columns, corresponding to possible ascii inputs. if the machine is in state i and sees input j, then it should enter state k. using two dimensional scalar indexing, k = machine[i;j].

the algorithm should consume inputs and index out the next state, leaving the sequence of states visited as its result (a vector of indices). repeated indexing in k is implemented as:

result:index matrix\indices

that is,

k0:matrix[index;indices 0]
k1:matrix[k0;indices 1]
k2:matrix[k1;indices 2]
:

in k, the algorithm is realized as:

s:@'[4 256#0;_ic("";"c";"adr";"r");:;(!0;2;2 2 3;3)]
m:1 s\_ic

for example,

m'("car";"cdr";"cadr";"cddr";"cdar";"caddr")
(1 2 2 3
 1 2 2 3
 1 2 2 2 3
 1 2 2 2 3
 1 2 2 2 3
 1 2 2 2 2 3)

a 0 in the return vector indicates that m has entered the error state. on my pentium 4, 3 ghz machine, timings are:

10k elements: 0 ms
100k elements: 15 ms
1m elements: 187 ms

state-machine is just a special case of index-scan, which in turn is a special case of scanning a function on its input. if you understand +, and you understand summation as repeated application of +, and you understand indexing, then it's a small step to knowing how to code up a state machine.
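The same recipe is a handful of lines in any language with arrays; here is a Python sketch of the c[ad]+r machine as a 4x256 transition matrix (state numbering 0=error, 1=init, 2=more, 3=end, following the K version; unlike the K scan, the initial state is omitted from the returned trace):

```python
# Transition matrix: M[state][ord(ch)] = next state, 0 is the error sink.
M = [[0] * 256 for _ in range(4)]
M[1][ord('c')] = 2                   # init --c--> more
for ch in 'ad':
    M[2][ord(ch)] = 2                # more --a,d--> more
M[2][ord('r')] = 3                   # more --r--> end
M[3][ord('r')] = 3                   # end  --r--> end

def states(word, state=1):
    """Repeated 2-d indexing: thread the state through the inputs and
    return the sequence of states visited."""
    trace = []
    for ch in word:
        state = M[state][ord(ch)]
        trace.append(state)
    return trace

print(states('car'))    # [2, 2, 3]
print(states('cadr'))   # [2, 2, 2, 3]
print(states('cxr'))    # [2, 0, 0]  (0 = error state)
```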
By sa at Fri, 2005-08-12 15:52 | login or register to post comments

a state machine is a matrix?


a state machine is a matrix ?
By Michael Walter at Fri, 2005-08-12 18:19 | login or register to post comments

FSMs are graphs, which have m


FSMs are graphs, which have matrix representations. Sloppy language, but hey.
By Philippa Cowderoy at Sat, 2005-08-13 11:29 | login or register to post comments

but what are they really?


hey indeed. my point was that recursive 2d indexing captures the behavior of fsms, no matter what they "really" are, if, indeed, they are anything at all beyond how they are represented in different programming languages. no need to follow that long and winding road into the metaphysics of abstract objects. obligatory reading: paul benacerraf, "what numbers could not be."
By sa at Sat, 2005-08-13 22:30 | login or register to post comments

Repent, sinners

#include <stdio.h>
int main(int argc, char **argv)
{
    char S[4][256] = {{0}, {['c']=2}, {['a']=2,['d']=2,['r']=3}, {['r']=3}}, s = 1;
    if (argc > 1)
        for (char *p = argv[1]; *p; p++) s = S[(int)s][(unsigned char)*p];
    return s != 3; /* exit status 0 iff the input is accepted */
}

Simplicity > Abstraction.


By Mackenzie Straight at Fri, 2005-08-12 17:16 | login or register to post comments

Understandable? Flexible?
That's great guys; unfortunately, it seems to miss the point fairly completely. Recall, Anton's pointer to the talk said (emphasis mine): Shriram Krishnamurthi's talk The Swine Before Perl provides a nice example of how powerful abstraction capabilities can help make code understandable. Neither the C, K, nor Factor code is quite as understandable as that presented in slide 38 of Shriram's talk. What's more (and probably more important) is that neither is as flexible as what he presents either. What he's shown [1] is a simple syntax extension for generating (fairly efficient) automata from these simple, clear descriptions via macros. Given the spec for another automaton, your solutions would have you hard-coding your new state tables, while Shriram's would have you quickly done with the code and on to some other problem.

[1] At least that's what I believe he's shown... I'm only working with the slides, and I know just enough Scheme to get myself into trouble -- feel free to taunt me if I'm wrong. One of these days I'll have to get over the Lisp allergy I picked up when I was a wee lad and really give Scheme a good look.

-30 By William D. Neumann at Fri, 2005-08-12 17:50 | login or register to post comments

Flexible?
The Factor code that Dan showed is equivalent to the Scheme code. I'd like you to explain why you think that this:

(init : (c -> more))
(more : (a -> more) (d -> more) (r -> end))
(end : (r -> end))

is any more flexible than this:

: c[ad]+r
    {{ [[ init {{ [[ CHAR: c loop ]] }} ]]
       [[ loop {{ [[ CHAR: a loop ]] [[ CHAR: d loop ]] [[ CHAR: r end ]] }} ]]
       [[ end  {{ [[ CHAR: r end ]] }} ]] }} ;

The Factor version has a bit more punctuation, because Dan did not extend the syntax; he just used Factor's hashtable literals. Also, he uses character literals instead of symbols. And for what it's worth, I consider the omission of hashtables from the Scheme standard to be an unacceptable limitation. I don't like the way typical Scheme code is written, using lists (including alists) and symbols for everything. It might look pretty, but it just doesn't scale as well as other data structures for most tasks.
By slava at Fri, 2005-08-12 18:00 | login or register to post comments

Faster than I am...


I was just about to leave a note that the factor version may be just as flexible, because I am completely unfamiliar with it, and I couldn't figure out exactly what was going on. I more or less figured out what he was doing after I posted the message. I stand corrected there... -30 By William D. Neumann at Fri, 2005-08-12 18:08 | login or register to post comments

More confusion?
Unless I'm confused (which happens on a regular basis), I believe you're wrong. The Factor equivalent of this (from pg. 38)...

(automaton init
  (init : (c -> more))
  (more : (a -> more) (d -> more) (r -> end))
  (end : (r -> end)))

...looks like...

: final-machine
    {| init :: CHAR: c -> loop
    ;; loop :: CHAR: a -> loop || CHAR: d -> loop || CHAR: r -> end
    ;; end  :: CHAR: r -> end |} ;

...not a lot of difference to my eyes.
By Greg Buchholz at Fri, 2005-08-12 18:00 | login or register to post comments

understandable. flexible.
i don't think i missed either point. for a machine with n states, use an n x 256 matrix m. for each (state : input -> nextstate), assign m[state;input]:nextstate. to run on input i:

r:1 m\i

setting up a particular machine can be done line-by-line, as in the scheme example, or it can be done all at once, as my example shows. no macros, no special syntax. if state-machines are matrices, and recognition is recursive two-dimensional indexing, then implement it as such, as directly as possible.
By sa at Fri, 2005-08-12 18:55 | login or register to post comments

Simplicity
This particular task does not even call for a state machine at all:

: swine ( sequence -- ? )
    "c" ?head >r "r" ?tail r> and
    [ "ad" contained? ] [ drop f ] ifte ;
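In the same spirit, here is a sketch in Python rather than Factor (the name `swine` is just borrowed from the word above): the whole c[ad]+r check is a one-line regular expression.

```python
import re

def swine(s):
    # True exactly for words of the form c[ad]+r: a 'c', one or more
    # 'a'/'d' characters, then a final 'r'.
    return re.fullmatch(r'c[ad]+r', s) is not None
```

Note this follows the c[ad]+r spec literally, so, unlike the automaton given earlier in the thread, it rejects "cr" (which has no 'a'/'d' in the middle).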

By slava at Fri, 2005-08-12 17:52 | login or register to post comments

... one could even conclude t


... one could even conclude that applicative languages are totally non-productive, and not even capable of implementing their own optimizing native compilers.

This is just trolling. It's hyperbole to show up the fact that there are different expressivity/productivity/comprehensibility wins with concatenative and applicative languages. Let's not start pointing fingers and crying "troll" at everyone using a modicum of rhetoric, even if this is a phenomenon I might have contributed to.

However, I suspect that the difficulty of using a form of functional programming without named parameters is a significant problem for many people. I certainly find it difficult enough, because it taxes my memory (despite my ability to remember API docs), that I have been discouraged from exploring concatenative programming. It might be interesting to know if there are any patterns that can be used to fake named parameters while maintaining concatenability.
By Marcin Tustin at Thu, 2005-08-11 14:26 | login or register to post comments

Let's not start pointing fing


Let's not start pointing fingers and crying "troll" at everyone using a modicum of rhetoric, even if this is a phenomenon I might have contributed to. Making oneself look completely ignorant is rhetoric?
By Derek Elkins at Fri, 2005-08-12 17:53 | login or register to post comments

As O said, I interpreted his


As O said, I interpreted his comment that "one might conclude..." as hyperbole. I wasn't commenting on his other posts.
By Marcin Tustin at Sat, 2005-08-13 08:50 | login or register to post comments

Wow, an improvement on Java boilerplate?


Factor has an HTTP server with a continuation-based web framework, a GUI toolkit-in-progress, various development tools, and more. This is a richer level of functionality than offered by most languages discussed on this web site, so ironically, one could even conclude that applicative languages are totally non-productive, and not even capable of implementing their own optimizing native compilers.

If you expect to be taken seriously, you might want to tone down the rhetoric. I don't know what you're thinking of here, but this sounds completely false. Perhaps by "most languages discussed on this web site", you're thinking only of announcements of brand-new or toy languages?

Re: "writing a Turing machine program"; after working with Factor's libraries for sequences and mathematics, going back to a language like Java with annoyingly hand-coded boilerplate loops seems more like a Turing machine.

What I take from this is that you find Java so restrictive that you're willing to put up with other serious limitations in concatenative languages to escape it. What about all the much more expressive applicative languages, though? Scheme, Haskell, ML? Should we take your comparison to Java as a tacit acknowledgement that concatenative languages can't really compete with more expressive applicative languages, except perhaps in certain specialized domains?
By Anton van Straaten at Wed, 2005-08-10 16:08 | login or register to post comments

What serious limitations?


I'd say Haskell and ML's lack of a live environment with in-image development tools is a serious limitation. I've tried Scheme and far prefer Common Lisp. I have never developed the same application in CL and Factor side by side, except for porting a few trivial things back and forth, so I cannot comment on the merits of CL versus concatenative languages, although I can say Factor's sequences library is cleaner and more generic than CL's.
By slava at Wed, 2005-08-10 16:18 | login or register to post comments

Repeat after me ...


I'd say Haskell and ML's lack of a live environment with in-image development tools is a serious limitation.

language implementation != language
language != language implementation
By dhopwood at Thu, 2005-08-11 02:54 | login or register to post comments

Toplevel
I was just going to say that I quite enjoy O'Caml's toplevel(s). :-)
By Paul Snively at Thu, 2005-08-11 03:16 | login or register to post comments

Slipping?
Paul, I'm disappointed. You forgot to mention Tuareg mode. ;oP Of course, GHC and SML/NJ also have nice toplevels. I wouldn't be surprised to find IDEs for them, although I've never looked.
By Anton van Straaten at Thu, 2005-08-11 03:49 | login or register to post comments

Type systems


language implementation != language

Yeah, but Haskell's and ML's type systems make it so that fewer things can be done at runtime, so development tools are more complicated and less useful. PLEEEASE don't get into a type system debate here, I just wanted to say that typing makes things more complicated.
By Daniel Ehrenberg at Thu, 2005-08-11 16:13 | login or register to post comments

hs-plugins certainly makes an


hs-plugins certainly makes an interesting tool in that context
By Philippa Cowderoy at Thu, 2005-08-11 16:41 | login or register to post comments

Not so much due to types


I'd say it's only secondarily due to their type systems. A more fundamental factor is their higher degree of purity compared to Lisp, Smalltalk and the likes, which makes "interactive update" a more alien concept.
By Andreas Rossberg at Sat, 2005-08-13 12:09 | login or register to post comments

Foreign?
I can't imagine writing any nontrivial program in a concatenative language; it seems like a tour de force comparable to writing a Turing machine program, or directly in the lambda calculus using de Bruijn indices.

Borrowing an example from another thread ...

class.method().attribute.method( obj.attribute, obj.method() ).getSomething();

...Is it just me, or does that bear a striking resemblance to concatenative code?
By Greg Buchholz at Wed, 2005-08-10 17:34 | login or register to post comments

names
...Is it just me, or does that bear a striking resemblance to concatenative code?

expressions are easy enough. Defining functions without having argument names or let bindings to name your terms gets you into drop swap dup rot shenanigans... That kind of stuff is for compilers, not programmers. Up with that, I will not put.
By James McCartney at Wed, 2005-08-10 18:54 | login or register to post comments

Re: names
So why do you troll in this thread?
By slava at Wed, 2005-08-10 19:05 | login or register to post comments

Stack scramblers
I like William Tanksley's stack scramblers idea for concatenative languages. They look like this:

swap = ab-ba
drop = a-
dup = a-aa

I can never remember which of rollup or rolldown does which of abc-cba or abc-bca. Stack scramblers make this perfectly clear. Factor uses stack scramblers nowadays also.
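The notation is concrete enough to execute. Here is a sketch in Python (`scrambler` is a hypothetical helper for illustration, not part of any Factor release) that turns a spec like abc-cba into a function on a list-used-as-stack:

```python
def scrambler(spec):
    """Build a stack shuffler from a spec like 'ab-ba' (swap) or 'a-aa' (dup).
    Letters left of '-' name the topmost items (rightmost letter = top of
    stack); the letters on the right say what to push back, in order."""
    before, after = spec.split('-')
    def shuffle(stack):
        n = len(before)
        named = dict(zip(before, stack[len(stack) - n:]))  # bind names to items
        return stack[:len(stack) - n] + [named[c] for c in after]
    return shuffle

swap = scrambler('ab-ba')
dup  = scrambler('a-aa')
drop = scrambler('a-')
```

With this, `scrambler('abc-cba')` is the abc-cba shuffle the comment mentions, and there is nothing left to memorize: the spec is the documentation.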

--Shae Erisson - ScannedInAvian.com


By shapr at Thu, 2005-08-11 11:37 | login or register to post comments

Umm ... no
Factor uses stack scramblers nowadays also.

It does? I'm pretty sure it still uses good old dup, drop, swap, etcetera. I could never remember which ones those stack shufflers were -- until I started actually programming. Then, it was crystal clear.
By Daniel Ehrenberg at Thu, 2005-08-11 16:21 | login or register to post comments

The original Java Factor impl


The original Java Factor implementation had these stack shuffle words. The current C version no longer has them IIRC.

By doublec at Thu, 2005-08-11 22:14 | login or register to post comments

Re: Foreign?
Compare some typical OOP code in four languages:

Common Lisp: (baz (bar (make-instance 'foo)))
Java: new Foo().bar.baz();
Smalltalk: Foo new bar baz.
Factor: <foo> bar baz

Factor's concatenative code looks very similar to how Smalltalk unary messages are composed. Note that CL code has to be read right to left, which ironically is a common complaint against *postfix* languages.
By slava at Wed, 2005-08-10 19:02 | login or register to post comments

"kont"
means "ass" in dutch.
By Wouter at Wed, 2005-08-10 21:05 | login or register to post comments

As in "arse" or "donkey"?
As in "arse" or "donkey"?
By Marcin Tustin at Thu, 2005-08-11 14:15 | login or register to post comments

Arse
The dutch word "kont" is etymologically related to the english word "cunt". The original meaning of both words is "hole". ( Source: The Alternative Dutch Dictionary )
By Sjoerd Visscher at Thu, 2005-08-11 15:30 | login or register to post comments

TMI: Too Much Information


Though cuss words might be considered a valid subject of study in the field of linguistics, their use in programming languages is rather uncharted. :-)
By Chris Rathman at Thu, 2005-08-11 15:44 | login or register to post comments

Bueno ...
we've had Brainfuck for some number of years now. Not to mention Linda, named after a porn star. I'm sure there are a few other (as yet undiscovered) dirty jokes and double entendres out there....
By Scott Johnson at Thu, 2005-08-11 21:45 | login or register to post comments

koders.com could be a start


Crap is the most popular cuss word, of the words I can think of right now. More than half of the occurrences are in C code, but I don't know how much of the code in the database is in C.
By Sjoerd Visscher at Fri, 2005-08-12 08:39 | login or register to post comments

Time to get started then...


Perhaps we should get started on the research then... Just think of all the expressivity we could gain. Although as George Carlin has pointed out in one of his routines, at least one of the more popular words is already seriously overloaded ...

-30 By William D. Neumann at Fri, 2005-08-12 14:19 | login or register to post comments

Stack-based is point-free?
If you represent a stack as nested pairs, you can treat Forth words as unary functions of the form stack -> new_stack. Integer constants would have the type forall a. a -> (a, Int), and the arithmetic functions like +, -, etc., would be typed forall a. ((a, Int), Int) -> (a, Int). With a type system, you can appeal to polymorphism to guarantee that functions won't interfere with elements beyond a certain depth in the stack. It's pretty easy to model this in a functional language (note: this is not a claim of superiority/inferiority). Here's a GHCi session:

Prelude> let k = flip (,)
Prelude> let lift2 f ((s,a),b) = (s, f a b)
Prelude> let plus = lift2 (+)
Prelude> :t plus
plus :: ((a, Integer), Integer) -> (a, Integer)
Prelude> :t plus . plus
plus . plus :: (((a, Integer), Integer), Integer) -> (a, Integer)
Prelude> :t plus . k 4
plus . k 4 :: (a, Integer) -> (a, Integer)
Prelude> :t plus . k 4 . k 2
plus . k 4 . k 2 :: a -> (a, Integer)
Prelude> snd . plus . k 4 . k 2 $ ()
6

What's interesting is that arrows in Haskell are defined in a point-free style, along with a special syntax involving local names which is translated into the point-free style. Thus,

proc x -> do
  y <- f -< x
  z <- g -< x
  returnA -< y + z

becomes

arr (\x -> (x, x)) >>> first f >>> arr (\(y, x) -> (x, y)) >>> first g >>> arr (\(z, y) -> y + z)

which consists primarily of composition and "stack" manipulation. (Arrows pass multiple values around with nested tuples, but they don't prefer one side or the other, so they can be more tree-like than stack-like.)
By Dave Menendez at Thu, 2005-08-11 23:55 | login or register to post comments

Joy in Haskell
Things get a little interesting when you get to the recursion combinators, but with a dash of type-system wizardry, you can write most any Joy combinator in Haskell.
By Greg Buchholz at Fri, 2005-08-12 01:24 | login or register to post comments

The lambda calculus is the "i

The lambda calculus is the "internal language" of a cartesian closed category. This means that every type in the lambda calculus corresponds to an object of a CCC, and that every term corresponds to a morphism. In turn, this means that you can convert any program in the lambda calculus into an equivalent program consisting entirely of a composition of combinators (and you can do it in O(1) in either direction). If you like, you can think of the types of the set of free variables of a term as the type of the "input stack", and the return type as the type of the "output stack", which means that there is no fundamental expressivity gain to be had in shifting between a combinatory language and the lambda calculus. I think that being able to name variables explicitly is a genuine advantage, but obviously, mileage varies.
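The lambda-to-combinators direction of that correspondence can be sketched concretely with classic bracket abstraction into S, K, and I (a Python sketch with a made-up term representation, not the categorical construction itself: a term is a string naming a variable or combinator, or a 2-tuple for application; avoid naming variables 'S', 'K', or 'I'):

```python
def occurs(x, t):
    """Does variable x occur in term t?"""
    if isinstance(t, str):
        return t == x
    return occurs(x, t[0]) or occurs(x, t[1])

def abstract(x, t):
    """Compute [x]t: an x-free term such that ([x]t) u behaves as t[x := u]."""
    if t == x:
        return 'I'                                  # [x]x = I
    if not occurs(x, t):
        return ('K', t)                             # [x]t = K t   (x not free)
    return (('S', abstract(x, t[0])), abstract(x, t[1]))  # [x](f a) = S [x]f [x]a

def apply(f, a):
    """Apply f to a, eagerly reducing S/K/I redexes."""
    if f == 'I':                                    # I a = a
        return a
    if isinstance(f, tuple) and f[0] == 'K':        # K t a = t
        return f[1]
    if isinstance(f, tuple) and isinstance(f[0], tuple) and f[0][0] == 'S':
        g, h = f[0][1], f[1]                        # S g h a = (g a) (h a)
        return apply(apply(g, a), apply(h, a))
    return (f, a)                                   # no redex: build application
```

For instance, `abstract('x', abstract('y', 'x'))` is a variable-free term that, applied to two arguments, returns the first -- the translation of the lambda term λx.λy.x. (This naive translation can blow up the term size; the O(1) claim above needs a cleverer encoding.)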
By neelk at Fri, 2005-08-12 19:32 | login or register to post comments
