Lots of factoring
The trick is small, easily testable words. Any time you do lots of stack manipulations is a sign that you need to refactor. 'Thinking Forth' has lots of good advice in this area: http://thinking-forth.sourceforge.net Manfred von Thun's Joy pages also cover this in some of the articles. His one on the quadratic formula for example: http://www.latrobe.edu.au/philosophy/phimvt/joy/jp-quadratic.html By doublec at Tue, 2005-08-09 23:28 | login or register to post comments
The structure of the Scheme code directly reflects the structure of the formula. Indeed, there are only trivial notational differences between the formula and the code, as we can see by writing the formula with the same auxiliary variables and in prefix notation:
quadratic-formula(a, b, c) def= (root1, root2)
  where minusb  = - 0 b
        radical = sqrt (- (* b b) (* 4 (* a c)))
        divisor = * 2 a
        root1   = / (+ minusb radical) divisor
        root2   = / (- minusb radical) divisor
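For concreteness, here is a minimal transliteration of that definition into Python (a sketch assuming real roots; the function name and layout are mine, chosen to mirror the auxiliary variables above):

```python
import math

def quadratic_formula(a, b, c):
    """Both roots of a*x^2 + b*x + c = 0, assuming a real solution exists."""
    minusb  = 0 - b
    radical = math.sqrt(b * b - 4 * a * c)
    divisor = 2 * a
    root1   = (minusb + radical) / divisor
    root2   = (minusb - radical) / divisor
    return root1, root2

# x^2 - 4 = 0  =>  roots 2.0 and -2.0
print(quadratic_formula(1, 0, -4))
```

Each named intermediate corresponds one-for-one to a term of the formula, which is the property being claimed for the Scheme version.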
Compare that with the 2nd version in Joy (the one suggested to be "the best of the three in this note"):
quadratic-2 ==                                  # a b c => [root1 root2]
    [ [ [ pop pop 2 * ]                         # divisor
        [ pop 0 swap - ]                        # minusb
        [ swap dup * rollup * 4 * - sqrt ] ]    # radical
      [i] map ]
    ternary i
    [ [ [ + swap / ]                            # root1
        [ - swap / ] ]                          # root2
      [i] map ]
    ternary .
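To make the bookkeeping concrete, here is a small Python sketch (not a faithful Joy interpreter; the word implementations are my own assumptions) that runs just the radical fragment `swap dup * rollup * 4 * - sqrt` on a stack holding a, b, c:

```python
import math

# Minimal stack machine: just enough primitives for the 'radical' fragment.
stack = []
def push(x):  stack.append(x)
def pop():    return stack.pop()
def swap():   stack[-1], stack[-2] = stack[-2], stack[-1]
def dup():    stack.append(stack[-1])
def rollup():  # Joy's rollup: X Y Z -> Z X Y
    stack[-3], stack[-2], stack[-1] = stack[-1], stack[-3], stack[-2]
def mul():    push(pop() * pop())
def sub():    b = pop(); a = pop(); push(a - b)
def sqrt():   push(math.sqrt(pop()))

# radical: swap dup * rollup * 4 * - sqrt, starting from stack a b c
a, b, c = 1, 0, -4
push(a); push(b); push(c)
swap(); dup(); mul(); rollup(); mul(); push(4); mul(); sub(); sqrt()
print(stack)  # [4.0], i.e. sqrt(b*b - 4*a*c)
```

Even for this one fragment, checking correctness means replaying the stack contents in your head at every step, which is the point under discussion.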
Huh?
I don't think it's just my lack of familiarity with Joy that is at issue here: the code structure is simply quite different from the formula. It's impossible to understand
what this code is doing without thinking about how it uses a stack that is not present in the original problem (and that could map to the problem in many different ways). The difference that stands out most is that none of the variables are explicitly named -- the names on the right are only comments. These comments are not equivalent to using names in the code -- what if they were wrong? The variables have to be referred to by keeping track of their position on the stack (which is what I was getting at in the reference to de Bruijn indices above). OK, it is possible to mitigate this problem by minimizing the size of the stack at any given time, but the risk of making an error is going to be larger for any given number of variables than it is in an equivalent program that uses names. Some proportion of those errors won't be found by testing. For a large program, that is likely to add up to an unacceptable risk. [Edit: corrected "0 - b" to "- 0 b"] By dhopwood at Wed, 2005-08-10 03:51 | login or register to post comments
proneness of the stack implicitly introducing variables without names) are not specific to that case. It is very easy to write such a language that compiles down to concatenative code which then runs through Factor's native compiler. Right -- I didn't say that concatenative languages aren't suitable as intermediate languages (although I wonder why you would use them in preference to a register-based IL, if you're targeting a register-based machine). I have found that the time going from noting a bug, to resolving it, to fixing it is very short in Factor. And in applicative languages as well, usually, if you notice the bug. ... jEdit is written in Java. Like all Java software, it is a pain to maintain, ... Did I inadvertently defend Java? By dhopwood at Wed, 2005-08-10 14:07 | login or register to post comments
Monads
Just embed a domain-specific language for infix arithmetic and be done with it. And then you're not writing the arithmetic in a concatenative language. Let's try rewriting that a little bit differently... Just use monads (like an embedded domain-specific language) for sequencing side effects and be done with it. And then you're not writing functions in a pure language.
I agree
I agree. The corollary to your point about names is the presence of all the names in the Joy code which have nothing to do with the problem domain, like pop, swap, dup, and rollup. That's a classic sign of a low-level language (think of assembler). Also, in concatenative languages, the programmer depends heavily on abstractions which the language doesn't enforce or even know about: every time you use pop, you're making an assumption about what's on the stack, but that assumption could be violated by unrelated code, as though an expression such as:
(let ((x 33)) (foo) x)
...could return a value other than 33, if foo had a bug. This kind of non-local side effect for bugs is reminiscent of something like a pointer error. In other contexts, this sort of behavior is called "unsafe": that an abstraction is not protected from unauthorized outside interference. In the concatenative context, you can play language lawyer and claim that since the language doesn't have the notion of a local binding, nothing is being violated. However, that would only be a good argument if programmers didn't need to rely heavily on faking local bindings in concatenative languages. By Anton van Straaten at Wed, 2005-08-10 15:48 | login or register to post comments
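The distinction can be sketched in Python (all names here are hypothetical): a genuine local binding survives a buggy callee, while a "binding" faked on a shared data stack does not:

```python
# Hypothetical sketch: a real local binding vs. one faked on a shared stack.
stack = []

def buggy_foo():
    # foo is supposed to leave the stack alone, but has a bug:
    if stack:
        stack.pop()

def with_local():
    x = 33         # a real local binding...
    buggy_foo()    # ...cannot be touched, whatever foo does to the stack
    return x

def with_stack():
    stack.append(33)  # the caller "binds" 33 by pushing it
    buggy_foo()       # bug: silently consumes the caller's value
    return stack.pop() if stack else None

print(with_local())   # 33, guaranteed by the language
print(with_stack())   # None: the faked binding was violated
```

The language enforces the first abstraction; the second exists only in the programmer's head.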
Incorrect
The corollary to your point about names is the presence of all the names in the Joy code which have nothing to do with the problem domain, like pop, swap, dup, and rollup. What about names like let, labels, multiple-value-bind and setf in Common Lisp programs? They also have nothing to do with the problem domain. Is CL low level, "like assembler"? This kind of non-local side effect for bugs is reminiscent of something like a pointer error. In other contexts, this sort of behavior is called "unsafe": that an abstraction is not protected from unauthorized outside interference. Factor has a stack effect inferencer that could be used to check that foo does in fact return the right values. Suppose you write a function that takes a mutable object as input, and it violates its contract by mutating the object into an incorrect state; how is this any different from a function with an invalid stack effect? Are you going to argue in favor of purely functional languages, which are solely the domain of academics?
Second, while I wouldn't use purely functional languages in every context, I do use languages that encourage and support the controlling of side effects, and I do write code which avoids side effects for the most part. This has numerous provable benefits; if you want to start arguing against them, I'd point you here. Lest I be taken as entirely negative about concatenative languages, I should mention that I've written systems in Forth, experimented with Joy, and written a toy Joy interpreter. I think they're very interesting languages, and can be fun and useful in certain kinds of applications, but I don't think that those languages can support the kind of claims you seem to want to make. If Factor can support such claims, that would be interesting, but you're not going to convince many people by blustering about it and making over-the-top criticisms of other, proven languages. By Anton van Straaten at Wed, 2005-08-10 17:02 | login or register to post comments
I'm not sure what logic is used, because I don't follow it. Factor has variables and nested namespaces and scope. Did you even bother to look at it? Well, we need to be clear about whether we're discussing actual concatenative languages in general, or Factor in particular. The discussion started out more generally. I've looked briefly at Factor, some months ago, but I didn't find that it sufficiently addressed the sort of concerns I've mentioned. Since it's still under development, it'll be interesting to see how it turns out over time. Regarding my point about global names, I didn't state that clearly. What I was referring to is that these languages demonstrate that they recognize that supporting safe bindings between names and values is important, but within individual procedures, they don't provide such a feature, i.e. temporary name/value bindings that can be relied on for the life of a procedure. Does Factor provide such a feature? I could easily add some tools to do this automatically for every new word definition. That sounds as though it would be useful. One question is that as you add such features, might you reach a point where it becomes apparent that simply addressing the issue at the language level is a good idea? Lest I be taken as entirely negative about concatenative languages, I should mention that I've written systems in Forth, experimented with Joy, and written a toy Joy interpreter. So your only experience with concatenative languages is having written a toy Joy interpreter? Is it your intent to get into a definitional argument now, and say that Forth is not concatenative? If so, then I'd suggest we abandon the poorly-defined term "concatenative" and switch to something like "direct stack manipulation languages", which is more meaningful in any case. 
I claim that the combination of my experiences with Forth and Joy, combined with knowledge of programming language semantics and experiences with a variety of other languages, makes it possible for me to make comparisons and draw conclusions related to certain properties of these languages. Show where I have been "blustering" and making over-the-top criticisms. I've already responded to specific cases, but some of the quotes in question are as follows: "This is a richer level of functionality than offered by most languages discussed on this web site"; "one could even conclude that applicative languages are totally non-productive, and not even capable of implementing their own optimizing native compilers"; "Are you going to argue in favor of purely-functional languages, which are solely the domain of academics?" By Anton van Straaten at Wed, 2005-08-10 19:55 | login or register to post comments
You see, the very first post to this thread was your disparaging claim that concatenative languages are unsuitable for developing applications. Instead of discussing the Kont language, you decided to troll instead. You don't even bother refuting my points, and instead draw blanket conclusions and generalizations based on your limited experience with Joy and Forth. I know you wouldn't be very happy if I went and jumped into threads where discussion of ML and Haskell is taking place, claiming that static typing is overly restrictive. So why do you do the equivalent thing here? By slava at Wed, 2005-08-10 22:29 | login or register to post comments
Substance, please
You see, the very first post to this thread was your disparaging claim that concatenative languages are unsuitable for developing applications. I responded to another post which made a point which I agreed with. I said concatenative languages show a particular sign of being a low-level language. I believe that's true, and I've defended that in exchanges with others. I'm not even sure if you disagree, since you haven't addressed my points, but I assume from your general reaction that you disagree. Instead of discussing the Kont language, you decided to troll instead. I'm not trolling. However, I noticed that instead of discussing the Kont language yourself, you decided to talk about Factor. I'm having difficulty discerning the rules you think I should be following here. You don't even bother refuting my points Which points are those? Afaict, I've responded to every one of your substantive points. I suggest you reread the thread more carefully. and instead draw blanket conclusions and generalizations based on your limited experience with Joy and Forth. You're free to dispute my points, with rational arguments, but lashing out at academia, applicative languages, "most languages discussed on this web site", and research languages doesn't qualify. It's particularly ironic given that your own language would, by some definitions, qualify as a research language. I know you wouldn't be very happy if I went and jumped into threads where discussion of ML and Haskell is taking place, claiming that static typing is overly restrictive. Heh. Your assumption is incorrect, assuming that what you have to say has more basis in rational argument than your comments on those languages so far. So why do you do the equivalent thing here?
This is a forum for discussion of programming languages. I'm interested in the semantics of programming languages, and among other things, how those semantics relate to the user experience of programming. My original post in this thread wasn't intended as a critique of Kont specifically (and besides, Brian Mastenbrook has already heard my opinion on this issue). Rather, it was an attempt to characterize one of the semantic issues that distinguishes concatenative languages from languages with namebased procedure-local abstractions (a.k.a. local variables). I believe that this issue is one of the factors which underlies a common negative user reaction to these languages. As someone interested in concatenative languages, I would think you might have some interesting thoughts on the matter, but for whatever reason, you haven't chosen to offer those yet. Other people have, though, which is the reason I'm here. By Anton van Straaten at Thu, 2005-08-11 00:20 | login or register to post comments
I said concatenative languages show a particular sign of being a low-level language. I don't think concatenative languages are more low-level. They just handle things differently, in a way that other languages happen to use as an intermediate form. This does not mean at all that it is worse, just different. If a C compiler uses an intermediate form that is single assignment only, does that mean that C is higher-level than any kind of single assignment form? Of course not.
Rather, it was an attempt to characterize one of the semantic issues that distinguishes concatenative languages from languages with name-based procedure-local abstractions (a.k.a. local variables). I believe that this issue is one of the factors which underlies a common negative user reaction to these languages. How are local variables abstractions? Could you explain that? How are they more abstract and flexible than stack shuffling operators? By Daniel Ehrenberg at Thu, 2005-08-11 17:33 | login or register to post comments
course, this can lead to changing preferences, programming style, etc.; mine have certainly changed a great deal over the years. But more importantly, a good understanding of PL features and how they affect programs and programming can make you a better programmer in any language. It can also help in assessing the appropriateness of a language in a given situation, and help with cutting through hype about particular programming languages. I don't think concatenative languages are more low-level. They just handle things differently, in a way that other languages happen to use as an intermediate form. This does not mean at all that it is worse, just different. That particular issue is different in a way that's more low-level. This has been covered in a few posts, and connected to Alan Perlis' observation that low-level implies that "programs require attention to the irrelevant". The irrelevant thing in this case is direct stack manipulation, which you've acknowledged: "there are only a few cases where the stack really matters". You've suggested elsewhere that the same is true of lexical names, but I've addressed this in my response to your last point below. If a C compiler uses an intermediate form that is single assignment only, does that mean that C is higher-level than any kind of single assignment form? Of course not. C code is certainly usually higher-level than any internal single-assignment form it's likely to be compiled to. The way you can tell that is to look at the extra operations that are being performed in the single-assignment code, to deal with the constraints of single assignment; the fact that those extra operations aren't necessary in the C source indicates that they are irrelevant to the expression of the original problem, and indicates that the single-assignment form is lower level. If your concern is with generalization, i.e. drawing conclusions about "...any kind of single assignment form", that's not what I'm doing.
I'm looking at specific languages that regularly embed direct stack manipulation operators in their code, and concluding that this is a lower-level way of dealing with abstraction over values than using names, in most cases. How are local variables abstractions? Could you explain that? How are they more abstract and flexible than stack shuffling operators? A local variable is an abstraction in the same way that any name is an abstraction. The name refers to some other thing, and we can use the name and manipulate whatever it refers to without knowing which specific thing it refers to. That's abstraction. When you push a value on a stack and later apply some operation to that value, that's also a kind of abstraction. However, not all abstractions are equal. Direct stack manipulation alone doesn't scale very well, as every concatenative programmer already knows. As procedures get bigger, keeping track of values via stack manipulation gets more complex. Early on in the thread, we were reminded that "any time you do lots of stack manipulations is a sign that you need to refactor." And what do you do when you refactor? You create names, which (implemented properly) are more scalable, safer abstractions than storing values on a stack.
So one question is, if names are so useful and important when code scales up past a single procedure, mightn't they be useful locally, within a procedure? And of course, many languages demonstrate that they are. One of the critical features of names is that they can actually serve two purposes: 1. A name acts as a way to refer to a value, i.e. a name is a "reference", or can be said to "denote" some value. 2. A name can provide information to programmers about the value, or its role in the computation, i.e. a name can also have a "sense" or "connotation". This is why we try to pick informative names, and why the expression you gave elsewhere in this thread was compiling dataflow optimize linearize simplify generate, instead of a b c d e f. Defined appropriately, both expressions could have the exact same computational effect, but one is much more meaningful to us than the other. When we omit names from our code, we're omitting one of the only things which connects the code to the problem it's solving, and that's something that should be done thoughtfully. If a language commonly forces you to omit names, that's something worth taking a close look at. Using a stack to abstract values hardly ever serves the second purpose above. However, naming every value isn't necessary or appropriate, either. In the expression quoted above, it's not necessary to name the intermediate values, and most languages support this sort of omission of names in one form or another. But omitting names is one thing; replacing them with operators that manipulate an abstraction (the stack) that's not otherwise directly relevant to the computation is something else entirely. Using stack shuffling operators is a low-level detail no matter how you look at it, and wherever this replaces the use of names which might be informative, you have a double loss of the program's ability to express its algorithm to the human reader.
In general, it's difficult to argue that expressing most problems in terms of an essentially LIFO stack is an approach that makes sense in and of itself. The most convincing arguments, I think, have to do with the benefits of that approach to language implementation: by pushing some of the work of arranging the abstraction mechanism onto the user, the job of the language implementation is simplified, which can have various benefits. I'm sure there's plenty in what I've written above for a concatenative fan to object to. Keep in mind I've programmed in these languages, and I'm familiar with the arguments. I could make them myself: you don't need local names because factoring keeps procedures small enough to be tractable; non-local names used in conjunction with the stack locally works out very well most of the time; languages which force you to name e.g. function return values just to be able to rearrange them to pass to another function are similarly obscuring the solution to the problem; etc. etc. My main interest is in characterizing the differences in a more specific way than just saying something along
the lines of "I don't like this", or "I think this is a fun way to program". There are obviously differences, but if we can't describe their effect clearly other than just listing the technical features, that implies a gap in our understanding. By Anton van Straaten at Fri, 2005-08-12 03:55 | login or register to post comments
2Variable
In #comment-8723, Anton wrote: So one question is, if names are so useful and important when code scales up past a single procedure, mightn't they be useful locally, within a procedure? And of course, many languages demonstrate that they are. I'm not sure if the following quote supports your contention. I ran into it while skimming through Forth Sorts Fruit Down Under by Paul Frenger in ACM SIGPLAN Notices, vol. 40, no. 8, Aug. 2005 (not yet online): The CF [ColdForth] manual has other interesting chapters to describe its features: numbers/parameters (CF supports local variables via stack frames), the memory manager, I/O handling (mostly for accessing files), strings, object-oriented extensions, TCP/IP capability and the multi-tasker, for example. By el-vadimo at Wed, 2005-09-14 12:11 | login or register to post comments
2drop
Even concatenative languages acknowledge that names are important, since they all support global names. What logic is used to justify the idea that you can only have global names? It boils down to something like "because it's easy to implement a language that way, and possible to get away with it and work around it". That's not a good argument from the language user's perspective. It might explain why you, as an implementor of such a language, are enthusiastic about them, though. Aside from the fact that several concatenative languages do actually support local variables, the reason they're frowned upon is because in most cases they are an impediment to proper factoring. In a great deal of applicative code, the variables are simply assigned useless placeholder names. This should indicate something to you: most of the time, those names are unnecessary. However, it's true that side effects are a major cause of problems, which is why the most powerful languages provide ways to control side effects. Believe it or not, not everyone considers Haskell and friends to be the pinnacle of language design or expressive power. It's worrying to me that this claim of "most powerful" is accepted here essentially on faith. How do you define power? The APL dialects and derivatives, for example, are all much more expressive than any purely functional language. What do you think of tacit programming in J?
Simple refutation: Erlang is used to run telecom switches, and it's a purely functional language. Erlang is not purely functional (ets, dets, mnesia, process dictionary, the list goes on). If Factor can support such claims, that would be interesting, but you're not going to convince many people by blustering about it and making over-the-top criticisms of other, proven languages. The original poster likened concatenative programming to "writing a Turing machine program, or directly in the lambda calculus using de Bruijn indices." Perhaps you should reconsider whose claims are 'over the top'. By Mackenzie Straight at Wed, 2005-08-10 23:38 | login or register to post comments
Stacks of responses
Aside from the fact that several concatenative languages do actually support local variables Do those local variables protect against the issue I've raised, i.e. are they safe from being changed by unrelated stack operations? the reason they're frowned upon is because in most cases they are an impediment to proper factoring. Is that an issue specific to the concatenative languages? Otherwise, I find it hard to see how names are an impediment, in languages with traditional local variables. Can you give an example? In a great deal of applicative code, the variables are simply assigned useless placeholder names. This should indicate something to you: most of the time, those names are unnecessary. I don't see how using "push" and "pop" to abstract values helps address that, in most cases. There are cases where a stack is certainly the right abstraction to use, of course. Believe it or not, not everyone considers Haskell and friends to be the pinnacle of language design or expressive power. I didn't say that. I said that "it's true that side effects are a major cause of problems, which is why the most powerful languages provide ways to control side effects." I would include many imperative languages in that list, including e.g. C++, Java, and Ada: languages which provide qualifiers like "const" that allow the programmer to guard in a meaningful way against unwanted side effects, not just on individual variables but also over larger scopes, such as declaring that a method cannot side-effect the object it's associated with. Controlling of side effects is a major theme in very demanding systems, not just at the language level but in the design of systems. Functional languages just implement this at the language level on a more comprehensive basis. It's worrying to me that this claim of "most powerful" is accepted here essentially on faith. How do you define power? Hopefully my response above gives a better idea of what I was getting at.
Obviously, "power" is one of those things, like "expressiveness", which is difficult to formalize satisfactorily, but it could perhaps be characterized as the ability to handle very demanding tasks, preferably demonstrated in practice. The APL dialects and derivatives, for example, are all much more expressive than any purely functional language. What do you think of tacit programming in J?
I consider tacit programming -- which has strong similarities to point-free style -- to be higher-level than direct stack manipulation, for the reasons stated elsewhere in this thread. However, all of these things have pros and cons. The cons tend to have to do with being more difficult to reason about, especially as expressions scale up. Ideally, a language should be able to support a variety of styles, so that the appropriate one can be picked in a given situation. Of all of these, I don't see direct stack manipulation as one that's likely to be the best choice in most situations. On the subject of scaling up expressions, the traditional concatenative answer to that is that you should factor larger expressions into multiple words. However, that means you're relying on names for abstraction again, just on a scale greater than the local procedure. How does this reconcile with the idea that using local names is an impediment to refactoring? Erlang is not purely functional (ets, dets, mnesia, process dictionary, the list goes on). Define it however you like: very functional? Do you consider Haskell purely functional, even with the I/O monad and the things it can talk to? My real point was that a strong emphasis on avoidance of side effects is not something that's solely the domain of academics. I consider the definition of "purely" here to be a quibble; e.g. anything that interacts with a traditional database is ultimately impure, when considered as a whole system. The original poster likened concatenative programming to "writing a Turing machine program, or directly in the lambda calculus using de Bruijn indices." Perhaps you should reconsider whose claims are 'over the top'. That's a fair point, although if you focus purely on how local value abstraction works, I can see his point. Having the option to use de Bruijn indices might be an improvement over having no alternative to direct stack manipulation, and I don't think it's over the top to say that.
Also, the original poster did make some other real points. By Anton van Straaten at Thu, 2005-08-11 01:18 | login or register to post comments
What's in a name?
Aside from the fact that several concatenative languages do actually support local variables, the reason they're frowned upon is because in most cases they are an impediment to proper factoring. In a great deal of applicative code, the variables are simply assigned useless placeholder names. This should indicate something to you: most of the time, those names are unnecessary. Programmers, and typical programming styles in various languages, differ considerably in how many named intermediate variables they use (personally I find the number of intermediates in the above Scheme code a bit excessive). But it's generally agreed, I think, that introducing variables with useless names is bad style. In functional-applicative languages, it's almost always possible to avoid this. The need for temporaries with useless names, e.g. loop indices, in imperative-applicative languages (some more than others) is indeed a valid criticism of them. By dhopwood at Thu, 2005-08-11 01:26 | login or register to post comments
The need for temporaries with useless names, e.g. loop indices, in imperative-applicative languages (some more than others) is indeed a valid criticism of them. I disagree that temporaries have "useless names". Loop indices may not be referenced other than as a control structure, but part of the essence of reasoning within an imperative paradigm is knowing the current state. Not being able to name part of that state is what I would consider to be a weakness. The only useless names I can think of are for function arguments that aren't used. And in better languages, you don't actually have to name those. ;> By David B. Held at Thu, 2005-08-11 03:52 | login or register to post comments
Types?
I agree. The corollary to your point about names is the presence of all the names in the Joy code which have nothing to do with the problem domain, like pop, swap, dup, and rollup. That's a classic sign of a low-level language (think of assembler). Are the stack manipulation functions that you mention (pop, swap, dup, etc.) a sign of a low-level language or a linear language (like Linear Lisp)? Or are all linear languages low-level by definition? Also, in concatenative languages, the programmer depends heavily on abstractions which the language doesn't enforce or even know about: every time you use pop, you're making an assumption about what's on the stack, but that assumption could be violated by unrelated code. That sounds like a plea for static typing. ...every time you use pop, you're making an assumption about what's on the stack, but that assumption could be violated by unrelated code. Is it really that much different from the applicative language example? If your function takes three integers, how do you know you're calling it with the arguments in the correct order? Especially in the presence of macros?
(apply quadratic-formula '(1 0 -4))
(apply quadratic-formula (reverse '(1 0 -4)))
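The same observation holds outside Lisp; a Python sketch of the earlier formula (a transliteration of mine, not code from the thread) shows that both calls are equally well-formed, so nothing flags the reversed argument order:

```python
import math

def quadratic_formula(a, b, c):
    """Both roots of a*x^2 + b*x + c = 0, assuming real roots."""
    radical = math.sqrt(b * b - 4 * a * c)
    return ((-b + radical) / (2 * a), (-b - radical) / (2 * a))

args = (1, 0, -4)
print(quadratic_formula(*args))             # (2.0, -2.0)
print(quadratic_formula(*reversed(args)))   # (-0.5, 0.5): silently wrong roots
```

Positional-argument mistakes are just as silent here as a stack-order mistake would be; only the blast radius differs.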
More Linear
For a better reference to linear languages (which also mentions the quadratic formula!) see Linear Logic and Permutation Stacks--The Forth Shall Be First By Greg Buchholz at Wed, 2005-08-10 17:44 | login or register to post comments
Differences
Are the stack manipulation functions that you mention (pop, swap, dup, etc.) a sign of a low-level language or a linear language (like Linear Lisp)? Or are all linear languages low-level by definition? I would say that Linear Lisp is certainly lower-level than ordinary Lisp, yes. I'll admit I can't give a formal definition of "low-level" off the top of my head, but I'd be interested to hear (reasoned) arguments about how something like Linear Lisp or the concatenative languages are not lower-level. That sounds like a plea for static typing. Yes, it is. The question is what the costs of introducing it are. The costs of introducing a safe name abstraction are negative. The costs of introducing static typing are positive, and arguably not insignificant. In the static typing case, I find the costs high enough that I can easily justify using it in some cases, and avoiding it in others. In the case of relying primarily on direct interaction with a stack to store values, I can easily justify avoiding that in most cases. Is it really that much different from the applicative language example? If your function takes three integers, how do you know you're calling it with the arguments in the correct order? It's different at least in the sense that that's a local effect: the error is confined to the interface between the caller and the callee, and doesn't compromise unrelated abstractions elsewhere in the program. I'm not saying that any language with named bindings solves all such problems, I'm just saying that there's a spectrum of safety and levels of abstraction, and that a language with named bindings which can make some guarantees about those bindings is both safer and higher-level than a language that doesn't have those features. By Anton van Straaten at Wed, 2005-08-10 18:10 | login or register to post comments
Well I don't know if I can define "low-level" either, but off the cuff, I'd point to higher order functions and a clean algebra of programs as indicators of higher-levelness. Maybe the assembly language remark was aimed more at Forth and less at something like Joy? By Greg Buchholz at Wed, 2005-08-10 18:37 | login or register to post comments
"Low level"
I'll admit I can't give a formal definition of "low-level" off the top of my head There's always Alan Perlis's definition: A programming language is low level when its programs require attention to the irrelevant. Although I don't think it's really completely correct to label languages high or low level. At most you can say that writing in a given language generally results in high or low level code. Anyway, by that definition, concatenative code is low-level because it forces you to write your solution in terms of a stack, despite the fact that there's no stack in your problem specification. Likewise, monads result in low-level code, because you're forced to think about category theory just to print "hello" :-)
Analogy...
In the concatenative context, you can play language lawyer and claim that since the language doesn't have the notion of a local binding, nothing is being violated. However, that would only be a good argument if programmers didn't need to rely heavily on faking local bindings in concatenative languages. Sounds a lot like type errors in dynamically checked languages... But now I guess I'm the one trolling... ;) By Matt Hellige at Wed, 2005-08-10 17:31 | login or register to post comments
empty, but is out of sync with the program's expectations. That's when the programmer's implicit abstractions are violated. By Anton van Straaten at Wed, 2005-08-10 20:50 | login or register to post comments
Static analysis
The Factor language apparently includes a stack effect inference tool which performs such analysis. The documentation mentions combining unit testing with stack effect inferencing to check the stack effect of procedures (words). This seems like a lot of trouble to go to, to achieve a kind of safety that other languages get for free. I'm not saying that this issue alone makes concatenative languages unfit for human consumption. However, I think the negative reaction to the tractability of these languages that's been expressed by some posters in this thread is not an uncommon one, and I've attempted to identify a couple of the semantic issues which underlie this reaction. In this particular case, I'm saying that even if you get past the lack of procedure-local name/value bindings, the approach these languages take to local value abstraction is less safe and requires more care than languages with name-based local abstraction. By Anton van Straaten at Wed, 2005-08-10 22:35 | login or register to post comments
empty stacks
Likewise, executing pop on an empty stack produces
Empty stack is a special case that's not as problematic, because it can be detected and flagged when it happens. The cases that are problematic are when the stack is not empty, but is out of sync with the program's expectations. That's when the programmer's implicit abstractions are violated.
There are no problems at all about empty stacks. Factor solves the stack underflow problem by mprotecting (or something like that) the memory directly underneath the
stack, triggering a segfault if there's underflow. Should this happen, the segfault can be caught like any other exception, and at the toplevel of the repl it's caught automatically and printed out, also like any other exception. All this stuff is useless outside of research. An infinite loop has a very different outcome than stack underflow. By Daniel Ehrenberg at Thu, 2005-08-11 17:50 | login or register to post comments
Here is another example of non-local bugs: a badly-formed HTML post in a naive weblog system can break the whole page. Yes. No-one in their right mind would defend such a system. ;) By Anton van Straaten at Thu, 2005-08-11 17:17 | login or register to post comments
which seems clear enough, apart from the RPN. disclaimer - i can't remember the exact details of otuto v1, and v2 is stalled until i gain enough patience to return to the uml tool (i'm trying to implement it in c++ for various (vaguely financial) reasons). By andrew cooke at Wed, 2005-08-10 16:50 | login or register to post comments
concrete example
OK perhaps you can have hidden lexical scope pointers and use a cord. But this seems to create a pretty confusing language. For a simple example suppose we have a Joy-like language plus an operator 'let' that takes a value, a quoted symbol, and another quotation which is evaluated in an environment where that symbol is bound to the value.
2 [x] [x 1 +] let
this returns the concatenated function; however, each x is bound to a different value.
[x x +] i
returns 3 By James McCartney at Thu, 2005-08-11 19:13 | login or register to post comments
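To make the binding issue concrete, here is a small Python sketch (names and helpers are illustrative, not Joy): quotations are modeled as stack-transforming closures, and concatenating two quotations that each close over a different value for x yields code that reads like [x x +] even though the two x's denote different things.

```python
# Sketch: quotations as functions from stack to stack; 'let' binds the
# quotation's free name to a value by closure. All names are hypothetical.

def let(value, body):
    return lambda stack: body(stack, value)

push_x_a = let(1, lambda stack, x: stack + [x])  # "x" bound to 1
push_x_b = let(2, lambda stack, x: stack + [x])  # "x" bound to 2

def compose(f, g):
    return lambda stack: g(f(stack))

def plus(stack):
    return stack[:-2] + [stack[-2] + stack[-1]]

# Reads like [x x +], but the two x's are bound to different values:
program = compose(compose(push_x_a, push_x_b), plus)
print(program([]))   # [3]
```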
there's some syntactic sugar involving implicit quoting that means you can write that as
{ dup | x } => { x x }
but that's beside the point. all that the otuto definitions describe is how you rearrange the stack(s). you could implement them by cross-compiling to an intermediate representation without variable names (using swap, roll etc etc) (i believe - i've never bothered to prove it, but i would be amazed if it were not true). i'm not 100% convinced i understand what concatenative is, so i can't say otuto is concatenative, but i'm pretty sure that those variable names are just sugar. a long time back, when i mentioned this on the joy email list, someone added something similar to another language there. By andrew cooke at Fri, 2005-08-12 22:01 | login or register to post comments
otuto
As I understand it, if a language is concatenative then functions are sequences of operators where you can take any function and split it at any point and you have two valid functions, or you can join two functions and have a valid function. I'm not sure what it would mean to split or join functions in otuto, since they are rules with 4 separate sections, not simple sequences of operations. By James McCartney at Sat, 2005-08-13 16:32 | login or register to post comments
well, the rules are restricted - they always match a single value on the left hand stack. so the top value on that stack can be interpreted as the current instruction. if you then take the right hand stack as "values" then you can think of a rule as saying how to apply the instruction (top of the left stack) to the values. ok, now if you continue with that, a definition like:
{ foo | a } => { bar baz | a a }
is equivalent to defining foo as bar baz dup in a concatenative language. now if you accept, as i argued above, that the variables are just syntactic sugar, then that implies, i think, that there is a simple mapping from otuto to concatenative languages. (this breaks down with meta-programming, though, since you can change the assumptions made above). By andrew cooke at Sun, 2005-08-14 01:43 | login or register to post comments
concatenative is ..
A language in which the concatenation of programs denote the composition of functions has been called a concatenative language. I first heard this terminology from Billy Tanksley. -- Manfred von Thun otuto is concatenative. so is XY. Here's one version of quadratic in that language, which uses names to similar effect (that is, to map items from the stack to multiple locations in the body of the code):
; quadratic { [a b c] b -: a c * 4 * b 2 ^ -. .5 ^ [1 -1] * + a 2 * % } ;
regarding your proposal for cross-compilation, i think it would be sufficient to compile programs like quadratic, in which names are used to map items from the stack to multiple locations, to programs which consist of an initial reordering of items on the stack, followed by code which is name-free. Billy Tanksley has suggested a notation (supported in XY) in which an initial "shufflepattern" is followed by name-free code. For example, compile
{ [a b c] a b * a c * + }
to
abc--abac * [*] dip +
[edited to show an example of compilation from pattern-notation to shuffle-notation] By sa at Sat, 2005-08-13 23:37 | login or register to post comments
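The shufflepattern step in that compilation can be sketched in Python (the helper names are hypothetical, and this is not XY's actual notation): "abc--abac" names the top three stack items (a deepest) before the "--" and lists what to push back after it, so the code that follows the shuffle is entirely name-free.

```python
# Sketch of the "shufflepattern" idea; all names here are illustrative.

def shuffle(pattern, stack):
    inputs, outputs = pattern.split("--")
    env = dict(zip(inputs, stack[-len(inputs):]))   # deepest named item first
    return stack[:-len(inputs)] + [env[n] for n in outputs]

def binop(stack, f):
    *rest, x, y = stack
    return rest + [f(x, y)]

# Emulating  abc--abac * [*] dip +  with a=2 b=3 c=4:
s = shuffle("abc--abac", [2, 3, 4])          # [2, 3, 2, 4]
s = binop(s, lambda x, y: x * y)             # *        -> [2, 3, 8]
top = s[-1]                                  # [*] dip: multiply under the top
s = binop(s[:-1], lambda x, y: x * y) + [top]         # -> [6, 8]
s = binop(s, lambda x, y: x + y)             # +        -> [14] = a*b + a*c
print(s)   # [14]
```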
Squinting
I don't think squinting is necessary at all. In fact, lately I've been thinking that there might be a benefit for newbies by writing a monad tutorial using a language like Joy. Subroutines (quotations) in Joy are lists of functions and you interpret them by folding in a 'compose' and applying to a stack...
fold compose id [1 4 dup * +]
Replace 'compose' with 'bind' for your favorite monad, and I think you might be on your way to removing much of the mystery behind monads (or at least you've given them an opportunity to see monads that aren't tied up in Haskell syntax and type-classes). By Greg Buchholz at Mon, 2005-08-15 14:25 | login or register to post comments
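That folding view is easy to sketch in Python (illustrative names, with Python lambdas standing in for Joy words): a quotation is a list of stack-to-stack functions, and running it is a fold of function composition over the list.

```python
from functools import reduce

def compose(f, g):
    return lambda stack: g(f(stack))

def push(n):
    return lambda stack: stack + [n]

dup = lambda stack: stack + [stack[-1]]
mul = lambda stack: stack[:-2] + [stack[-2] * stack[-1]]
add = lambda stack: stack[:-2] + [stack[-2] + stack[-1]]

# The quotation [1 4 dup * +], interpreted as "fold compose id":
quotation = [push(1), push(4), dup, mul, add]
run = reduce(compose, quotation, lambda s: s)
print(run([]))   # [17]
```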
Looks applicative to me
I don't know whether otuto as a whole is applicative or a hybrid, but the above code is definitely applicative. (RPN vs prefix vs infix is a separate, and much less significant, issue.) By dhopwood at Thu, 2005-08-11 01:48 | login or register to post comments
[...] an HTTP server with a continuation-based web framework, a GUI toolkit-inprogress, various development tools, and more [...] is a richer level of functionality than offered by most languages discussed on this web site Even though this web site disproportionately discusses research languages and prototype language implementations, still I think that assertion is very likely false. And also beside the point, since the maturity of language implementations is not the topic of discussion. ... one could even conclude that applicative languages are totally non-productive, and not even capable of implementing their own optimizing native compilers. This is just trolling. By dhopwood at Wed, 2005-08-10 14:31 | login or register to post comments
Irrelevant
Even though this web site disproportionately discusses research languages and prototype language implementations, still I think that assertion is very likely false. And also beside the point, since the maturity of language implementations is not the topic of discussion. "Research language" is a euphemism for "useless language". I'm really only interested in practical concerns. I write code, and I like to write stable code quickly, and I find that concatenative languages let me do this just as easily as applicative ones, with the additional advantage that writing tools is easier. You can discuss semantics, static type systems, weakly final coalgebras and other academic concerns until you are blue in the face, but it will not change the fact that for me, thinking in terms of a stack is not a problem. By slava at Wed, 2005-08-10 16:08 | login or register to post comments
For me...
You can discuss semantics, static type systems, weakly final coalgebras and other academic concerns until you are blue in the face, but it will not change the fact that for me, thinking in terms of a stack is not a problem. Perhaps. However, the next person who has to deal with your code may indeed have a problem. I could truthfully say "for me, dealing with unrestricted pointers in C is not a problem", but the broader consequences of everyone holding that opinion are not pretty. By Anton van Straaten at Wed, 2005-08-10 16:13 | login or register to post comments
Even though this web site disproportionately discusses research languages and prototype language implementations, still I think that assertion is very likely false. And also beside the point, since the maturity of language implementations is not the topic of discussion. I, unlike Slava, am OK with research languages. They're good for research. As loath as Slava is to admit it, Factor isn't really suitable for most applications yet. Of course, it is intended for use, but Factor's libraries aren't mature enough, especially crucial ones like the GUI library. Factor has a pretty good FFI, but that's not because it's concatenative. We really shouldn't discuss this, though, you're right.
This is just trolling. [about applicative languages being nonproductive] Yeah By Daniel Ehrenberg at Wed, 2005-08-10 19:38 | login or register to post comments
manipulation of the stack, and in what situations. For example, Forth-like languages are often used in constrained embedded environments, in which case some of these concerns may be balanced by other advantages of these languages. BTW, Common Lisp and many Schemes have something like a "trace" macro that provides similar functionality to the word watch, although the implementation of that is more than 3 or 4 lines. They usually print the incoming arguments and the return value, i.e. both the entry and exit from a traced function. By Anton van Straaten at Wed, 2005-08-10 22:48 | login or register to post comments
Yes, I know that's a terrible argument, but I don't care. It's a reason I use Factor anyway. At this point, rather than feeling that concatenative languages are backwards, I feel like they're in the exact right order. They do reflect the order of evaluation, however important or unimportant that is.
For example, Forth-like languages are often used in constrained embedded environments, in which case some of these concerns may be balanced by other advantages of these languages. Well, Factor isn't really that into constrained embedded environments, it's more for general programming, which can be done better because of its abstraction capabilities, some of which rest on its being concatenative.
BTW, Common Lisp and many Schemes have something like a "trace" macro that provides similar functionality to the word watch, although the implementation of that is more than 3 or 4 lines. They usually print the incoming arguments and the return value, i.e. both the entry and exit from a traced function. Not only does it take more code to do such a simple task but you have to alter the
original code to do it. In Factor, at the repl, you can just enter one short line of code to watch a word (\ word-name watch). One thing about Factor that's great but unrelated to its concatenativity is that you can mutate stuff like word bodies, though you don't normally do this outside of debugging. There's probably an applicative language that does that. By Daniel Ehrenberg at Thu, 2005-08-11 02:24 | login or register to post comments
Shriram Krishnamurthi's talk The Swine Before Perl provides a nice example of how powerful abstraction capabilities can help make code understandable. It'd be interesting to see a Factor (or Kont?) version of that program, for comparison. Well, Factor isn't really that into constrained embedded environments, it's more for general programming, which can be done better because of its abstraction capabilities, some of which rest on its being concatenative. Since stacks are irrelevant to most problems, unless you can abstract their use away, the need to use one seems to me like a hit against a language's abstraction capabilities.
Not only does it take more code to do such a simple task but you have to alter the original code to do it. In Factor, at the repl, you can just enter one short line of code to watch a word (\ word-name watch). The Lisp/Scheme equivalent, at the repl, is just one short line of code, (trace procname). No alteration of the original code is required. By Anton van Straaten at Thu, 2005-08-11 04:04 | login or register to post comments
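The shape of such a trace facility is easy to sketch in Python terms (this is not Lisp's implementation of trace, just an illustration of the idea): wrap an existing function so its entry arguments and return value are printed, without editing the original definition.

```python
import functools

def trace(fn):
    # Wrap fn so entry and exit are reported; the original source is untouched.
    @functools.wraps(fn)
    def wrapper(*args):
        print(f"entering {fn.__name__}{args}")
        result = fn(*args)
        print(f"leaving {fn.__name__} -> {result}")
        return result
    return wrapper

def square(x):
    return x * x

square = trace(square)   # the repl-level step, analogous to (trace square)
value = square(3)        # prints the entry and exit lines, returns 9
```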
Concatenate
If I wanted to write an expression like the above, I could do it quite easily in many higher-order functional languages, using a fold over a list of procedures, which could easily be made to look something like, e.g. in Scheme:
(concatenate input compiling dataflow optimize linearize simplify generate)
How is this any different from embedding an infix language in Factor? The Lisp/Scheme equivalent, at the repl, is just one short line of code, (trace procname). No alteration of the original code is required. But the implementation of Common Lisp's trace is more complicated than the Factor equivalent. By slava at Thu, 2005-08-11 04:45 | login or register to post comments
(concatenate input compiling dataflow optimize linearize simplify generate) I don't think that code would work since the stack effects of many of those words are not ( a -- b ). The only way to make it work would be to have each of those functions take a stack and return a stack, but that's just like implementing a lightweight concatenative language on top of Scheme. Incidentally, I believe someone has done that with Common Lisp...
Since stacks are irrelevant to most problems, unless you can abstract their use away, the need to use one seems to me like a hit against a language's abstraction capabilities. You're right, I guess; there are only a few cases where the stack really matters. It matters like lexically scoped variables matter in normal code, which really is irrelevant to abstraction. But then there are properties of Kont and Factor (but not Forth) that lend
it to more abstraction, like the use of combinators to replace macros in languages like Lisp and built-in syntax like C. Combinators to the programmer look like ordinary code that just happens to do something with a quotation, and they do function when run like that by the interpreter. But then the compiler inlines them to make them as fast as a macro. I'm not explaining them very well, though.
The Lisp/Scheme equivalent, at the repl, is just one short line of code, (trace procname). No alteration of the original code is required. Really? Oh, ok, then never mind that point. I don't have much experience with Lisp development tools. By Daniel Ehrenberg at Thu, 2005-08-11 16:07 | login or register to post comments
Since stacks are irrelevant to most problems, unless you can abstract their use away, the need to use one seems to me like a hit against a language's abstraction capabilities. You're right, I guess; there are only a few cases where the stack really matters. It matters like lexically scoped variables matter in normal code, which really is irrelevant to abstraction. Named variables are certainly not irrelevant to abstraction. Again, see my other reply. But then there are properties of Kont and Factor (but not Forth) that lend it to more abstraction, like the use of combinators to replace macros in languages like Lisp and
built-in syntax like C. Combinators to the programmer look like ordinary code that just happens to do something with a quotation, and they do function when run like that by the interpreter. But then the compiler inlines them to make them as fast as a macro. I'm not explaining them very well, though. I agree, those kind of high level features (borrowed from functional programming! ;) are useful, and certainly help. However, they work just as well in the presence of lexical names. (There's a difference in the way inlining tends to work, but the end result is similar.) By Anton van Straaten at Fri, 2005-08-12 04:55 | login or register to post comments
Infected
Uh-oh, you're showing signs of being infected by stack-oriented thinking, What a great way to refute somebody's point! This is just like putting your hands over your ears, and saying "neener, neener, I'm not listening". Perhaps you've been infected by Scheme-oriented thinking? By slava at Fri, 2005-08-12 18:15 | login or register to post comments
I started to implement that Swine Before Perl program, but after doing the non-macro part, which came out a bit cleaner than the Scheme program IMHO, I really felt like this is enough. What benefit do you derive from making that a macro? In Factor, syntax extension isn't as common as in Scheme because it's not needed as much, and in Kont, there is no syntax extension. If you're making that macro for a speed boost, that's just showing that you have a bad implementation of the language. I still made a separate syntax for defining the state machine, though. The code is below.
IN: state-machine
USING: kernel hashtables sequences lists ;
SYMBOL: init SYMBOL: loop SYMBOL: end

: c[ad]+r
    #! Initial datastructure
    {{ [[ init {{ [[ CHAR: c loop ]] }} ]]
       [[ loop {{ [[ CHAR: a loop ]]
                  [[ CHAR: d loop ]]
                  [[ CHAR: r end ]] }} ]]
       [[ end {{ [[ CHAR: r end ]] }} ]] }} ;

: walk-states ( seq states -- ? )
    init swap rot [ >r ?hash r> swap ?hash ] each-with end = ;

: {| POSTPONE: {{ POSTPONE: [[ ; parsing
: :: POSTPONE: {| ; parsing
: -> ; parsing
: || POSTPONE: ]] POSTPONE: [[ ; parsing
: ;; POSTPONE: ]] POSTPONE: }} POSTPONE: ]] POSTPONE: [[ ; parsing
: |} POSTPONE: ]] POSTPONE: }} POSTPONE: ]] POSTPONE: }} \ walk-states swons ; parsing

: final-machine
    {| init :: CHAR: c -> loop ;;
       loop :: CHAR: a -> loop ||
               CHAR: d -> loop ||
               CHAR: r -> end ;;
       end :: CHAR: r -> end |} ;
Is there something wrong with that code? It seems to be a bit shorter than the Scheme code. Edit: as Slava correctly pointed out on the #concatenative channel, those cosmetic macros are completely stupid and the first version is really the one that should preferably be used. By Daniel Ehrenberg at Thu, 2005-08-11 19:43 | login or register to post comments
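For comparison, the same c[ad]+r machine can be sketched as a plain dictionary walk in Python (a hypothetical rendering; the state and input names mirror the Factor hashtable version):

```python
# Transition table: state -> { input character -> next state }
MACHINE = {
    "init": {"c": "loop"},
    "loop": {"a": "loop", "d": "loop", "r": "end"},
    "end":  {"r": "end"},
}

def walk_states(s, states=MACHINE):
    state = "init"
    for ch in s:
        state = states[state].get(ch)
        if state is None:          # no transition: reject
            return False
    return state == "end"          # accept only in the end state

print(walk_states("cadr"))   # True
print(walk_states("cad"))    # False
```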
btw
here's how to activate it
"cadar" final-machine
or
"cadar" c[ad]+r walk-states
i.e.
k0:matrix[index;indices 0]
k1:matrix[k0;indices 1]
k2:matrix[k1;indices 2]
:
e.g.
  m'("car";"cdr";"cadr";"cddr";"cdar";"caddr")
(1 2 2 3
 1 2 2 3
 1 2 2 2 3
 1 2 2 2 3
 1 2 2 2 3
 1 2 2 2 2 3)
a 0 in the return vector indicates that m has entered the error state. on my pentium 4, 3 ghz machine timings are:
10k elements: 0 ms
100k elements: 15 ms
1m elements: 187 ms
state-machine is just a special case of index-scan, which in turn is a special case of scanning a function on its input. if you understand +, and you understand summation as repeated application of +, and you understand indexing, then it's a small step to knowing how to code up a state-machine. By sa at Fri, 2005-08-12 15:52 | login or register to post comments
Repent, sinners
#include <stdio.h>
int main(int argc, char **argv)
{
    char S[4][256]={{0},{['c']=2},{['a']=2,['d']=2,['r']=3},{['r']=3}}, s=1;
    /* remainder reconstructed; the original post was truncated here */
    if (argc > 1)
        for (char *p = argv[1]; *p; p++)
            s = S[(int)s][(unsigned char)*p];
    puts(s == 3 ? "accept" : "reject");
    return 0;
}
Understandable? Flexible?
That's great guys, unfortunately, it seems to miss the point fairly completely. Recall, Anton's pointer to the talk said (emphasis mine): Shriram Krishnamurthi's talk The Swine Before Perl provides a nice example of how powerful abstraction capabilities can help make code understandable. Neither the C, K, nor Factor code is quite as understandable as that presented in slide 38 of Shriram's talk. What's more (and probably more important), neither is as flexible as what he presents either. What he's shown[1] is a simple syntax extension for generating (fairly efficient) automata from these simple, clear descriptions via macros. Given the spec for another automaton, your solutions would have you hard-coding your new state tables while Shriram's would have you quickly done with the code and onto some other problem. [1] At least that's what I believe he's shown... I'm only working with the slides, and I know just enough Scheme to get myself in trouble -- feel free to taunt me if I'm wrong. One of these days I'll have to get over the Lisp allergy I picked up when I was a wee lad and really give Scheme a good look. -30- By William D. Neumann at Fri, 2005-08-12 17:50 | login or register to post comments
Flexible?
The Factor code that Dan showed is equivalent to the Scheme code. I'd like you to explain why you think that this:
(init : (c -> more))
(more : (a -> more)
        (d -> more)
        (r -> end))
(end : (r -> end))
The Factor version has a bit more punctuation, because Dan did not extend the syntax; he just used Factor's hashtable literals. Also he uses character literals instead of symbols. And for what it's worth, I consider the omission of hashtables from the Scheme standard to be an unacceptable limitation. I don't like the way typical Scheme code is written, using lists (including alists) and symbols for everything. It might look pretty but it just doesn't scale as well as other data structures for most tasks.
More confusion?
Unless I'm confused (which happens on a regular basis), I believe you're wrong. The Factor equivalent of this (from pg. 38)...
(automaton init (init : (c -> more)) (more : (a -> more) (d -> more) (r -> end)) (end : (r -> end)))
...looks like...
: final-machine
    {| init :: CHAR: c -> loop ;;
       loop :: CHAR: a -> loop ||
               CHAR: d -> loop ||
               CHAR: r -> end ;;
       end :: CHAR: r -> end |} ;
...not a lot of difference to my eyes. By Greg Buchholz at Fri, 2005-08-12 18:00 | login or register to post comments
understandable. flexible.
i don't think i missed either point. for a machine with n states, use an n x 256 matrix m. for each (state : input -> nextstate), assign
m[state;input]:nextstate
to run on input i:
r:1 m\i
setting up a particular machine can be done line-by-line, as in the scheme example, or it can be done all at once, as my example shows. no macros, no special syntax. if state-machines are matrices, and recognition is recursive two-dimensional indexing, then implement it as such, as directly as possible. By sa at Fri, 2005-08-12 18:55 | login or register to post comments
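The matrix formulation can be sketched in Python (illustrative, not K): an n x 256 transition matrix with state 0 as the error state, and recognition as repeated two-dimensional indexing starting from state 1.

```python
# m[state][input] -> nextstate; state 0 is the sticky error state.
m = [[0] * 256 for _ in range(4)]
m[1][ord("c")] = 2                 # (init : c -> more)
m[2][ord("a")] = 2                 # (more : a -> more)
m[2][ord("d")] = 2                 # (more : d -> more)
m[2][ord("r")] = 3                 # (more : r -> end)
m[3][ord("r")] = 3                 # (end  : r -> end)

def run(s):
    state, trace = 1, []
    for ch in s:
        state = m[state][ord(ch)]  # m[state;input] -> nextstate
        trace.append(state)
    return trace

print(run("cadr"))   # [2, 2, 2, 3] -- ends in the accepting state
print(run("cxr"))    # [2, 0, 0]    -- a 0 marks the error state
```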
Simplicity
This particular task does not even call for a state machine at all:
: swine ( sequence -- ? ) "c" ?head >r "r" ?tail r> and [ "ad" contained? ] [ drop f ] ifte ;
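A rough Python rendering of the same idea (a sketch, not a literal translation of the Factor word): strip a leading "c" and a trailing "r", then check that everything in between is drawn from "ad".

```python
def swine(s):
    # Leading "c", trailing "r", and only a's and d's in the middle.
    if len(s) >= 2 and s.startswith("c") and s.endswith("r"):
        return all(ch in "ad" for ch in s[1:-1])
    return False

print(swine("cadr"))   # True
print(swine("dog"))    # False
```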
As I said, I interpreted his comment that "one might conclude..." as hyperbole. I wasn't commenting on his other posts. By Marcin Tustin at Sat, 2005-08-13 08:50 | login or register to post comments
language implementation != language
language != language implementation By dhopwood at Thu, 2005-08-11 02:54 | login or register to post comments
Toplevel
I was just going to say that I quite enjoy O'Caml's toplevel(s). :-) By Paul Snively at Thu, 2005-08-11 03:16 | login or register to post comments
Slipping?
Paul, I'm disappointed. You forgot to mention Tuareg mode. ;oP Of course, GHC and SML/NJ also have nice toplevels. I wouldn't be surprised to find IDEs for them, although I've never looked. By Anton van Straaten at Thu, 2005-08-11 03:49 | login or register to post comments
Type systems
language implementation != language Yeah, but Haskell's and ML's type systems make it so that fewer things can be done at runtime, so development tools are more complicated and less useful. PLEEEASE don't get into a type system debate here, I just wanted to say that typing makes things more complicated. By Daniel Ehrenberg at Thu, 2005-08-11 16:13 | login or register to post comments
Foreign?
I can't imagine writing any nontrivial program in a concatenative language; it seems like a tour de force comparable to writing a Turing machine program, or directly in the lambda calculus using de Bruijn indices. Borrowing an example from another thread...
class.method().attribute.method( obj.attribute, obj.method() ).getSomething();
...Is it just me, or does that bear a striking resemblance to concatenative code? By Greg Buchholz at Wed, 2005-08-10 17:34 | login or register to post comments
names
...Is it just me, or does that bear a striking resemblance to concatenative code? expressions are easy enough. Defining functions without having argument names or let bindings to name your terms gets you into drop swap dup rot shenanigans... That kind of stuff is for compilers, not programmers. Up with that, I will not put. By James McCartney at Wed, 2005-08-10 18:54 | login or register to post comments
Re: names
So why do you troll in this thread? By slava at Wed, 2005-08-10 19:05 | login or register to post comments
Stack scramblers
I like William Tanksley's stack scramblers idea for concatenative languages. They look like this:
swap = ab-ba
drop = a-
dup = a-aa
I can never remember which of rollup or rolldown does which of abc-cba or abc-bca. Stack scramblers make this perfectly clear. Factor uses stack scramblers nowadays also.
--Shae Erisson - ScannedInAvian.com By shapr at Thu, 2005-08-11 11:37 | login or register to post comments
Umm... no
Factor uses stack scramblers nowadays also. It does? I'm pretty sure it still uses good old dup, drop, swap, etcetera. I could never remember which ones those stack shufflers were -- until I started actually programming. Then, it was crystal clear. By Daniel Ehrenberg at Thu, 2005-08-11 16:21 | login or register to post comments
Re: Foreign?
Compare some typical OOP code in four languages:
Common Lisp: (baz (bar (make-instance 'foo)))
Java: new Foo().bar.baz();
Smalltalk: Foo new bar baz.
Factor: <foo> bar baz
Factor's concatenative code looks very similar to how Smalltalk unary messages are composed. Note that CL code has to be read right to left, which ironically is a common complaint against *postfix* languages. By slava at Wed, 2005-08-10 19:02 | login or register to post comments
"kont"
means "ass" in dutch. By Wouter at Wed, 2005-08-10 21:05 | login or register to post comments
As in "arse" or "donkey"?
As in "arse" or "donkey"? By Marcin Tustin at Thu, 2005-08-11 14:15 | login or register to post comments
Arse
The dutch word "kont" is etymologically related to the english word "cunt". The original meaning of both words is "hole". (Source: The Alternative Dutch Dictionary) By Sjoerd Visscher at Thu, 2005-08-11 15:30 | login or register to post comments
Well...
we've had Brainfuck for some number of years now. Not to mention Linda, named after a porn star. I'm sure there are a few other (as yet undiscovered) dirty jokes and double entendres out there.... By Scott Johnson at Thu, 2005-08-11 21:45 | login or register to post comments
Stack-based is point-free?
If you represent a stack as nested pairs, you can treat Forth words as unary functions of the form stack -> new_stack. Integer constants would have the type forall a. a -> (a, Int), and the arithmetic functions like +, -, etc., would be typed forall a. ((a, Int), Int) -> (a, Int). With a type system, you can appeal to polymorphism to guarantee that functions won't interfere with elements beyond a certain depth in the stack. It's pretty easy to model this in a functional language (note: this is not a claim of superiority/inferiority). Here's a GHCi session:
Prelude> let k = flip (,)
Prelude> let lift2 f ((s,a),b) = (s, f a b)
Prelude> let plus = lift2 (+)
Prelude> :t plus
plus :: ((a, Integer), Integer) -> (a, Integer)
Prelude> :t plus . plus
plus . plus :: (((a, Integer), Integer), Integer) -> (a, Integer)
Prelude> :t plus . k 4
plus . k 4 :: (a, Integer) -> (a, Integer)
Prelude> :t plus . k 4 . k 2
plus . k 4 . k 2 :: a -> (a, Integer)
Prelude> snd . plus . k 4 . k 2 $ ()
6
What's interesting is that arrows in Haskell are defined in a point-free style, along with a special syntax involving local names which is translated into the point-free style. Thus,
proc x -> do
  y <- f -< x
  z <- g -< x
  returnA -< y + z
becomes
arr (\ x -> (x, x)) >>> first f >>> arr (\ (y, x) -> (x, y)) >>> first g >>> arr (\ (z, y) -> y + z)
which consists primarily of composition and "stack" manipulation. (Arrows pass multiple values around with nested tuples, but they don't prefer one side or the other, so can be more tree-like than stack-like.) By Dave Menendez at Thu, 2005-08-11 23:55 | login or register to post comments
Joy in Haskell
Things get a little interesting when you get to the recursion combinators, but with a dash of type-system wizardry, you can write most any Joy combinator in Haskell. By Greg Buchholz at Fri, 2005-08-12 01:24 | login or register to post comments
language and the lambda calculus. I think that being able to name variables explicitly is a genuine advantage, but obviously, mileage varies. By neelk at Fri, 2005-08-12 19:32 | login or register to post comments
stack, which is probably a good sign that you need to break the word up into a couple of more intuitive words (or that your interface to the word is too complicated).
By pkhuong at Tue, 2005-08-09 23:21 | login or register to post comments
Not the point
Just embed a domain-specific language for infix arithmetic and be done with it. And then you're not writing arithmetic in a concatenative language. The example just happened to use arithmetic; however, the criticisms (e.g. about the potential error-proneness of the implicit stack's introduction of variables without names) are not specific to that case. It's quite easy to write a language that compiles to concatenative code which then goes through Factor's native compiler.
Right -- I didn't say that concatenative languages aren't suitable as intermediate languages (although I wonder why you'd use them rather than a register-based IL, if you're targeting a register-based machine). I've found that the time from noticing a bug, to working it out, to fixing it is very short in Factor. And in applicative languages too, usually, if you notice the bug. ... jEdit is written in Java. Like all Java software, it's a pain to maintain, ... Am I unintentionally defending Java?
By dhopwood at Wed, 2005-08-10 14:07 | login or register to post comments
Monads
Just embed a domain-specific language for infix arithmetic and be done with it. And then you're not writing arithmetic in a concatenative language. Let's try rewriting that a little differently... Just use monads (as an embedded domain-specific language) for sequencing side effects and be done with it. And then you're not writing functions in a pure language.
By Greg Buchholz at Wed, 2005-08-10 18:02 | login or register to post comments
I agree
I agree. The corollary to your point about names is the presence of all the names in the Joy code which have nothing to do with the problem domain, like pop, swap, dup, and rollup. That's a classic sign of a low-level language (think of assembler). Also, in concatenative languages, the programmer depends heavily on abstractions which the language doesn't enforce or even know about: every time you use pop, you're making an assumption about what's on the stack, but that assumption could be violated by unrelated code, so that an expression like:
(let ((x 33))
  (foo)
  x)
...could return a value other than 33, if foo has a bug. This kind of non-local side-effect bug is reminiscent of something like a pointer error. In other contexts, this kind of behavior is called "unsafe": an abstraction isn't protected from unauthorized external interference. In the concatenative context, you can play language lawyer and claim that since the language doesn't have the notion of a local binding, nothing is being violated. However, that would only be a good argument if programmers didn't need to rely heavily on faking local bindings in concatenative languages.
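The `(let ((x 33)) (foo) x)` scenario can be simulated directly on a shared stack. In this Python sketch (my own illustration, not any real concatenative implementation), the hypothetical word `foo_buggy` pops one value more than its contract allows and silently destroys the caller's "binding":

```python
# A single shared data stack, as in Forth/Joy/Factor.
stack = []

def foo_buggy():
    # Contract: foo takes no inputs and leaves no outputs.
    stack.append(7)
    stack.pop()
    stack.pop()        # one pop too many: consumes the caller's value

stack.append(33)       # the caller's "local binding", x = 33
foo_buggy()
x = stack.pop() if stack else None
print(x)               # not 33: an unrelated word clobbered the binding
```

Nothing at the language level connects `foo_buggy` to the caller's value, which is exactly the non-local failure mode being described.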
By Anton van Straaten at Wed, 2005-08-10 15:48 | login or register to post comments
Incorrect
The corollary to your point about names is the presence of all the names in the Joy code which have nothing to do with the problem domain, like pop, swap, dup, and rollup. What about names like let, tagbody, multiple-value-bind and setf in Common Lisp programs? They also have nothing to do with the problem domain. Is CL low-level, "like assembler"? This kind of non-local side-effect bug is reminiscent of something like a pointer error. In other contexts, this kind of behavior is called "unsafe": an abstraction isn't protected from unauthorized external interference. Factor has a stack effect inferencer which could be used to check that whatever word does in fact return the correct values. Suppose you write a function that takes a mutable object as input, and it violates its contract by mutating the object into an incorrect state; how is this different from a function with an invalid stack effect? Are you going to argue in favor of purely functional languages, which are exclusively the domain of academics?
By slava at Wed, 2005-08-10 16:11 | login or register to post comments
Factor has a stack effect inferencer which could be used to check that whatever word does in fact return the correct values. Could be used? That implies this isn't something that happens automatically, to transparently guarantee the safety of "bindings" (or whatever you want to call them in a concatenative language). If that's the case, then this creates an area the user has to worry about that they otherwise wouldn't have to worry about. It's hard to see how that would be a benefit. Suppose you write a function that takes a mutable object as input, and it violates its contract by mutating the object into an incorrect state; how is this different from a function with an invalid stack effect? It's different in that the caller knows it has passed a mutable object to that function -- it's not a case where an essentially unrelated function can destroy an abstraction it has no connection to. However, it's true that side effects are a major cause of problems, which is why the most powerful languages provide ways to control side effects. Having all programs depend implicitly on a stack that can be directly side-effected by any function is a step backwards in this respect. Are you going to argue in favor of purely functional languages, which are exclusively the domain of academics? First, it's demonstrably false that they are "exclusively the domain of academics". Simple refutation: Erlang is used to run telecom switches, and it's a purely functional language. There are plenty more examples where that came from. Second, while I don't use purely functional languages in all contexts, I do use languages that encourage and support control of side effects, and I do write code that avoids side effects for the most part.
This has numerous demonstrable advantages -- if you want to start arguing against them, I'd point you here. Lest this be taken as entirely negative about concatenative languages, I should mention that I've written systems in Forth, experimented with Joy, and have written a toy Joy interpreter. I think they're very interesting languages, and can be fun and useful in certain kinds of applications, but I don't think the languages can support the kinds of claims you seem to want to make. If Factor can support such claims, that would be interesting, but you're not going to convince many people by blustering about it and making over-the-top criticisms of other, proven languages.
By Anton van Straaten at Wed, 2005-08-10 17:02 | login or register to post comments
More incorrect claims
What logic is used to justify the idea that you can only have global names? I'm not sure what logic is used, because I don't follow it. Factor has variables, and nested, scoped namespaces. Did you even bother to look at it? Could be used? That implies this isn't something that happens automatically, to transparently guarantee the safety of "bindings" (or whatever you want to call them in a concatenative language). If that's the case, then this creates an area the user has to worry about that they otherwise wouldn't have to worry about. It's hard to see how that would be a benefit.
I could add some tooling to do this automatically for every new word definition. Lest this be taken as entirely negative about concatenative languages, I should mention that I've written systems in Forth, experimented with Joy, and have written a toy Joy interpreter. So your only experience with concatenative languages is having written a toy Joy interpreter? If I write a toy interpreter for a simple applicative language, like Scheme, and then notice that this toy interpreter is hard to work with, should I conclude that applicative languages as a whole are useless? If Factor can support such claims, that would be interesting, but you're not going to convince many people by blustering about it and making over-the-top criticisms of other, proven languages. Show where I've been "blustering" and making over-the-top criticisms.
By slava at Wed, 2005-08-10 17:09 | login or register to post comments
This is a richer level of functionality than that offered by most of the languages discussed on this website. You might even conclude that applicative languages are totally unproductive, and not even capable of implementing their own optimizing native compilers. Are you going to argue in favor of purely functional languages, which are exclusively the domain of academics?
By Anton van Straaten at Wed, 2005-08-10 19:55 | login or register to post comments
Then why do you do the equivalent here? This is a forum for discussion of programming languages. I'm interested in the semantics of programming languages and, among other things, how these relate to the semantics of the user experience of programming. My original post in this thread wasn't intended as a criticism of Kont specifically (and besides, Brian Mastenbrook has already heard my opinion on that subject). Rather, it was an attempt to characterize one of the semantic issues that distinguish concatenative languages from name-based languages with procedure-local abstractions (also known as local variables). I believe this issue is one of the factors underlying a common negative user reaction to these languages. As someone interested in concatenative languages, I think you might have some interesting insights on the matter, but for some reason you haven't chosen to offer them yet. Other people have, though, which is why I'm here.
By Anton van Straaten at Thu, 2005-08-11 00:20 | login or register to post comments
I said concatenative languages show a particular sign of being a low-level language. I don't think concatenative languages are lower level. They just handle things differently, in a way that other languages happen to use as an intermediate form. That doesn't at all mean it's worse, just different. If a C compiler uses an intermediate form that's single-assignment, does that mean C is higher level than any other kind of single-assignment form? Of course not.
Rather, it was an attempt to characterize one of the semantic issues that distinguish concatenative languages from name-based languages with procedure-local abstractions (also known as local variables). I believe this issue is one of the factors underlying a common negative user reaction to these languages. How are local variables abstractions? Could you explain that? In what way are they more abstract and flexible than stack shuffling operators?
By Daniel Ehrenberg at Thu, 2005-08-11 17:33 | login or register to post comments
language in a given situation, and helps with cutting through hype about specific programming languages. I don't think concatenative languages are lower level. They just handle things differently, in a way that other languages happen to use as an intermediate form. That doesn't at all mean it's worse, just different. This particular issue is different in a way that is lower-level. That's been covered in a few posts, and connects to Alan Perlis' observation that low level means "programs require attention to the irrelevant". The irrelevant thing in this case is direct stack manipulation, which you've acknowledged: "there are only a few cases in which the stack really matters". You've suggested elsewhere that the same can be said of lexical names, but I've addressed that in my response to your last point below. If a C compiler uses an intermediate form that's single-assignment, does that mean C is higher level than any other kind of single-assignment form? Of course not. C code certainly is usually higher-level than any internal single-assignment form it's likely to be compiled to. The way you can tell is to look at the additional operations being performed in the single-assignment code to deal with the constraints of single assignment -- the fact that those additional operations aren't needed in the C source indicates that they're irrelevant to the expression of the original problem, and that the single-assignment form is lower level. If your concern is with the generalization, i.e. drawing conclusions about "...any other kind of single-assignment form", that's not what I'm doing.
I'm looking at a particular language which regularly embeds direct stack manipulation operators in its code, and concluding that, in most cases, this is a lower-level way of dealing with abstraction over values than the use of names. How are local variables abstractions? Could you explain that? In what way are they more abstract and flexible than stack shuffling operators? A local variable is an abstraction in the same way that any name is an abstraction. The name refers to something else, and we can use the name and manipulate what it refers to without knowing which specific thing it refers to. That's the abstraction. When you push a value onto a stack and later apply some operation to that value, that's also a kind of abstraction. However, not all abstractions are equal. Direct stack manipulation alone doesn't scale very well, as all concatenative programmers already know. As procedures get bigger, keeping track of values via stack manipulation becomes more complex. Earlier in the thread, we were reminded that "any time you do lots of stack manipulations is a sign that you need to refactor". And what do you do when you refactor? You create names, which (properly implemented) are more scalable, safer abstractions than storing values on a stack. So one question is, if names are so useful and important when code scales beyond a single procedure, might they not be useful locally, within a procedure? And of course, many languages demonstrate that they are. One of the fundamental characteristics of names is that they can effectively serve two purposes:
1. A name acts as a way of referring to a value, i.e. a name is a "reference", or can be said to "denote" some value.
2. A name can provide information to programmers about the value, or its role in the computation, i.e. a name can also have a "sense" or "connotation".
This is why we try to choose informative names, and why the expression given elsewhere in this thread was compiling dataflow optimize linearize simplify generate, rather than a b c d e f. Appropriately defined, both expressions could have exactly the same computational effect, but one is much more meaningful to us than the other. When we omit names from our code, we're omitting one of the few things that connects the code to the problem it's solving, and that's something that should be done carefully. If a language routinely forces you to omit names, that's something worth taking a close look at. Using a stack to abstract values almost never serves the second purpose above. However, naming all values isn't necessary or appropriate either. In the expression quoted above, there's no need to name the intermediate values, and most languages support this kind of omission of names in one form or another. But omitting names is one thing; replacing them with operators that directly manipulate an abstraction (the stack) that's otherwise unrelated to the computation is something else entirely. Using stack shuffling operators is a low-level detail no matter how you look at it, and whenever it replaces the use of names that could have been informative, there's a double loss in the program's ability to express its algorithm to the human reader. In general, it's difficult to argue that expressing most problems in terms of what is essentially a LIFO stack is an approach that makes sense in itself. The most convincing arguments, I think, have to do with the benefits of that approach to language implementation: by pushing some of the work of arranging the abstraction mechanism onto the user, the language implementation's job is simplified, which can have various benefits.
I'm sure there's plenty in what I've written above for a concatenative fan to object to. Keep in mind that I've programmed in these languages, and I'm familiar with the arguments. I could make them myself: local names aren't needed because factoring keeps procedures small enough to be manageable; non-local names used in conjunction with the stack locally work quite well most of the time; languages that force you to name e.g. function return values just to be able to shift their position to pass them to another function similarly obscure the solution to the problem; etc. etc. My main interest is in characterizing the differences in a more specific way than just saying something along the lines of "I don't like this", or "I think this is a fun way to program". Obviously there are differences, but if we can't clearly describe their effect other than by simply listing technical features, that implies a gap in our understanding.
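The dual role of names is easy to see with the thread's running example. In this Python sketch (the function names are mine), both definitions compute the same roots, but only one tells the reader what each intermediate value means; the other, like stack positions, says only where each value is:

```python
import math

def quadratic_named(a, b, c):
    # Each intermediate carries its "connotation" as well as its reference.
    minus_b = -b
    radical = math.sqrt(b * b - 4 * a * c)
    divisor = 2 * a
    return ((minus_b + radical) / divisor,
            (minus_b - radical) / divisor)

def quadratic_positional(a, b, c):
    # Same computation with meaningless names: still correct, but the
    # reader must reconstruct each value's role from the dataflow alone.
    d = -b
    e = math.sqrt(b * b - 4 * a * c)
    f = 2 * a
    return ((d + e) / f, (d - e) / f)

print(quadratic_named(1, 0, -4))   # (2.0, -2.0): roots of x^2 - 4 = 0
```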
By Anton van Straaten at Fri, 2005-08-12 03:55 | login or register to post comments
2variable
In #comment-8723, Anton wrote:
So one question is, if names are so useful and important when code scales beyond a single procedure, might they not be useful locally, within a procedure? And of course, many languages demonstrate that they are. I'm not sure if the following quote supports your contention. I ran into it while skimming through Forth Sorts Fruit Down Under by Paul Frenger in ACM SIGPLAN Notices, vol. 40, no. 8, Aug 2005 (not yet online): The CF [ColdForth] manual has other interesting chapters to describe its features: numbers/parameters (CF supports local variables via stack frames), the memory manager, I/O handling (mostly for accessing files), strings, object-oriented extensions, TCP/IP capability and the multi-tasker, for example.
By el-vadimo at Wed, 2005-09-14 12:11 | login or register to post comments
2drop
Even concatenative languages acknowledge that names are important, since they all support global names. What logic is used to justify the idea that you can only have global names? It boils down to something like "because it's easy to implement a language that way, and possible to get away with it and work around it". That's not a good argument from the language user's perspective. It might explain why you, as an implementor of such a language, are enthusiastic about them, though. Aside from the fact that several concatenative languages do actually support local variables, the reason they're frowned upon is because in most cases they are an impediment to proper factoring. In a great deal of applicative code, the variables are simply assigned useless placeholder names. This should indicate something to you: most of the time, those names are unnecessary. However, it's true that side effects are a major cause of problems, which is why the most powerful languages provide ways to control side effects. Believe it or not, not everyone considers Haskell and friends to be the pinnacle of language design or expressive power. It's worrying to me that this claim of "most powerful" is accepted here essentially on faith. How do you define power? The APL dialects and derivatives, for example, are all much more expressive than any purely functional language. What do you think of tacit programming in J? Simple refutation: Erlang is used to run telecom switches, and it's a purely functional language. Erlang is not purely functional (ets, dets, mnesia, process dictionary, the list goes on). If Factor can support such claims, that would be interesting, but you're not going to convince many people by blustering about it and making over-the-top criticisms of other, proven languages. The original poster likened concatenative programming to "writing a Turing machine program, or directly in the lambda calculus using de Bruijn indices." 
Perhaps you should reconsider whose claims are 'over the top'.
By Mackenzie Straight at Wed, 2005-08-10 23:38 | login or register to post comments
Stacks of responses
Aside from the fact that several concatenative languages do actually support local variables Do those local variables protect against the issue I've raised, ie are they safe from being changed by unrelated stack operations? the reason they're frowned upon is because in most cases they are an impediment to proper factoring. Is that an issue specific to the concatenative languages? Otherwise, I find it hard to see how names are an impediment, in languages with traditional local variables. Can you give an example?
In a great deal of applicative code, the variables are simply assigned useless placeholder names. This should indicate something to you: most of the time, those names are unnecessary. I don't see how using "push" and "pop" to abstract values helps address that, in most cases. There are cases where a stack is certainly the right abstraction to use, of course. Believe it or not, not everyone considers Haskell and friends to be the pinnacle of language design or expressive power. I didn't say that. I said that "it's true that side effects are a major cause of problems, which is why the most powerful languages provide ways to control side effects." I would include many imperative languages in that list, including eg C++, Java, and Ada -- languages which provide qualifiers like "const" that allow the programmer to guard in a meaningful way against unwanted side effects, not just on individual variables but also over larger scopes, such as declaring that a method cannot side-effect the object it's associated with. Controlling side effects is a major theme in very demanding systems, not just at the language level but in the design of systems. Functional languages just implement this at the language level on a more comprehensive basis. It's worrying to me that this claim of "most powerful" is accepted here essentially on faith. How do you define power? Hopefully my response above gives a better idea of what I was getting at. Obviously, "power" is one of those things, like "expressiveness", which is difficult to formalize satisfactorily, but it could perhaps be characterized as the ability to handle very demanding tasks, preferably demonstrated in practice. The APL dialects and derivatives, for example, are all much more expressive than any purely functional language. What do you think of tacit programming in J?
I consider tacit programming -- which has strong similarities to point-free style -- to be higher-level than direct stack manipulation, for the reasons stated elsewhere in this thread. However, all of these things have pros and cons. The cons tend to have to do with being more difficult to reason about, especially as expressions scale up. Ideally, a language should be able to support a variety of styles, so that the appropriate one can be picked in a given situation. Of all of these, I don't see direct stack manipulation as one that's likely to be the best choice in most situations. On the subject of scaling up expressions, the traditional concatenative answer to that is that you should factor larger expressions into multiple words. However, that means you're relying on names for abstraction again, just on a scale greater than the local procedure. How does this reconcile with the idea that using local names is an impediment to refactoring? Erlang is not purely functional (ets, dets, mnesia, process dictionary, the list goes on). Define it however you like -- "very functional"? Do you consider Haskell purely functional, even with the I/O monad and the things it can talk to? My real point was that a strong emphasis on avoidance of side effects is not something that's solely the domain of academics. I consider the definition of "purely" here to be a quibble -- eg anything that interacts with a traditional database is ultimately impure, when considered as a whole system. The original poster likened concatenative programming to "writing a Turing machine program, or directly in the lambda calculus using de Bruijn indices." Perhaps you should reconsider whose claims are 'over the top'. That's a fair point, although if you focus purely on how local value abstraction works, I can see his point. Having the option to use de Bruijn indices might be an improvement over having no alternative to direct stack manipulation, and I don't think it's over the top to say that.
Also, the original poster did also make some other real points.
By Anton van Straaten at Thu, 2005-08-11 01:18 | login or register to post comments
What's in a name?
Aside from the fact that several concatenative languages do actually support local variables, the reason they're frowned upon is because in most cases they are an impediment to proper factoring.
In a great deal of applicative code, the variables are simply assigned useless placeholder names. This should indicate something to you: most of the time, those names are unnecessary. Programmers, and typical programming styles in various languages, differ considerably in how many named intermediate variables they use (personally I find the number of intermediates in the above Scheme code a bit excessive). But it's generally agreed, I think, that introducing variables with useless names is bad style. In functional-applicative languages, it's almost always possible to avoid this. The need for temporaries with useless names, eg loop indices, in imperative-applicative languages (some more than others) is indeed a valid criticism of them.
By dhopwood at Thu, 2005-08-11 01:26 | login or register to post comments
Types?
I agree. The corollary to your point about names is the presence of all the names in the Joy code which have nothing to do with the problem domain, like pop, swap, dup, and rollup. That's a classic sign of a low-level language (think of assembler). Are the stack manipulation functions that you mention (pop, swap, dup, etc.) a sign of a low-level language or a linear language (like Linear Lisp)? Or are all linear languages low-level by definition? Also, in concatenative languages, the programmer depends heavily on abstractions which the language doesn't enforce or even know about: every time you use pop, you're making an assumption about what's on the stack, but that assumption could be violated by unrelated code That sounds like a plea for static typing. ...every time you use pop, you're making an assumption about what's on the stack, but that assumption could be violated by unrelated code Is it really that much different from the applicative language example? If your function takes three integers, how do you know you're calling it with the arguments in the correct order? Especially in the presence of macros?
(apply quadratic-formula '(1 0 -4))
(apply quadratic-formula (reverse '(1 0 -4)))
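The `(apply quadratic-formula (reverse '(1 0 -4)))` point translates directly to any applicative language: with purely positional arguments, a call with the coefficients reversed is just as well-formed as the correct call, and nothing catches it. A Python sketch (the `quadratic` definition is my own):

```python
import math

def quadratic(a, b, c):
    # Returns both roots of a*x^2 + b*x + c = 0 (real roots assumed).
    radical = math.sqrt(b * b - 4 * a * c)
    return ((-b + radical) / (2 * a), (-b - radical) / (2 * a))

args = (1, 0, -4)                        # x^2 - 4 = 0, roots +/-2
print(quadratic(*args))                  # (2.0, -2.0)
# Reversed coefficients solve a different equation, -4x^2 + 1 = 0,
# yet the call is accepted without complaint:
print(quadratic(*reversed(args)))        # (-0.5, 0.5)
```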
More Linear
For a better reference on linear languages (which also mentions the quadratic formula!) see Linear Logic and Permutation Stacks--The Forth Shall Be First
By Greg Buchholz at Wed, 2005-08-10 17:44 | login or register to post comments
Las diferencias
Are the stack manipulation functions that you mention (pop, swap, dup, etc.) a sign of a low-level language or a linear language (like Linear Lisp)? Or are all linear languages low-level by definition? I would say that Linear Lisp is certainly lower-level than ordinary Lisp, yes. I'll admit I can't give a formal definition of "low-level" off the top of my head, but I'd be interested to hear (reasoned) arguments about how something like Linear Lisp or the concatenative languages are not lower-level. That sounds like a plea for static typing. Yes, it is. The question is what the costs of introducing it are. The costs of introducing a safe name abstraction are negative. The costs of introducing static typing are positive, and arguably not insignificant. In the static typing case, I find the costs high enough that I can easily justify using it in some cases, and avoiding it in others. In the case of relying primarily on direct interaction with a stack to store values, I can easily justify avoiding that in most cases. Is it really that much different from the applicative language example? If your function takes three integers, how do you know you're calling it with the arguments in the correct order? It's different at least in the sense that that's a local effect: the error is confined to the interface between the caller and the callee, and doesn't compromise unrelated abstractions elsewhere in the program. I'm not saying that any language with named bindings solves all such problems, I'm just saying that there's a spectrum of safety and levels of abstraction, and that a language with named bindings which can make some guarantees about those bindings is both safer and higher-level than a language that doesn't have those features.
By Anton van Straaten at Wed, 2005-08-10 18:10 | login or register to post comments
The Joy code sample near the beginning of this thread provides an example of low-level code, by my definition and Alan Perlis's.
By Anton van Straaten at Wed, 2005-08-10 20:04 | login or register to post comments
"Low level"
I'll admit I can't give a formal definition of "low-level" off the top of my head There's always Alan Perlis's definition: A programming language is low level when its programs require attention to the irrelevant. Although I don't think it's really completely correct to label languages high or low level. At most you can say that writing in a given language generally results in high- or low-level code. Anyway, by that definition, concatenative code is low-level because it forces you to write your solution in terms of a stack, despite the fact that there's no stack in your problem specification. Likewise, monads result in low-level code, because you're forced to think about category theory just to print "hello" :-)
By Dan Winship at Wed, 2005-08-10 19:30 | login or register to post comments
Analogy...
In the concatenative context, you can play language lawyer and claim that since the language doesn't have the notion of a local binding, nothing is being violated. However, that would only be a good argument if programmers didn't need to rely heavily on faking local bindings in concatenative languages. Sounds a lot like type errors in dynamically checked languages... But now I guess I'm the one trolling... ;)
By Matt Hellige at Wed, 2005-08-10 17:31 | login or register to post comments
Static analysis
The Factor language apparently includes a stack effect inference tool which performs such analysis. The documentation mentions combining unit testing with stack effect inference to check the stack effect of procedures (words). This seems like a lot of trouble to achieve a kind of safety that other languages get for free. I'm not saying that this issue alone makes concatenative languages unfit for human consumption. However, I think the negative reaction to the tractability of these languages that's been expressed by some posters in this thread is not an uncommon one, and I've attempted to identify a couple of the semantic issues which underlie this reaction. In this particular case, I'm saying that even if you get past the lack of procedure-local name/value bindings, the approach these languages take to local value abstraction is less safe and requires more care than languages with name-based local abstraction.
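The kind of inference described above can be sketched quite simply. The following is a minimal Python illustration (not Factor's actual algorithm; the `EFFECTS` table and function names are my own) of computing a word sequence's stack effect by composing per-word consume/produce counts:

```python
# Hypothetical sketch of stack-effect inference: each primitive declares how
# many items it consumes and produces, and the effect of a word sequence is
# computed by tracking the net depth and the deepest reach below the start.

EFFECTS = {            # word: (consumed, produced) -- illustrative primitives
    "dup":  (1, 2),
    "drop": (1, 0),
    "swap": (2, 2),
    "+":    (2, 1),
    "*":    (2, 1),
}

def stack_effect(words):
    """Return (inputs, outputs) for a sequence of words."""
    depth, min_depth = 0, 0
    for w in words:
        consumed, produced = EFFECTS[w]
        depth -= consumed
        min_depth = min(min_depth, depth)   # deepest reach below the start
        depth += produced
    inputs = -min_depth
    outputs = depth + inputs
    return inputs, outputs

# "dup * +" squares the top item and adds it to the item below:
# it needs 2 inputs and leaves 1 output, i.e. effect ( a b -- c ).
print(stack_effect(["dup", "*", "+"]))  # (2, 1)
```

A real inferencer also has to handle conditionals (both branches must agree) and recursion, which is where the interesting work lies.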
By Anton van Straaten at Wed, 2005-08-10 22:35 | login or register to post comments
empty stacks
Likewise, executing pop on an empty stack produces
Empty stack is a special case that's not as problematic, because it can be detected and flagged when it happens. The cases that are problematic are when the stack is not empty, but is out of sync with the program's expectations. That's when the programmer's implicit abstractions are violated.
There are no problems at all with empty stacks. Factor solves the stack underflow problem by mprotect-ing (or something like that) the memory directly underneath the stack, triggering a segfault if there's underflow. Should this happen, the segfault can be caught like any other exception, and at the toplevel of the repl it's caught automatically and printed out, also like any other exception. All this stuff about ⊥ is useless outside of research. An infinite loop has a very different outcome than stack underflow.
By Daniel Ehrenberg at Thu, 2005-08-11 17:50 | login or register to post comments
{ quadratic | abc } { roots ~ b sqrt where { roots | xyz } => which seems clear enough,
disclaimer - i can't remember the exact details of otuto v1, and v2 is stalled until i gain enough patience to return to the uml tool (i'm trying to implement it in c++ for various (vaguely financial) reasons).
By andrew cooke at Wed, 2005-08-10 16:50 | login or register to post comments
concrete example
OK, perhaps you can have hidden lexical scope pointers and use a cord. But this seems to create a pretty confusing language. For a simple example, suppose we have a Joy-like language plus an operator 'let' that takes a value, a quoted symbol, and another quotation which is evaluated in an environment where that symbol is bound to the value:

2 [x] [x 1 +] let

so that the above returns 3. Now:

1 [x] [[x]] let
2 [x] [[x +]] let
concat

this returns the concatenated function [x x +], however each x is bound to a different value. [x x +] i returns 3
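One way to make the example above concrete: if 'let' is implemented by substituting the value into the quotation (rather than by closing over an environment), the concatenation does behave as hoped. A minimal Python sketch of that Joy-like 'let' (all names here are mine, for illustration):

```python
# 'let' substitutes the value for the symbol throughout a nested quotation;
# quotations are Python lists, and 'run' is a tiny interpreter in which a
# quotation pushes itself rather than executing.

def let(value, symbol, quote):
    """Replace symbol with value throughout a (possibly nested) quotation."""
    return [let(value, symbol, w) if isinstance(w, list)
            else (value if w == symbol else w)
            for w in quote]

def run(program, stack=None):
    """Evaluate a quotation against a stack."""
    stack = [] if stack is None else stack
    for w in program:
        if w == "+":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        else:
            stack.append(w)          # literals and quotations push themselves
    return stack

# 1 [x] [[x]] let   => pushes the quotation [1]
q1 = let(1, "x", [["x"]])[0]
# 2 [x] [[x +]] let => pushes the quotation [2 +]
q2 = let(2, "x", [["x", "+"]])[0]
# concat, then i: each former 'x' carries its own binding
print(run(q1 + q2))  # [3]
```

The confusion the comment points at arises exactly when 'let' is instead given closure semantics: then the two x's in the concatenated [x x +] would need to remember different environments, which plain concatenation cannot express.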
By James McCartney at Thu, 2005-08-11 19:13 | login or register to post comments
{ dup | x } => { x x } but that's beside the point. all that the otuto definitions describe is how you rearrange the stack(s). you could implement them by cross-compiling to an intermediate representation without variable names (using swap, roll etc etc) (i believe - i've never bothered to prove it, but i would be amazed if it were not true). i'm not 100% convinced i understand what concatenative is, so i can't say otuto is concatenative, but i'm pretty sure that those variable names are just sugar. a long time back, when i mentioned this on the joy email list, someone added something similar to another language there.
By andrew cooke at Fri, 2005-08-12 22:01 | login or register to post comments
otuto
As I understand it, if a language is concatenative then functions are sequences of operators where you can take any function and split it at any point and you have two valid functions, or you can join two functions and have a valid function. I'm not sure what it would mean to split or join functions in otuto, since they are rules with 4 separate sections, not simple sequences of operations.
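The splitting property described above can be checked mechanically: if a program denotes a function from stacks to stacks, and concatenation denotes composition, then running the two halves of any split in sequence must equal running the whole. A Python sketch (my own toy interpreter, for illustration):

```python
from functools import reduce

# A concatenative program is a list of words; each word denotes a function
# from stacks to stacks, and running a program is folding those functions.

PRIMS = {
    "dup": lambda s: s + [s[-1]],
    "*":   lambda s: s[:-2] + [s[-2] * s[-1]],
    "+":   lambda s: s[:-2] + [s[-2] + s[-1]],
}

def denote(word):
    if word in PRIMS:
        return PRIMS[word]
    return lambda s: s + [word]          # literals push themselves

def run(program, stack):
    return reduce(lambda s, w: denote(w)(s), program, stack)

program = [1, 4, "dup", "*", "+"]
whole = run(program, [])
for i in range(len(program) + 1):        # every split point gives a valid pair
    front, back = program[:i], program[i:]
    assert run(back, run(front, [])) == whole
print(whole)  # [17]
```

Splitting otuto rules is indeed less obvious, since a rule is not a flat word sequence.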
By James McCartney at Sat, 2005-08-13 16:32 | login or register to post comments
concatenative is ..
A language in which the concatenation of programs denote the composition of functions has been called a concatenative language. I first heard this terminology from Billy Tanksley. -- Manfred von Thun otuto is concatenative. so is XY . Here's one version of quadratic in that language, which uses names to similar effect (that is, to map items from the stack to multiple locations in the body of the code): ; quadratic { [abc] b -: ac * 4 * b 2 ^ -. .5 ^ [1 -1] * + a 2 * % } ; regarding your proposal for cross-compilation, i think it would be sufficient to compile programs like quadratic , in which names are used to map items from the stack to multiple locations, to programs which consist of an initial reordering of items on the stack, followed by code which is name-free. Billy Tanksley has suggested a notation (supported in XY) in which an initial "shuffle-pattern" is followed by name-free code. For example, compile
{ [abc] ab * ac * + } to abc--abac * [*] dip + [edited to show an example of compilation from pattern-notation to shuffle-notation]
By sa at Sat, 2005-08-13 23:37 | login or register to post comments
Squinting
I don't think squinting is necessary at all. In fact, lately I've been thinking that there might be a benefit for newbies in writing a monad tutorial using a language like Joy. Subroutines (quotations) in Joy are lists of functions and you interpret them by folding in a 'compose' and applying to a stack... fold compose id [1 4 dup * +] Replace 'compose' with 'bind' for your favorite monad, and I think you might be on your way to removing much of the mystery behind monads (or at least you've given them an opportunity to see monads that aren't tied up in Haskell syntax and type-classes).
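The fold-compose idea, and the swap-in-bind variation, can be sketched in a few lines of Python (names and the Maybe-style bind are my own illustration, not Joy's machinery):

```python
from functools import reduce

# Interpret a quotation by folding 'bind' over stack transformers, starting
# from the identity. With ordinary composition this is plain Joy evaluation;
# with a Maybe-style bind, None propagates and aborts the rest of the program.

def dup(s):  return s + [s[-1]]
def mul(s):  return s[:-2] + [s[-2] * s[-1]]
def add(s):  return s[:-2] + [s[-2] + s[-1]]
def push(n): return lambda s: s + [n]
def div(s):  return None if s[-1] == 0 else s[:-2] + [s[-2] / s[-1]]

def bind(f, g):                       # Maybe-style bind on stack transformers
    return lambda s: None if f(s) is None else g(f(s))

def interpret(quotation):
    return reduce(bind, quotation, lambda s: s)   # fold bind with id

print(interpret([push(1), push(4), dup, mul, add])([]))   # [17]
print(interpret([push(1), push(0), div])([]))             # None: aborted
```

Nothing monadic is visible at the call site; the effect lives entirely in which 'bind' the fold uses, which is exactly the pedagogical point.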
By Greg Buchholz at Mon, 2005-08-15 14:25 | login or register to post comments
Looks applicative to me
I don't know whether otuto as a whole is applicative or a hybrid, but the above code is definitely applicative. (RPN vs prefix vs infix is a separate, and much less significant, issue.)
By dhopwood at Thu, 2005-08-11 01:48 | login or register to post comments
between the programmer and the language lies squarely on the development tools at one's disposal, and the simplicity of concatenative languages allows one to develop such tools more easily than with other languages. A meta-circular interpreter is a page of code in Factor, and it is not an academic exercise; it is used to implement a single-stepper. I'd like to see a single-stepper in another language that is equally clear. There is a static stack effect inferencer tool, also used by the compiler. The compiler takes maximal advantage of the simple semantics to implement the concatenative equivalent of "lambda lifting", and various other optimizations such as partial evaluation.

I'm not sure what you mean by "How can you build reusable encapsulated abstractions?". If you look at Factor's library, you will see a very high degree of abstraction and code reuse. There is no "boilerplate" code. Re: "writing a Turing machine program"; after working with Factor's libraries for sequences and mathematics, going back to a language like Java with annoyingly hand-coded boilerplate loops seems more like a Turing machine.

Your emphasis on the word "seem" suggests that you don't believe that concatenative languages are used to develop non-trivial applications. Well, notwithstanding the clear fact that there is enough commercial demand to keep several Forth vendors in business, Factor has an HTTP server with a continuation-based web framework, a GUI toolkit-in-progress, various development tools, and more. This is a richer level of functionality than offered by most languages discussed on this web site, so ironically, one could even conclude that applicative languages are totally non-productive, and not even capable of implementing their own optimizing native compilers.
By slava at Wed, 2005-08-10 06:30 | login or register to post comments
Irrelevant
Even though this web site disproportionately discusses research languages and prototype language implementations, still I think that assertion is very likely false. And also beside the point, since the maturity of language implementations is not the topic of discussion. "Research language" is a euphemism for "useless language". I'm really only interested in practical concerns. I write code, and I like to write stable code quickly, and I find that concatenative languages let me do this just as easily as applicative ones, with the additional advantage that writing tools is easier. You can discuss semantics, static type systems, weakly final coalgebras and other academic concerns until you are blue in the face, but it will not change the fact that for me , thinking in terms of a stack is not a problem.
By slava at Wed, 2005-08-10 16:08 | login or register to post comments
For me...
You can discuss semantics, static type systems, weakly final coalgebras and other academic concerns until you are blue in the face, but it will not change the fact that for me, thinking in terms of a stack is not a problem. Perhaps. However, the next person who has to deal with your code may indeed have a problem. I could truthfully say "for me, dealing with unrestricted pointers in C is not a problem", but the broader consequences of everyone holding that opinion are not pretty.
By Anton van Straaten at Wed, 2005-08-10 16:13 | login or register to post comments
Even though this web site disproportionately discusses research languages and prototype language implementations, still I think that assertion is very likely false. And also beside the point, since the maturity of language implementations is not the topic of discussion. I, unlike Slava, am OK with research languages. They're good for research. As loath as Slava is to admit it, Factor isn't really suitable for most applications yet. Of course, it is intended for use, but
Factor's libraries aren't mature enough, especially crucial ones like the GUI library. Factor has a pretty good FFI, but that's not because it's concatenative. We really shouldn't discuss this, though, you're right.
For example, Forth-like languages are often used in constrained embedded environments, in which case some of these concerns may be balanced by other advantages of these languages.
Well, Factor isn't really intended for constrained embedded environments; it's more for general programming, which can be done better because of its abstraction capabilities, some of which rest on its being concatenative.
BTW, Common Lisp and many Schemes have something like a "trace" macro that provides similar functionality to the word watch, although the implementation of that is more than 3 or 4 lines. They usually print the incoming arguments and the return value, i.e. both the entry and exit from a traced function. Not only does it take more code to do such a simple task but you have to alter the original code to do it. In Factor, at the repl, you can just enter one short line of code to watch a word ( \ wordname watch ). One thing about Factor that's great but unrelated to its concatenativity is that you can mutate stuff like word bodies, though you don't normally do this outside of debugging. There's probably an applicative language that does that.
By Daniel Ehrenberg at Thu, 2005-08-11 02:24 | login or register to post comments
Well, Factor isn't really intended for constrained embedded environments; it's more for general programming, which can be done better because of its abstraction capabilities, some of which rest on its being concatenative. Since stacks are irrelevant to most problems, unless you can abstract their use away, the need to use one seems to me like a hit against a language's abstraction capabilities. Not only does it take more code to do such a simple task but you have to alter the original code to do it. In Factor, at the repl, you can just enter one short line of code to watch a word (\ wordname watch). The Lisp/Scheme equivalent, at the repl, is just one short line of code, (trace proc-name) . No alteration of the original code is required.
By Anton van Straaten at Thu, 2005-08-11 04:04 | login or register to post comments
Concatenate
If I wanted to write an expression like the above, I could do it quite easily in many higher-order functional languages, using a fold over a list of procedures, which could easily be made to look something like, eg in Scheme: (concatenate input compiling dataflow optimize linearize simplify generate) How is this any different from embedding an infix language in Factor? The Lisp/Scheme equivalent, at the repl, is just one short line of code, (trace proc-name). No alteration of the original code is required. But the implementation of Common Lisp's trace is more complicated than the Factor equivalent.
By slava at Thu, 2005-08-11 04:45 | login or register to post comments
(concatenate input compiling dataflow optimize linearize simplify generate) I don't think that code would work since the stack effects of many of those words are not ( a -- b ). The only way to make it work would be to have each of those functions take a stack and return a stack, but that's just like implementing a lightweight concatenative language on top of Scheme. Incidentally, I believe someone has done that with Common Lisp...
Since stacks are irrelevant to most problems, unless you can abstract its use away, the need to use one seems to me like a hit against a language's abstraction capabilities. You're right, I guess; there are only a few cases where the stack really matters. It matters like lexically scoped variables matter in normal code, which really is irrelevant to abstraction. But then there are properties of Kont and Factor (but not Forth) that lend themselves to more abstraction, like the use of combinators to replace macros in languages like Lisp and built-in syntax like C. Combinators to the programmer look like ordinary code that just happens to do something with a quotation,
and they do function when run like that by the interpreter. But then the compiler inlines them to make them as fast as a macro. I'm not explaining them very well, though.
The Lisp/Scheme equivalent, at the repl, is just one short line of code, (trace proc-name). No alteration of the original code is required. Really? Oh, OK, then never mind that point. I don't have much experience with Lisp development tools.
By Daniel Ehrenberg at Thu, 2005-08-11 16:07 | login or register to post comments
Infected
Uh-oh, you're showing signs of being infected by stack-oriented thinking,
What a great way to refute somebody's point! This is just like putting your hands over your ears, and saying "neener, neener, I'm not listening". Perhaps you've been infected by Scheme-oriented thinking?
By slava at Fri, 2005-08-12 18:15 | login or register to post comments
: walk-states ( seq states -- ? )
    init swap rot [ >r ?hash r> swap ?hash ] each-with end = ;

: {| POSTPONE: {{ POSTPONE: [[ ; parsing
: :: POSTPONE: {{ POSTPONE: [[ ; parsing
: -> ; parsing
: || POSTPONE: ]] POSTPONE: [[ ; parsing
: ;; POSTPONE: ]] POSTPONE: }} POSTPONE: ]] POSTPONE: [[ ; parsing
: |} POSTPONE: ]] POSTPONE: }} POSTPONE: ]] POSTPONE: }} \ walk-states swons ; parsing
: final-machine
    {| init :: CHAR: c -> loop
    ;; loop :: CHAR: a -> loop || CHAR: d -> loop || CHAR: r -> end
    ;; end :: CHAR: r -> end |} ;

Is there something wrong with that code? It seems to be a bit shorter than the Scheme code. Edit: as Slava correctly pointed out on the #concatenative channel, those cosmetic macros are completely stupid and the first version is really the one that should preferably be used.
By Daniel Ehrenberg at Thu, 2005-08-11 19:43 | login or register to post comments
btw
here's how to activate it: "cadar" final-machine or "cadar" c[ad]+r walk-states
in k, the algorithm is realized as:

s:@'[4 256#0;_ic("";"c";"adr";"r");:;(!0;2;2 2 3;3)]
m:1 s\_ic

for example,

m'("car";"cdr";"cadr";"cddr";"cdar";"caddr")
(1 2 2 3 1 2 2 3 1 2 2 2 3 1 2 2 2 3 1 2 2 2 3 1 2 2 2 2 3)

a 0 in the return vector indicates that m has entered the error state. on my pentium 4, 3 ghz machine timings are: 10k elements: 0 ms; 100k elements: 15 ms; 1m elements: 187 ms. state-machine is just a special case of index-scan, which in turn is a special case of scanning a function on its input. if you understand +, and you understand summation as repeated application of +, and you understand indexing, then it's a small step to knowing how to code up a state machine.
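For readers who don't speak k, here is the same idea sketched in Python (the state numbering follows the k vector above: 0 error, 1 init, 2 loop, 3 end; the input-class mapping and names are my own, for illustration):

```python
# A state machine is just a states x alphabet matrix, and recognition is
# repeated indexing -- a scan of the transition table over the input.

ALPHABET = {"c": 1, "a": 2, "d": 2, "r": 3}   # input classes; 0 = other

# M[state][input class] = next state; state 0 (error) is absorbing.
M = [
    [0, 0, 0, 0],   # error
    [0, 2, 0, 0],   # init: c -> loop
    [0, 0, 2, 3],   # loop: a, d -> loop; r -> end
    [0, 0, 0, 3],   # end:  r -> end
]

def scan(word):
    """Return the sequence of states entered while reading word."""
    states, s = [], 1
    for ch in word:
        s = M[s][ALPHABET.get(ch, 0)]
        states.append(s)
    return states

def accepts(word):
    return scan(word)[-1] == 3 if word else False

print(scan("cadr"))                      # [2, 2, 2, 3]
print(accepts("cadar"), accepts("cxr"))  # True False
```

Setting up another machine means filling in another matrix; the scan itself never changes, which is the k comment's point.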
By sa at Fri, 2005-08-12 15:52 | login or register to post comments
Repent, sinners
Understandable? Flexible?
That's great, guys; unfortunately, it seems to miss the point fairly completely. Recall, Anton's pointer to the talk said (emphasis mine): Shriram Krishnamurthi's talk The Swine Before Perl provides a nice example of how powerful abstraction capabilities can help make code understandable. Neither the C, K, nor Factor code is quite as understandable as that presented in slide 38 of Shriram's talk. What's more (and probably more important) is that neither is as flexible as what he presents either. What he's shown [1] is a simple syntax extension for generating (fairly efficient) automata from these simple, clear descriptions via macros. Given the spec for another automaton, your solutions would have you hard-coding your new state tables while Shriram's would have you quickly done with the code and onto some other problem. [1] At least that's what I believe he's shown... I'm only working with the slides, and I know just enough Scheme to get myself in trouble -- feel free to taunt me if I'm wrong. One of these days I'll have to get over the Lisp allergy I picked up when I was a wee lad and really give Scheme a good look. -30 By William D. Neumann at Fri, 2005-08-12 17:50 | login or register to post comments
Flexible?
The Factor code that Dan showed is equivalent to the Scheme code. I'd like you to explain why you think that this:

(init : (c -> more))
(more : (a -> more) (d -> more) (r -> end))
(end : (r -> end))

is any more flexible than this:

: c[ad]+r
    {{ [[ init {{ [[ CHAR: c loop ]] }} ]]
       [[ loop {{ [[ CHAR: a loop ]] [[ CHAR: d loop ]] [[ CHAR: r end ]] }} ]]
       [[ end {{ [[ CHAR: r end ]] }} ]]
    }} ;

The Factor version has a bit more punctuation, because Dan did not extend the syntax; he just used Factor's hashtable literals. Also he uses character literals instead of symbols. And for what it's worth, I consider the omission of hashtables from the Scheme standard to be an unacceptable limitation. I don't like the way typical Scheme code is written, using lists (including alists) and symbols for everything. It might look pretty but it just doesn't scale as well as other data structures for most tasks.
By slava at Fri, 2005-08-12 18:00 | login or register to post comments
More confusion?
Unless I'm confused (which happens on a regular basis), I believe you're wrong. The Factor equivalent of this (from pg. 38)...

(automaton init
  (init : (c -> more))
  (more : (a -> more) (d -> more) (r -> end))
  (end : (r -> end)))

...looks like...

: final-machine
    {| init :: CHAR: c -> loop
    ;; loop :: CHAR: a -> loop || CHAR: d -> loop || CHAR: r -> end
    ;; end :: CHAR: r -> end |} ;

...not a lot of difference to my eyes.
By Greg Buchholz at Fri, 2005-08-12 18:00 | login or register to post comments
understandable. flexible.
i don't think i missed either point. for a machine with n states, use an n x 256 matrix m. for each (state : input -> nextstate), assign m[state;input]:nextstate. to run on input i: r:1 m\i. setting up a particular machine can be done line-by-line, as in the scheme example, or it can be done all at once, as my example shows. no macros, no special syntax. if state-machines are matrices, and recognition is recursive two-dimensional indexing, then implement it as such, as directly as possible.
By sa at Fri, 2005-08-12 18:55 | login or register to post comments
Simplicity
This particular task does not even call for a state machine at all:

: swine ( sequence -- ? )
    "c" ?head >r "r" ?tail r> and
    [ "ad" contained? ] [ drop f ] ifte ;
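The same observation in Python terms (my own sketch, not a translation of the Factor word's internals): check the head, the tail, and the characters in between, and the automaton disappears.

```python
import re

# No state machine needed: the language c[ad]*r is just "starts with c,
# ends with r, and everything in between is drawn from {a, d}".

def swine(s):
    return (len(s) >= 2 and s[0] == "c" and s[-1] == "r"
            and all(ch in "ad" for ch in s[1:-1]))

# Sanity check against the regex the Factor word's name suggests:
assert all(swine(w) == bool(re.fullmatch(r"c[ad]*r", w))
           for w in ["car", "cdr", "cadar", "cr", "cxr", "r", ""])
print(swine("cadar"), swine("cxr"))  # True False
```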
Toplevel
I was just going to say that I quite enjoy O'Caml's toplevel(s). :-)
By Paul Snively at Thu, 2005-08-11 03:16 | login or register to post comments
Slipping?
Paul, I'm disappointed. You forgot to mention Tuareg mode . ;oP Of course, GHC and SML/NJ also have nice toplevels. I wouldn't be surprised to find IDEs for them, although I've never looked.
By Anton van Straaten at Thu, 2005-08-11 03:49 | login or register to post comments
Foreign?
I can't imagine writing any nontrivial program in a concatenative language; it seems like a tour de force comparable to writing a Turing machine program, or directly in the lambda calculus using de Bruijn indices.
Borrowing an example from another thread ... class.method().attribute.method( obj.attribute, obj.method() ).getSomething(); ...Is it just me, or does that bear a striking resemblance to concatenative code?
By Greg Buchholz at Wed, 2005-08-10 17:34 | login or register to post comments
names
...Is it just me, or does that bear a striking resemblance to concatenative code? Expressions are easy enough. Defining functions without having argument names or let bindings to name your terms gets you into drop swap dup rot shenanigans... That kind of stuff is for compilers, not programmers. Up with that, I will not put.
By James McCartney at Wed, 2005-08-10 18:54 | login or register to post comments
Re: names
So why do you troll in this thread?
By slava at Wed, 2005-08-10 19:05 | login or register to post comments
Stack scramblers
I like William Tanksley's stack scramblers idea for concatenative languages. They look like this:
I can never remember which of rollup or rolldown does which of abc-cba or abc-bca. Stack scramblers make this perfectly clear. Factor uses stack scramblers nowadays also.
Umm ... no
Factor uses stack scramblers nowadays also. It does? I'm pretty sure it still uses good old dup, drop, swap, etcetera. I could never remember which ones those stack shufflers were -- until I started actually programming. Then, it was crystal clear.
By Daniel Ehrenberg at Thu, 2005-08-11 16:21 | login or register to post comments
Re: Foreign?
Compare some typical OOP code in four languages: Common Lisp: (baz (bar (make-instance 'foo))) Java: new Foo().bar.baz(); Smalltalk: Foo new bar baz. Factor: <foo> bar baz Factor's concatenative code looks very similar to how Smalltalk unary messages are composed. Note that CL code has to be read right to left, which ironically is a common complaint against *postfix* languages.
By slava at Wed, 2005-08-10 19:02 | login or register to post comments
"kont"
means "ass" in Dutch.
By Wouter at Wed, 2005-08-10 21:05 | login or register to post comments
As in "arse" or "donkey"?
As in "arse" or "donkey"?
By Marcin Tustin at Thu, 2005-08-11 14:15 | login or register to post comments
Arse
The Dutch word "kont" is etymologically related to the English word "cunt". The original meaning of both words is "hole". ( Source: The Alternative Dutch Dictionary )
By Sjoerd Visscher at Thu, 2005-08-11 15:30 | login or register to post comments
Well ...
we've had Brainfuck for some number of years now. Not to mention Linda, named after a porn star. I'm sure there are a few other (as yet undiscovered) dirty jokes and double entendres out there....
By Scott Johnson at Thu, 2005-08-11 21:45 | login or register to post comments
-30 By William D. Neumann at Fri, 2005-08-12 14:19 | login or register to post comments
Stack-based is point-free?
If you represent a stack as nested pairs, you can treat Forth words as unary functions of the form stack -> new_stack. Integer constants would have the type forall a. a -> (a, Int), and the arithmetic functions like +, -, etc., would be typed forall a. ((a, Int), Int) -> (a, Int). With a type system, you can appeal to polymorphism to guarantee that functions won't interfere with elements beyond a certain depth in the stack. It's pretty easy to model this in a functional language (note: this is not a claim of superiority/inferiority). Here's a GHCi session:

Prelude> let k = flip (,)
Prelude> let lift2 f ((s,a),b) = (s, f a b)
Prelude> let plus = lift2 (+)
Prelude> :t plus
plus :: ((a, Integer), Integer) -> (a, Integer)
Prelude> :t plus . plus
plus . plus :: (((a, Integer), Integer), Integer) -> (a, Integer)
Prelude> :t plus . k 4
plus . k 4 :: (a, Integer) -> (a, Integer)
Prelude> :t plus . k 4 . k 2
plus . k 4 . k 2 :: a -> (a, Integer)
Prelude> snd . plus . k 4 . k 2 $ ()
6

What's interesting is that arrows in Haskell are defined in a point-free style, along with a special syntax involving local names which is translated into the point-free style. Thus,

proc x -> do
  y <- f -< x
  z <- g -< x
  returnA -< y + z

becomes

arr (\x -> (x, x)) >>> first f >>> arr (\(y, x) -> (x, y)) >>> first g >>> arr (\(z, y) -> y + z)

which consists primarily of composition and "stack" manipulation. (Arrows pass multiple values around with nested tuples, but they don't prefer one side or the other, so can be more tree-like than stack-like.)
By Dave Menendez at Thu, 2005-08-11 23:55 | login or register to post comments
Joy in Haskell
Things get a little interesting when you get to the recursion combinators, but with a dash of type-system wizardry, you can write most any Joy combinator in Haskell.
By Greg Buchholz at Fri, 2005-08-12 01:24 | login or register to post comments
The lambda calculus is the "internal language" of a cartesian closed category. This means that every type in the lambda calculus corresponds to an object of a CCC, and that every term corresponds to a morphism. In turn, this means that you can convert any program in the lambda calculus into an equivalent program consisting entirely of a composition of combinators (and you can do it in O(1) in either direction). If you like, you can think of the types of the set of free variables of a term as the type of the "input stack", and the return type as the type of the "output stack", which means that there is no fundamental expressivity gain to be had in shifting between a combinatory language and the lambda calculus. I think that being able to name variables explicitly is a genuine advantage, but obviously, mileage varies.
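The lambda-to-combinators direction mentioned above can be sketched concretely with classic S/K/I bracket abstraction (this is the simple textbook translation, not the categorical construction; the term representation and names below are my own):

```python
# Terms are strings (variables or combinator names) or 2-tuples (applications).
# abstract(x, t) produces a term [x]t with no free x such that ([x]t) u
# behaves like t with u substituted for x.

S = lambda f: lambda g: lambda x: f(x)(g(x))
K = lambda c: lambda _x: c
I = lambda x: x

def abstract(x, term):
    """S/K/I bracket abstraction of variable x from term."""
    if term == x:
        return "I"
    if isinstance(term, tuple):
        return (("S", abstract(x, term[0])), abstract(x, term[1]))
    return ("K", term)            # term does not mention x

def link(term, prims):
    """Turn a closed combinator term into a Python function."""
    if isinstance(term, tuple):
        return link(term[0], prims)(link(term[1], prims))
    return prims[term]

prims = {"S": S, "K": K, "I": I, "plus": lambda a: lambda b: a + b}

# \x. \y. plus x y, written with explicit applications:
body = (("plus", "x"), "y")
closed = abstract("x", abstract("y", body))   # no named variables remain
f = link(closed, prims)
print(f(3)(4))  # 7
```

Note this naive translation blows up the term size, which is one practical face of neelk's point: the conversion preserves expressivity, but the named form is usually the one humans want to read.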
By neelk at Fri, 2005-08-12 19:32 | login or register to post comments