Biep scripsit:

>>> Barely possible is "splicing in" multiple results
>>
> The idea of using splicing is interesting, though.
>
> So it seems we come to the same conclusion as to where that boundary lies.

It's essentially a syntactic variant of something I proposed for WG1, to allow multiple producers in call-with-values. This was voted down, but may still appear in WG2. It encapsulates the idea of evaluating several expressions for all their values, concatenating the values, and applying the consumer to them, but if treated as primitive may be implemented very efficiently.
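To make the semantics concrete, here is a sketch in plain R7RS Scheme; the name call-with-values* is my own invention for this message, not anything voted on, and a primitive implementation would of course avoid building the intermediate lists:

  ;; Apply CONSUMER to the concatenation of all the values
  ;; delivered by the PRODUCER thunks.
  (define (call-with-values* consumer . producers)
    (apply consumer
           (apply append
                  (map (lambda (producer)
                         (call-with-values producer list))
                       producers))))

  ;; Example:
  ;;   (call-with-values* list
  ;;     (lambda () (values 1 2))
  ;;     (lambda () (values 3)))
  ;;   => (1 2 3)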
> My point was more fundamental than applied: I think they possibly ought to be decoupled. Symbols are a combination of two orthogonal ideas, uniqueness and name - and I think it is always better to have one construct for one idea. How this decoupling would pay off was the remainder.

Well, if you want anonymous unique objects, nothing is simpler:

  (define-record-type unique
    (make-unique)
    unique?)

And immutable names without uniqueness are also easy:

  (define-record-type name
    (make-name string)
    name?
    (string raw-name-string))

  (define (name->string name)
    (string-copy (raw-name-string name)))

Symbols are really a run-time type in Scheme, unlike Common Lisp, where they do double duty at compile time and at run time. So if you don't like something about them (if you want gensyms or property lists, for instance), you can easily avoid them in favor of your own types.

>>> This would lead to mutable first-class environments.
>>
>> WG1 went down this rathole in its earliest days.
>
> Did it keep the identifiers hygienic too, as I want?

I don't know that we got that far.

> I do not propose changing the inheritance structure of a closure after it has been made - that would break lexical transparency. But besides 'lambda', other closure-generating constructs might exist that use other inheritance schemes, and such constructs could be codable in Scheme without adding indirection or even interpreter levels.

If you can say what you want beyond what dynamic-wind and parameters (which are two faces of the same thing) provide, then I'd very much like to hear about it.

> But one that is paramount in the notion of Schemeyness - to me, at least. (I admit being a REPL fan.) ML requires a letrec for each indirectly recursive function, which makes the language feel very different. That goes well with the static typing, but wouldn't go well with Scheme.

R7RS allows variables to be redefined at the REPL with retroactive effect on lambdas referring to them, but changes to syntax definitions are not retroactive. For the same reasons, syntax may not be used before it is defined. I have sent you separately my allegory of Professor Simpleton and Dr. Hardcase. For anyone else still reading, it's at http://lists.r6rs.org/pipermail/r6rs-discuss/2009-September/005487.html .

> I think code that can be read roughly linearly is easier to understand. Multi-pass means you can't understand your code on your first reading - if you could, so could the compiler.

Agreed.

> I definitely don't want another CL macro system, but I also think syntax-rules was a mistake. Not bad in itself, but definitely not Scheme, so suddenly we had several constructs with restrictions on them - contrary to the Clinger manifesto.

Well, the primitives of the syntax-case system (`syntax-case` itself is not primitive) are quite Scheme in flavor, once you have grasped the separation between S-expressions and syntax expressions.

> But more importantly, I think the language shouldn't prescribe how I write my macros, even if what it prescribes is the best way, as long as my way doesn't break important properties without me realising.

In that case you definitely want the syntax-case system. But note that any system that allows the execution of Scheme code at macro expansion time requires the extra complexities of phasing, explicit or implicit.

> I think there is something t&j about it, at least. Lisp didn't start out that way, of course - it had dynamic scoping, for one thing, and PROG with labels...

I do think that Scheme's core syntax is t&j. I was talking about its primitive procedures.

> My hope was, and is again (it was lost during the R6RS debacle), that Scheme can remain a language striving in that direction - we already have enough languages striving in other directions. I wish the Clinger manifesto were procedurally implemented, in the sense that having more concepts, or having restrictions on concepts, would require a strong justification in a companion document to the standard - even if these things were taken over from previous standards.

The core things we have added were modules (mandated by the Steering Committee), user-defined types (the requirement for which is obvious), bytevectors (the result of splitting strings into octet and character containers), binary I/O (for the same reason), and Unicode (though only a subset of characters need to be supported, those that are provided must be correctly supported). Everything else is about convenience, completeness, or correctness.

>> What is more, it's not clear that standardizing a facade and not its underlying layers is the Right Thing.
>
> I am not sure what you are referring to here. Which façade and which underlying layers do you distinguish in my text?

I was recapitulating, perhaps too briefly, an argument frequently made: that a facility cannot be provided by WG2 unless it can be implemented in the WG1 language. Thus, to provide TCP support in WG2, it would be necessary to provide an FFI in WG1. I consider this to be a fallacy.

> Nowadays it is perfectly feasible to build a cell layer underneath the language, lambdas compiling into cell-layer code that happens to assign mutable cells to all variables.

Indeed, some compilers do just that under the name of assignment conversion. But other Scheme systems do *not*, and exactly because no underlying level is exposed, they are free to do so.
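For concreteness, here is roughly what assignment conversion does; this is only a sketch, not any particular compiler's output, and a one-slot vector stands in here for whatever cell type the compiler uses internally:

  ;; Source: x is assigned, so it cannot be treated as immutable.
  (lambda (x)
    (set! x (+ x 1))
    x)

  ;; After conversion: the binding x is never assigned; all
  ;; mutation has moved into the cell that the binding holds.
  (lambda (x)
    (let ((cell (vector x)))
      (vector-set! cell 0 (+ (vector-ref cell 0) 1))
      (vector-ref cell 0)))

Since nothing in the language exposes the cell, the two forms are indistinguishable to portable code.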
> A fraction more compilation time, the same resulting code, but the option to code directly in the cell layer and get more efficient code with less possibility of error.

Or less efficient code, depending on the implementation. Precisely because lexical variables are not first-class, it is possible to reason about them in ways that are not possible for first-class objects, and to potentially optimize their implementation.

> The upper-case ones are cleaner, requiring a smaller spec (e.g. set! becomes a procedure, set-car! and its ilk are no longer needed, (set! (car ..)) sufficing), so this is clearly a language clean-up that doesn't break existing programs but removes a restriction. Fewer concepts + fewer restrictions --> Clinger-proof. Clinger-proof + leading to faster code --> a Good Thing.

Not unambiguously, and this has been my underlying theme throughout much of this posting.

> But compilers aren't. The presence of locations doesn't preclude any of the existing approaches, but it allows a low-level approach by the compiler. After all, it is the compiler that will write most parallel code, not the programmer. Wherever (in-)variables are immutable, the compiler can parallelise to its CPU's content, but when locations pop up, that is a warning sign. Whenever it can prove that no clash occurs, it can still write parallel code, and where it cannot prove it, it can impose a locking mechanism (with priority to avoid deadlock: in case of doubt, evaluate left-to-right or whatever). Whenever the programmer knows better, she can use higher-level constructs and relieve the compiler of its responsibilities.

While this argument is sound, it seems to me that it argues against your case. Detecting which variables are modified is trivial by comparison with the analysis required to see which uses of objects are safe and which are not.

>>> the value of load would then be the value of the last expression in the file loaded.
>>
>> it can't now be standardized.
>
> May I ask why?

Because (and this is very important) standardization of existing constructs is mostly a matter of finding common ground. The `load` procedure has always been very loosely defined by the Scheme reports; it is practically unchanged since R2RS. As such, implementations have gone their own way with it. IMHO it is mostly obsolete in R7RS, given `include` and `import`.

> Scheme currently suffers from macroitis: once something becomes a macro, other things may get infected. Imagine Scheme only provided 2-argument logical macros, and I want to write a function that takes one of them and returns an arbitrary-arity version - a fold.

This is perfectly possible in syntax-case.

> I am strongly pushed towards writing not a function, but a macro for that.

You cannot write a function *or* a macro that maps a syntax-rules macro to another macro, except by implementation-dependent means. You can write a function-for-syntax that maps a macro to another macro.

> Any chance of them allowing the user to build strings (without her building an interpreter for them, obviously)?

I'm not sure I understand this.

> Well, I suppose by now you are very sorry you answered me...

By no means.

--
John Cowan    cowan@ccil.org    http://www.ccil.org/~cowan
Humpty Dump Dublin squeaks through his norse
Humpty Dump Dublin hath a horrible vorse
But for all his kinks English / And his irismanx brogues
Humpty Dump Dublin's grandada of all rogues.
        --Cousin James

_______________________________________________
Scheme-reports mailing list
Scheme-reports@scheme-reports.org
http://lists.scheme-reports.org/cgi-bin/mailman/listinfo/scheme-reports