The value of purely functional programming languages, as opposed to impure functional languages like Lisps, is that you get referential transparency: when you define `a = b`, you know that you can always replace any instance of `a` with `b` and get the same answer. This is a very natural property in mathematics (algebraic rewriting is basically just this property writ large), so it helps to draw nice parallels between the familiar notion of functions from mathematics and the "new" and "confusing" notion of functions in functional programming and other declarative languages.
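To make the substitution idea concrete, here is a minimal sketch (written in Scheme syntax purely as an illustration, and assuming the definitions involved are pure):

  (define b 21)
  (define a (* b 2))   ; a is defined as (* b 2)

  ;; With pure definitions, replacing `a` by its definition never
  ;; changes the answer: both of these evaluate to 63.
  (+ a b)
  (+ (* b 2) b)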
As other posters have said, strong typing is also a nice property for lots of reasons; most notably, it gives a platform to talk about ad-hoc and parametric polymorphism.
(I lecture on Functional Programming at the University of Warwick, where we use Haskell.)
The first problem with this argument is that referential transparency is a property of syntactic positions, not of languages.
The second is that languages like Lisp, SML, C, Pascal and BASIC all have referentially transparent and referentially opaque positions in exactly the same way that languages like Haskell do.
This means that all these languages enjoy referential transparency in the same way, because when you unpack the notion of equivalence, referential transparency itself is within a whisker of being a tautology: if a is equivalent to b, then you can substitute a for b or b for a. The relevant sense for "is equivalent to" can really only be contextual equivalence, which is all about meaning-preserving substitutability.
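One standard textbook example of an opaque position (sketched here in Scheme, not an example the parent gives) is quotation: ordinary argument positions are transparent, but a quoted position cares about the expression itself rather than what it denotes.

  (define x 5)
  (define y 5)        ; x and y denote the same value

  (+ x 1)             ; => 6
  (+ y 1)             ; => 6  -- substituting y for x preserves meaning here

  (quote x)           ; => x
  (quote y)           ; => y  -- the same substitution changes the result,
                      ;        so the quoted position is referentially opaque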
That said, not having to reason about effects when establishing program equivalences sure makes things simpler in a pedagogical setting. But that's not to do with referential transparency per se.
For those of us who are unfamiliar with Lisps, can you expand on how they break referential transparency (and how Standard ML contrasts in that regard)?
More importantly, there are functions (using Scheme as an example) like set! and set-cdr! that mutate existing values and totally break referential transparency (see the sketch below).
This isn't just user-facing; for example, let* kind of depends on creating bindings up front so they work across clauses, and then mutating them afterwards.
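A minimal sketch of that first point, assuming a Scheme with mutable pairs (e.g. Guile; Racket spells these operations differently):

  (define p (list 1 2 3))
  (define (peek) (cadr p))   ; textually the same expression every time

  (peek)                     ; => 2
  (set-cdr! p (list 99))     ; mutate the pair that p names
  (peek)                     ; => 99 -- same call, different answer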
let* permits expressions on the right to refer to arbitrary other symbols bound by the let*. In particular, it allows for the construction of recursive lambdas that may not be linearizable.
> let* permits expressions on the right to refer to arbitrary other symbols bound by the let*
In what language? I just checked Elisp, SBCL, and Guile, and they all error out if you refer to a variable not previously defined by a left-to-right traversal of the varlist:
(let* ((a (+ b 1)) (b 1)) a)
Edit: This doesn't work either:
(let* ((a (lambda () (+ b 1))) (b 1)) (funcall a)) ; (funcall a) -> (a) for Schemes
Ah, that does look to be the case. I didn't know about that one.
For reference, here is the Emacs Lisp docstring for letrec, which does bind all the symbols up front:

Signature
(letrec BINDERS &rest BODY)
Documentation
Bind variables according to BINDERS then eval BODY.
The value of the last form in BODY is returned.
Each element of BINDERS is a list (SYMBOL VALUEFORM) that binds
SYMBOL to the value of VALUEFORM.
The main difference between this macro and let/let* is that
all symbols are bound before any of the VALUEFORMs are evalled.
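So the earlier test that fails with let* does work with letrec, since every binding exists before any VALUEFORM is evaluated. A quick sketch in Scheme (the Elisp letrec above behaves the same way, with funcall instead of a direct call):

  (letrec ((a (lambda () (+ b 1)))
           (b 1))
    (a))                     ; => 2, because b is bound by the time a is called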
Lisp allows you to mutate, but you can certainly write non-mutating code in Lisp. Why do you think you need to use mutation with let*? let* is just a sequential let.
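For what it's worth, a sketch of that last point: a let* with two bindings can be read as nothing more than nested lets, so no mutation is needed to implement it.

  (let* ((a 1)
         (b (+ a 1)))
    (+ a b))                 ; => 3

  ;; is equivalent to

  (let ((a 1))
    (let ((b (+ a 1)))
      (+ a b)))              ; => 3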