Smalltalk simplicity and consistency vs. other languages (2022) [video] (youtube.com)
121 points by harryvederci on Jan 14, 2024 | hide | past | favorite | 117 comments


Interesting talk. Smalltalk (like Lisp) indeed has a very simple (i.e. minimal) syntax. But that doesn't necessarily mean that it is also simple to write and read code, or to implement an efficient compiler. E.g. blocks - as great as they are - make inlining very difficult; it eventually took 30 years to implement a VM with decent performance (see http://software.rochus-keller.ch/are-we-fast-yet_crystal_lua... for the results of https://github.com/rochus-keller/Som/ and https://github.com/rochus-keller/Smalltalk/ compared to Cog/OpenSmalltalk). Another language explicitly designed for simplicity is Oberon, which is statically typed and easier to implement with decent performance.

In my experience, at the end of the day, it's more the familiarity with a language that makes it seem "simple". And there is a trade-off between "simplicity" of the language and "simplicity" of the solution; if the language is too minimal, the solution quickly becomes confusing instead, and vice versa.


Self wasn't sufficiently fast? Craig Chambers's compiler [0] had to be careful around recursive inlining* but it's not that bad, and the compiler definitely came around in less than 30 years of Self or Smalltalk existing (even if you start counting from Smalltalk-72).

A sibling comment mentions that Smalltalk-80 played tricks with inlining some block methods such as ifTrue: and whileTrue: in the bytecode, but Self pulled no such tricks.

[0] http://www.wolczko.com/tmp/ChambersThesis.pdf#page=72

*Although maybe too careful - the important parts are that you don't inline forever, so some degree of recursive inlining is okay (and would help in the factorial example), and that you have a good reason to inline.


> Self wasn't sufficiently fast?

Self is apparently sufficiently different from Smalltalk-80 that even today JS engines (which are descendants of the Self engine) are still significantly faster than Cog/OpenSmalltalk. Remember that Python, too, is known to be pretty resistant to performance improvements. So some languages/VMs are apparently more amenable to optimization than others. In my referenced report you can see that there is still nearly a factor of two between Pharo (using OpenSmalltalk) and Node.js (using V8). But Pharo is at least faster than LuaJIT, which in turn is about a factor of 220 faster than a Smalltalk interpreter that does the caching and inlining described in the Blue Book. So Cog/OpenSmalltalk went a long way to finally become the fastest Smalltalk engine around. A comparison with Smalltalk-72 makes little sense because it was a completely different language and engine.

> Smalltalk-80 played tricks with inlining some block methods such as ifTrue: and whileTrue:

That was only possible because a convention at the lexical level made it so. Otherwise it's very difficult, because a block is just an object and executing it is just a method call on this object via dynamic dispatch; it's not trivial to find out what to inline. If you're interested in the machinery behind the scenes, here are a couple of tools which facilitate an analysis: https://github.com/rochus-keller/Smalltalk.
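To make the difficulty concrete, here is a minimal Python sketch (all names hypothetical, not a real Smalltalk VM) of why a block send is hard to inline: the block is an ordinary object, and evaluating it is an ordinary dynamically dispatched call that any subclass may override.

```python
# Sketch (hypothetical names): a Smalltalk-style block is just an object,
# and "running" it is a dynamically dispatched method call.

class Block:
    def __init__(self, fn):
        self.fn = fn

    def value(self):          # analogous to Smalltalk's #value
        return self.fn()

class WeirdBlock(Block):
    def value(self):          # any subclass may override #value
        return 42

def run(block):
    # From a compiler's point of view, 'block' could be any object that
    # responds to 'value'; without type feedback it cannot simply paste
    # the block's body in here.
    return block.value()

print(run(Block(lambda: 1)))       # 1
print(run(WeirdBlock(lambda: 1)))  # 42, despite the same call site
```

The single call site `block.value()` can reach different code depending on the receiver's class, which is exactly what makes naive inlining unsound without the type-feedback machinery Self pioneered.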


> So some languages/VMs apparently are more amenable than others.

I thought Self was supposed to be harder to optimise due to using prototypes, accessor methods rather than distinguishing instance variable access, and generally fewer other tricks. Frankly I don't know what the problem is here, but if it was a language issue, then Self should be slower than Smalltalk - you are suggesting that it'd be the other way around, and I don't see why that'd be the case.

Mario Wolczko implemented Smalltalk in Self, yielding a faster Smalltalk than the commercial offerings <http://www.merlintec.com/download/mario.pdf>. The are-we-fast-yet repo has benchmarks for SOMns (SOM modified to run Newspeak, using Truffle for compilation) which outperforms Node <https://github.com/smarr/are-we-fast-yet/blob/master/docs/pe...>.

> Otherwise it's very difficult because a block is just an object and executing it is just a method call on this object via dynamic dispatch; it's not trivial to find out what to inline

We have an object which we know everything about, because we construct it in the method. We know it's a block, and we know what #value will do, which seems like a pretty strong signal to inline. Even without knowing what blocks are, inlining #ifTrue:, #whileTrue: and such is a good move as those methods are very small.


> you are suggesting that it'd be the other way around, and I don't see why that'd be the case.

That's just what I observed, and I concluded from those observations that Smalltalk - like Python - must, for some reason, be more difficult to JIT-compile than Self or JS. I asked Stefan Marr for his most recent measurement results, and he was kind enough to post some reports: https://github.com/smarr/are-we-fast-yet/issues/85. It seems the Graal/Truffle-based implementation of SOM (a Smalltalk dialect) is now as fast as Node.js (V8), at least the AST version on the Graal enterprise edition; the Graal community editions of the SOM VMs are still slower, though (roughly the same performance as Cog/OpenSmalltalk, by estimate).


SOM stands for "Simple Object Machine". It is not a high performance implementation, nor is it intended as one.

Subhead: "A minimal Smalltalk for teaching of and research on Virtual Machines."

Key characteristics:

"clarity of implementation over absolute performance"

Straight from the SOM home page:

http://som-st.github.io


Thanks. I know SOM and Smalltalk pretty well and have implemented several VMs myself (see https://github.com/rochus-keller/Som/ and https://github.com/rochus-keller/Smalltalk/). SOM was originally designed and implemented by Bak and Lund at Aarhus university. It's very well suited for performance experiments and shares with Smalltalk the relevant features. There is an implementation of the Are-we-fast-yet suite for both SOM and Smalltalk.


I find that hard to believe given the various claims you’ve made.

If it’s true, it makes those claims even more staggering.

“Both SOM and Smalltalk” is a category error.


"Be kind. Don't be snarky. Converse curiously; don't cross-examine."


> "yielding a faster Smalltalk than the commercial offerings"

fwiw "However, there are a few limitations which would preclude its use as an industrial-strength implementation" p13


> JS engines (which are descendants of the Self engine) are still significanly faster than Cog/OpenSmalltalk.

This is almost entirely a function of effort expended.

Several trillion dollar companies have large teams working on JavaScript engines, usually multiple engines each. The OpenSmalltalk VM is a volunteer effort of a very small community, with the main JIT produced primarily by a single person.

"The F4 Phantom...America's proof to the world that with a big enough engine, even a brick can fly"

https://www.youtube.com/watch?v=XqUgUgiToNs

Fabio Niephaus hooked up Squeak to the GraalVM.

https://github.com/hpi-swa/trufflesqueak

Takes a while for the optimizer to reach a steady-state, but does it ever fly when it does. The Bouncing Atoms demo reaches around 10x the frame rate of the OpenSmalltalk VM.

https://www.researchgate.net/publication/336086216_GraalSque...


Thanks. See also https://news.ycombinator.com/item?id=39007778.

The first compiler which introduced the concepts later used in Java and JS was also an effort by only one or two people in an academic setting. Even V8 in its initial version (which already delivered a tremendous speed-up for JS) was essentially the work of two people. Cog/OpenSmalltalk was supported by many PhD projects, so it is by no means the work of a single person. Apparently there was a need for a fast Smalltalk VM, as there is a need for a fast, compatible Python VM. It seems it took 40 years of research to make Smalltalk fast; Python got a bit faster, but is still much slower than JS, even with Graal.


Really, no.

I know a lot of these people personally. Heck, I theoretically share(d) an office with one. And of course the way that Self's technology made it to Java's HotSpot, which is still tons faster than any JS, was via Anamorphic and Strongtalk.

https://en.wikipedia.org/wiki/HotSpot_(virtual_machine)

https://strongtalk.org

https://en.wikipedia.org/wiki/Strongtalk

https://gist.github.com/landonf/9053062

But of course Smalltalk was jitted long before that, with Peter Deutsch's VM.

https://dl.acm.org/doi/10.1145/800017.800542

Higher performance VM/compiler technology was not enabled by Self's object model, it was necessitated by it. The system was just too slow otherwise.

Squeak was never intended as a high performance Smalltalk implementation, it had different goals. When Squeak was created, an automatically generated bytecode VM was already tons faster than the fastest high-performance Smalltalk computers the creators had worked with at PARC, in particular the Dorado. This was considered more than fast enough for what they were trying to do, in particular because the bulk multi-media processing would be done by plugins (also often generated). Making a JIT for better performance was a non-goal.

So using Squeak (and later OpenSmalltalk) as a comparison is just completely invalid.

The Cog VM was started around 15 years after that, by a Smalltalk VM engineer who had already worked on jitted VMs professionally.

http://www.mirandabanda.org/cogblog/microbio/

And so using the Cog VM as some sort of benchmark as the "first fast" Smalltalk implementation is completely unrelated to reality.


I don't understand in what respect this should contradict my statements. I was responding to the argument "Several trillion dollar companies have large teams working on JavaScript engines, usually multiple engines each. The OpenSmalltalk VM is a volunteer effort of a very small community, with the main JIT produced primarily by a single person." and demonstrated that major performance breakthroughs were realized by small teams - in the case of JS, as we know, by Lars Bak (who was also at Longview Tech) and Kasper Lund. Of course there were other attempts to make Smalltalk faster, but I concentrated on the ones that really made a difference. I think the topic has been sufficiently discussed.


Yes, it is obvious that you don’t understand. Not sure how that is possible, but here we are.

Your claim that Smalltalk is somehow more difficult to optimize than JS or Self is patently false.

The fact that you get different results from large teams of super smart people with virtually unlimited resources than from small bands of volunteers is in no way contradicted by individuals making breakthroughs. Those individuals often work at those multi-trillion companies.

Using either SOM or Squeak/Pharo as comparisons for VM performance is ridiculous.

‘Nuff said.


I don't feed trolls, I'm out.


> Remember that also Python is known to be pretty resistant to performance improvements.

People certainly say that a lot. This thread for example:

https://news.ycombinator.com/item?id=38923741

> The take that CPython should be a reference implementation and thus slow always aggravated me

What is the actual reason for that? How do other languages avoid falling into that trap?

That thread gave me the impression the source of the problem is direct linkage to C extensions instead of native high level foreign interfaces. The resulting code is too tightly coupled to CPython internals which prevents optimization. Is this right?


> What is the actual reason for that?

Don't know, can only speculate. From what I observed, I would conclude that no one has had the right idea so far of how to do this. With Self it took a few years, with Smalltalk - as has now been shown - forty years, and with the surprise that AST interpreters with Graal are faster than bytecode interpreters. Someone will probably have a brilliant idea for Python in a few years' time.


> But that doesn't necessarily mean that it is also simple to write and read code

Right. Having several friends in the Smalltalk/Squeak scene, I tried "obvious" things that required knowing the patterns of, for example, the Morphic framework [1]. I imagine that including a good development wizard/assistant would significantly improve the learning curve.

[1] https://wiki.squeak.org/squeak/morphic


Funny, I wrote that two days ago, here it says three hours.


Second chance pool. When a post is in the pool, it may get its timestamp updated to the current time and be tracked as if it had been submitted at that time. All existing comments get that same timestamp for a while. The fake timestamp games the ranking so the post goes to the front page for its second chance at discussion and votes. Eventually the real timestamp is restored.

https://news.ycombinator.com/pool


Seeing an article about Smalltalk, with Hernan presenting, this high up on HN is nice.

I worked with Hernan a while ago. We worked on a financial system with very cool things: an OO database, a workflow system that could work locally or remotely transparently, a good model for units and dates, and tons of custom tools in the IDE (something that is only possible with Smalltalk IDEs) that allowed us to automate unit test generation, versioning, and database migrations. The last time I saw that level of productivity in any IDE was then.

However, Smalltalk is not perfect, and when I see these presentations, I cannot stop thinking that a more balanced discussion is needed.

The presentation shows how Smalltalk blocks are uniform with the if/then/while syntax. But that's only part of the story. To support blocks that way, the compiler optimizes those cases and inlines the block unless you do something unusual. So, while assigning the block to a variable looks the same, it's not executed the same way. That's the beauty of hiding the implementation details. But sometimes those details bite you. Most Smalltalk programmers avoid creating custom "ifThen:else:" messages because, in most VMs, that will break.

In other words, like any design decision, the simplicity of the "object message" syntax has trade-offs, too. The performance trade-off is fixed by having exceptional cases hidden from most programmers. The other trade-off is the need to always dispatch to a receiver. Compared to Lisp or other functional languages, the need for a receiver dramatically impacts how you design and structure your program. Many things that you can resolve with a polymorphic function need to be spread across classes in Smalltalk. And that introduces the problem of sharing the implementation: class hierarchies, traits, or composition. Resolving those problems with a simple function is refreshing and falls under the statement at the beginning of the talk: the programming language affects how you think about the problem. When I look at the code I wrote back then in Smalltalk or Java, I cannot stop thinking about how the restriction of using classes and methods adds unnecessary complexity.
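The inlining trade-off described above has a visible analogue in another dynamic language: CPython compiles a literal `if` to plain jump bytecodes, but a branch wrapped in a closure has to go through a generic call that the compiler cannot see into. A small sketch using the standard `dis` module (function names are made up for illustration):

```python
import dis

def inlined(x):
    # Control flow known at compile time: compiles to jump bytecodes.
    if x > 3:
        return "big"
    return "small"

def via_block(x, then_block, else_block):
    # The "branches" are first-class closures; the compiler only emits
    # generic calls and cannot inline their bodies.
    return then_block() if x > 3 else else_block()

ops_inlined = {i.opname for i in dis.Bytecode(inlined)}
ops_block = {i.opname for i in dis.Bytecode(via_block)}

print(any("JUMP" in op for op in ops_inlined))  # True: direct branches
print(any("CALL" in op for op in ops_block))    # True: dispatched calls
```

This is the same pressure that pushed Smalltalk compilers to special-case ifTrue: and whileTrue: rather than pay a full block activation per branch.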


I'm a little confused by the receiver thing. Is the idea that each object has a dictionary mapping function names to code and sending a message to an object means execute whatever was found in that dictionary?

So the example of { a < 3 } while involves evaluating a block, getting a boolean, then calling the while method on that boolean?

That would seem to have a bad time with things like compare. Neither argument is special, you end up implementing a lot of compare-with methods. I.e. the expression problem. Is that understanding broadly correct?
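For concreteness, the dictionary-lookup model in the question can be sketched in a few lines of Python (hypothetical names, not a real Smalltalk VM): each class maps selectors to code, and a send walks the superclass chain until a method is found.

```python
# Sketch of Smalltalk-style message sending: each class is a dictionary
# from selector to code, and a send walks the superclass chain.

class Klass:
    def __init__(self, methods, superklass=None):
        self.methods = methods
        self.superklass = superklass

class Obj:
    def __init__(self, klass, state):
        self.klass = klass
        self.state = state

def send(receiver, selector, *args):
    k = receiver.klass
    while k is not None:                      # method lookup
        if selector in k.methods:
            return k.methods[selector](receiver, *args)
        k = k.superklass
    raise Exception("doesNotUnderstand: " + selector)

Number = Klass({
    "<": lambda self, other: self.state < other.state,
})

three = Obj(Number, 3)
four = Obj(Number, 4)
print(send(three, "<", four))  # True: dispatched on the receiver 'three'
```

Note the asymmetry the question points at: `<` is looked up only on the receiver, so comparing across unrelated classes means each class needs its own compare-with method (or a doesNotUnderstand: fallback).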


Pretty much although it doesn't end up being quite as bad as it sounds.

For example, your code wants to compare a string with some other object, if the other object understands 'string' then the built in method for comparing two strings can work. Otherwise you get a #doesNotUnderstand exception.

Making arbitrary comparisons is a bit silly though, so in practice it doesn't happen very often.


> Is the idea that each object has a dictionary mapping function names to code and sending a message to an object means execute whatever was found in that dictionary?

Isn't that how Python, Ruby, JS etc. work under the hood anyway? And yes it's quite predictably a drag on performance.


If one needs to run Squeak in a very minimal VM, is it possible? Any help with that?

I tried to build a VM from Squeak with graphics/sound turned off, and compiled the VM, but it just didn't work when I tried to run it.

Is there a relatively simple way to remove all the fancy objects and keep only the bare minimum that would at least run, so I can learn it further?

It has to be a VM that gives a REPL through the terminal and the ability to modify the image further; the VM needs to run without breaking if graphics/sound are not available - even when the Linux drivers aren't available, say on a Pi Zero W.

With my current limitations I cannot dive into learning the language deeply in any other setup.

Is it possible? I am sure someone has already done that somewhere. I am really trying to avoid dealing with C++, but the way the VM is done, it seems that instead of learning Squeak I have to dive into all the 'beauty' of compiling C with CMake instead.


Story time. When I was a young developer at a database company, my cubicle was next to a section where a team sat. The team consisted of 30 to 40 people working on a secret project. Whenever I asked them what they were working on, they said it was a secret next-gen technology. They spoke in secret code and big terms. The only thing I knew was that they were using Smalltalk as the language, as they weren't shy about speaking glowingly of it. They worked on powerful workstations with large monitors, a luxury at the time, as the Smalltalk dev studio required them. I thought these were god developers, working with advanced technologies on a covert skunkworks project.

The project went on for a couple of years. Nobody knew what they were doing - not their goals, their product plan, their features, nor their designs. The only thing we knew was that they were using Smalltalk.

The higher-ups eventually forced them to give a demo. The talk was good - on Smalltalk, on OO, and on the landscape of the technologies. When the demo started, it showed a half-finished login dialog box but nothing else. They said something about being unable to package the code and do a release. Understandably, software releases were a big-bang process back then. But what about prior releases? Nothing. They didn't have any finished builds. The talk continued.

The project got canceled a couple of months later.


Smalltalk was always good for rapidly developing GUIs, so whatever the reason was that they only had a half-finished login box to demo, it probably wasn’t Smalltalk’s fault.

Rapid development tools can’t help you if you don’t know what to build.


My guess is the secrecy of the project was the main contributing factor in its failure. There was no vetting of the product, features, design, development process, technologies, language choice, etc. Too many people read Skunk Works and thought they were Lockheed Martin.

The language choice probably contributed somewhat. Smalltalk used to have problems with source version control and configuration management. It was difficult for multiple people to work on the same codebase. Merging changes was difficult because the individual development environments were modified. The rapid prototyping was probably a false lead, as it lured decision makers into expecting lower development costs.

By and large when the only thing about the project you can talk about is the language used, it is probably doomed to fail.


I have dabbled with Pharo smalltalk a bit and got a similar feeling.

The core "thing" (pharo it self) works well enough, other stuff you'll likely have to write on your own (batteries are not included).

Also, multi-threading seems to be non-existent? No FOSS implementation does it, AFAIK (proprietary implementations are irrelevant).

I see the nicety of the language, but I'm not convinced (at all) by the implementation. Oh and Pharo seems to be the most advanced implementation of Smalltalk?

It really doesn't look practical at all.

But then again, the video starts and the guy is kind of deriding some Java or C++ code, but then you move forward (e.g. 8:31) and the speaker has a whole display filled with unrelated things and has to write code in a small text area, viewing like 4 lines at a time? Thanks, I'll pass...


Generally if your method in Smalltalk is 4 lines you're doing something wrong. Most good Smalltalk ends up being only 1 or 2 lines at the most, which is why the "coding" window always looked weird and small compared to contemporary languages.


I’ll be honest this looks like an arbitrarily made up rule and stinks like bs.


Your "has to write code in a small text area, viewing like 4 lines at the time" is a silly exaggeration.

The text area is likely resizable and scrollable.

Their "if your method in Smalltalk is 4 lines you're doing something wrong" is a silly exaggeration.

'Smalltalk "programs" are ensembles of objects, distributed throughout the class hierarchy, that work together by means of message passing produce computations.'

Individual methods tend to be small, and without a single large program text, beginners had problems finding "the program".


And this moment is why smalltalk is not for everybody


Before FOSS Smalltalk was considered to be batteries included ;-)

Smalltalk has been practical in past decades, that doesn't mean it's practical for you in this decade.

https://dl.acm.org/doi/pdf/10.1145/226239.226264


> Before FOSS Smalltalk was considered to be batteries included ;-)

The bar was so low back then :)


Yes, they called it programming not downloading :-)


Was the project called Jabberwocky by any chance? https://www.youtube.com/watch?v=spyJ5yxTfas


You know what's funny? At one of the startups, we brought in a TV crew to shoot commercials of us working. The commercials aired before the product was halfway ready. At one of the product demos at a trade show, the CEO was clicking through the software while I sat behind the stage inserting rows into the database on cue, to create the illusion that the product was working, since it was way before alpha.


I hope it was a beautifully designed login screen at least


Worth watching even if you skip to the last couple of minutes, where he tests a Game of Life scenario in a way that is not comparable to anything I've seen done with any other programming language.

This power that Smalltalk systems have where the code runs in a GUI that is also the editor/debugger/etc has deeply fascinated me recently.

At the same time it makes me kind of sad, because it feels like we're all stuck writing code with a hand tied to our back. Almost makes me think that there's a happy alternative coding universe where the Smalltalk way is the default way, and a couple of weirdos are using separate IDEs/editors and restart their program on every change, or think they're smart if they're running something as primitive as a REPL.

The reason why I'm not just diving in and never looking back is that it seems like such a big investment, and that I'm so comfy with my (very) extended Neovim setup. And I'd like to be able to create applications that run without shipping the entire Smalltalk VM. And I'd like to actually understand a tool that I'd have to dive into that deeply, and I think I'll never have the time to truly understand all of the VM, the classes, etc.

Maybe instead, I should just create my own language that contains its own code editor. Seems like almost the same time investment, but maybe more satisfying?

Sigh...

Anyway, here is some other related interesting material:

- Bret Victor - Inventing on Principle (Always funny to hear the crowd go nuts when he does the live game editing time thingy.)

- Pretty much any Alan Kay video.

- Dan Ingalls's "Lively Kernel", which seems to me to be pretty much the same as a Smalltalk system, but where you code in JavaScript and it runs in the browser.


> This power that Smalltalk systems have where the code runs in a GUI that is also the editor/debugger/etc has deeply fascinated me recently.

Have you tried emacs?

> And I'd like to actually understand a tool that I'd have to dive into that deeply, and I think I'll never have the time to truly understand all of the VM, the classes, etc.

I've recently tried to do that myself with Smalltalk via the Glamorous Toolkit[1] (a beautiful, modern Smalltalk environment based on Pharo). Because the programming environment itself comes with a Book teaching it, you can basically just read it as a normal digital book, but with the superpower that everything is editable and interactive: you can change the book itself, every code example is runnable and you can inspect the result objects right there, change it, modify the view for it... they say it's "moldable development" because you almost literally mold the environment as you write your code and learn about the platform.

> And I'd like to be able to create applications that run without shipping the entire Smalltalk VM.

That's why, even though I really enjoyed Smalltalk, I can't really see it as anything more than a curiosity. I tried using it at least for my own occasional data exploration, because it has good visualisation capabilities and a super easy to use HTTP client/JSON parser etc., but the system is so heavy (1GB+ of RAM) that I couldn't justify keeping it open all the time like I do with emacs, on the off chance that I might need it for some small task.

Anyway, perhaps that's something you might be interested in.

[1] https://gtoolkit.com/


> Have you tried emacs?

Here we go ...


This isn't a vim vs emacs debate. We're talking about smalltalk and what do you know! emacs was made to resemble the big expensive lisp machines which were in turn made in the image of smalltalk but with lisp as the language of choice.

With this in mind, recommending emacs when someone states that they are interested in a system where the "code runs in a GUI that is also the editor/debugger/etc" does not seem out of place at all to me.

Pharo can also be recommended, but it's a lot more resource-hungry and has a mouse-centric workflow compared to emacs, which is keyboard-focused.


There were not just "Lisp machines" but interactive Lisp implementations that came with an IDE. The first of those appeared in the 60s, using text/terminal interfaces. The first Lisp already had an interactive interface, and by the end of the 60s "BBN Lisp" had a full-blown resident development environment, also using "images". In the 70s Smalltalk was developed (providing images, garbage collection, etc., like Lisp before it), later also on dedicated machines - in the same time frame, Lisp was put onto such machines. BBN Lisp was then morphed by Xerox into Interlisp-D, which was "Interlisp" ported to the Xerox D machines, running on the metal with a GUI added.

The Interlisp timeline gives an idea of how much research into interactive development took place in the 60s and 70s.

https://interlisp.org/history/timeline/

GNU Emacs is based on a Lisp system, but the focus was (and is) to build a user-programmable, extensible editor, not to resemble a "Lisp machine", which was a computer with an operating system.


I wasn't commenting on Emacs vs Vim. My comment was regarding the meme-like nature of Emacs often being suggested for almost anything, almost "the Simpsons did it"-esque. My experience with Emacs is that it is a giant kluge, whereas Smalltalk and its kind are much more streamlined and simple. And Emacs is an editor driven and configured by a language. Smalltalk is a language that comes with an editor. They're different, and I don't think anyone is going to get a Smalltalk experience just working in Emacs.


> My experience with Emacs is that it is a giant kluge, whereas Smalltalk and its kind are much more streamlined and simple.

Yeah, and guess which one is *actually* widely used and still relevant today?


Neither, and for wildly different reasons.


This is correct. After trying out Smalltalk, I realised that Emacs has very much the same design: a minimal interpreted language which is used to implement a large program, where you can inspect and modify the program at any time. The major difference is that Emacs isn’t image-based.


I think this split is way overstated.

Consider the case where you fundamentally modify the invariants of an object. All your existing instances might now be in some inconsistent state. One way to avoid this is to enforce some unity of code and state. If, instead of modifying all the existing instances, we only modify instances created from this point forward, we don't risk messing with the invariants.

Well now we have a standard unix model. The executables are "classes" that get "instantiated" (exec'd) into "instances" or processes.


As an addendum, this experience was also available across the other Xerox PARC workstations, namely Interlisp-D, Mesa (XDE), and Mesa/Cedar.


> And I'd like to be able to create applications that run without shipping the entire Smalltalk VM.

There's always a javascript vm... https://squeak.js.org/


Python notebooks such as https://marimo.io/ seem to provide many of the same benefits.


> without shipping the entire Smalltalk VM

Are you OK shipping the entire JVM?


I got excited about Smalltalk around 1999, during my master's thesis. The sad story is that Java HotSpot had more technology than ParcPlace-Digitalk (Cincom) and kept growing. I love Smalltalk, but Java won, even when it was closed source. Try Squeak (or Cuis): it is beautiful, but sadly it has little traction.


Java didn't win; you are still free to code in whatever language you are comfortable with. At least until the great big Jenga-tower stack of modern-day dependencies and APIs your language relies on eventually stops working - which is the real killer, not how good or bad your chosen language is.


The Smalltalk technology acquired by Sun lives on in HotSpot, while IBM's Smalltalk technology lives on in Eclipse and J9, and Rogue Wave had one of the first collection frameworks.

It doesn't help when Smalltalk vendors were the first to jump into Java.


Small clarification: the Smalltalk technology absorbed by HotSpot was Strongtalk's static-type-based optimization techniques, combined with the dynamic recompilation (what we now call a JIT) originating from research done with Self (which was also part of Sun at the time).


I wonder if someone has considered building a modern variant of Smalltalk on top of elixir/erlang.

You’d get virtualization, multi threading, scalability, etc.

And consider git, DevOps, CI/CD and off we go?


Languages, and about languages, on the BEAM: https://github.com/llaisdy/beam_languages

PS: You might also find this interesting : https://www.grisp.org/


GNU Smalltalk is nice for writing small scripts, has TCL-like feel to it, while being a "real" programming language.


Tcl at least is still being developed, and it has a Lisp-like feel; like Python, it is a scripting language over libraries written in C and C++.

As someone that used real Smalltalk (Smalltalk/V), I could never get GNU Smalltalk.


GNU Smalltalk is for writing scripts - Smalltalk in name only. It's far closer to Lisp than Tcl, and faster than Tcl too.


So I decided to update my decades old outdated knowledge about GNU Smalltalk.

Last update in documentation, 2017.

The JIT compiler seems to still be experimental, and no new releases in 8 years.

It seems that in both cases, for a Lisp like experience, we are better off using SBCL or similar.


Smalltalk is homoiconic, like Lisp. Tcl is not; it just pretends to be.


Homoiconic refers to parts of a program (such as functions) being stored in source code form (or some tokenized form close to source code). The term was coined in the TRAC project in the 1960s: homo = same, icon = representation.

The POSIX shell is homoiconic because you can list the function definitions, and copy and paste them to redefine them. (Similarly to how the TRAC language supposedly allowed definitions to be recalled and edited.)

ANSI Common Lisp has an optional feature for obtaining the definition of a function in textual form. For that to work, the implementation has to waste space retaining that info when a function is compiled.

Related to that feature is a function called ed for editing a function with a resident editor.

Compiled Lisps that don't retain the function source code are definitely not homoiconic.

That source-code-being-data property of Lisp with application-defined code manipulation before code is interpreted or compiled is something other than "homoiconic".


Wikipedia cites a TRAC article:

"At any time, it should be possible to display program or procedural information in the same form as the TRAC processor will act upon it during its execution. It is desirable that the internal character code representation be identical to, or very similar to, the external code representation. In the present TRAC implementation, the internal character representation is based upon ASCII. Because TRAC procedures and text have the same representation inside and outside the processor, the term homoiconic is applicable, from homo meaning the same, and icon meaning representation."

> "it should be possible to display program or procedural information in the same form as the TRAC processor will act upon it during its execution"

This means that interpreted Lisp source is "homoiconic". Compiled Lisp code with stored source code would not be "homoiconic".


Exactly. But the "code-is-data" processing in Lisp that is mistakenly conflated with homoiconicity does not depend on Lisp being interpreted. That interpreted Lisp is homoiconic is of no consequence; it's nothing worth implementing elsewhere.

Homoiconicity is just a poorly optimized, simple language implementation technique for low-resource systems. (Low resource because if you keep the program definitions in the program run-time, you don't need a separate copy in a text editor, which would take up more RAM. This is why old-fashioned line-number BASIC is homoiconic.)

Today, we have no need for it. For values of today being the last 30-40 years.

Source code is kept in the buffers of an IDE; there is no need to have it squirreled away in the run-time. A running image can load compiled code; the goal of redefining the program while it is active doesn't require homoiconicity.


Your opinion is fringe. Here is a quote from wiki: "In computer programming, homoiconicity (from the Greek words homo- meaning "the same" and icon meaning "representation") is a property of some programming languages. A language is homoiconic if a program written in it can be manipulated as data using the language.[1] The program's internal representation can thus be inferred just by reading the program itself. This property is often summarized by saying that the language treats code as data."


Interesting. So we can compute source code!

What's the equivalent of this: take code and square the numbers in it, then execute it:

    CL-USER 33 > (eval (mapcar (lambda (atom)
                                 (typecase atom
                                   (number (expt atom 2))
                                   (t      atom)))
                               (cons '*
                                     (list 3 4 5))))
    3600
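For comparison, here is a rough Python sketch of the same trick. Python is not homoiconic, so instead of walking plain lists the code has to go through the stdlib `ast` module, but the "code as data" manipulation is still expressible: parse a source string, square every numeric literal, then execute the result.

```python
import ast

# Transformer that squares every numeric literal it finds.
class SquareNumbers(ast.NodeTransformer):
    def visit_Constant(self, node):
        if isinstance(node.value, (int, float)):
            return ast.copy_location(ast.Constant(node.value ** 2), node)
        return node

tree = ast.parse("3 * 4 * 5", mode="eval")          # code as data
tree = ast.fix_missing_locations(SquareNumbers().visit(tree))
result = eval(compile(tree, "<ast>", "eval"))       # data back to code
print(result)  # 9 * 16 * 25 = 3600
```

It works, but the detour through a dedicated AST type (rather than the language's ordinary data structures) is exactly the difference the Lisp example is pointing at.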


I used all those three languages in production, and know quite well the differences.

Your point is completely irrelevant to the usefulness of GNU Smalltalk.


I recommend you reread the thread. I said that Smalltalk is far closer to Lisp than Tcl, not whether it is more or less useful than either.


Is it actively developed? Tried to use it once, but the class browser didn't work under Ubuntu or Debian. At least not in the version available at the repositories.


Sadly it seems to be abandoned. It is not "real" Smalltalk; it is more like just another scripting language, so no class browser.


Last time I checked the mailing list there was a bit of activity.

It hasn't got much activity there, but it's not dead.


I still use it anyway.


It actually has one, gst-browser, but IIRC, it relies on a GTK version that is too old, so when we attempt to run it, GNU Smalltalk crashes.


To this day I remember, clear as water, the feeling when I was a teenager doing C/C++: Smalltalk was the first time I experienced the power of creating software that doesn't give you bad surprises for unknown reasons.


Given that Objective-C is modelled after Smalltalk, would that be a better choice today? (Ignoring Swift, as it is not dynamic.)


Can I have more of this? I mean a Smalltalk dev writing code while explaining.


Eight years ago I made a video series about making a website with a REST API using Pharo. I still have the videos on a backup somewhere, but the site where they were posted got lost. However, it turns out that archive.org luckily has a copy, including the videos:

Using the FileSystem class in Pharo Smalltalk: http://web.archive.org/web/20211017101402/http://societyserv...

Serving files through FileSystem in Pharo Smalltalk: http://web.archive.org/web/20211017092114/http://societyserv...

A static webapplication hosted on Pharo Smalltalk: http://web.archive.org/web/20211017095623/http://societyserv...

Building an API with Zinc-REST in Pharo Smalltalk: http://web.archive.org/web/20211017100201/http://societyserv...


MountainWest RubyConf 2014 - But Really, You Should Learn Smalltalk - https://www.youtube.com/watch?v=eGaKZBr0ga4

7 minutes of Pharo Smalltalk for Rubyists - https://www.youtube.com/watch?v=HOuZyOKa91o


Does anyone have any advice on how to become an auteur software engineer? I've tried visual programming, functional programming, and several others, and they are just so much more enjoyable and powerful (in a specific sense akin to productive) than mainstream languages like Python, C++, etc. But I'm relatively exhausted by having platforms dictated to me by the middle of the bell curve and, frankly, inexperienced people. Even something as simple and pragmatic as F# or Elixir seems to make people step back from you, as if you're cursed, much less something like Smalltalk. At this point in my career, I have given up and am just going to use Python and C++ like everyone wants you to. It's been a big enough hurdle to prove that I can write Python and C++ despite having used a plethora of languages.

So, how do you use these powerful platforms banished to personal projects? How does one get into a position such that you can dictate the platforms used? Is there anyone who has done this?

It's an unfortunate case of the people with the interesting applications having a very tunnel-visioned view of what could be with software.


Real programmers can write Fortran in any language.

I don't write Fortran, but I can make my C++ look a lot like a different language. Knowing how those other languages work makes my code better. Sure, the C++ syntax is ugly, but the design is what matters, and that takes inspiration from languages other than C++.


> Real programmers can write Fortran in any language. I don't write Fortran, but I can make my C++ look a lot like a different language.

That sounds like the wrong thing to do and a nightmare to encounter. I guess I'm not a "real programmer", which bolsters my original point.

> Knowing how those other languages work makes my code better.

Knowing is good. But it goes both ways as well. But C++ devs are fairly notorious for not liking different things. See Rust's history.

> Sure the C++ syntax is ugly, but the design is what matters and that take inspiration from languages other than C++.

The syntax isn't the point, or is at least a minor one. The main point is that languages like C++ make designing harder than it needs to be. Also, in a language like C++, nearly anything can happen at any moment. It's like rolling dice for every function call. God may play dice, but I don't like to.


The Fortran thing is from 1982. Worth a read/laugh: https://www.ecb.torontomu.ca/~elf/hack/realmen.html

As for the rest, whether it's a good or bad idea depends on what you do. Some things are good, some bad.


> The Fortran thing is from 1982. Worth a read/laugh: https://www.ecb.torontomu.ca/~elf/hack/realmen.html

Ah, I see. Thanks for the reference. I'll give it a read.


The main point is that languages like C++ make designs possible. The language doesn't dictate the design, the programmer does; the language will do whatever you need for it to do. You can make beautiful designs or ugly ones, the language doesn't care. But it will never refuse to execute your beautiful design just because that doesn't match its opinions about purity.

There are reasons why C++ is the preferred language for difficult problems, and why the highest-paid programmers all use C++ in their work. It is not because of some shadowy conspiracy dictating the future. Every detail of C++ was chosen for its usefulness first, so each new ISO C++ Standard is more useful and usable than the last. More people pick up C++ to use professionally for the first time each (short time unit) than the total number paid to use (current hipster language). That will be true for a long time.

Most importantly, C++ will never "run out of steam" when your project grows beyond its initial intention.


I recommend that you stop loving your tools and focus on solving problems.


And I ask: why not both? It's going to be a rough day at the job site hanging drywall with a screwdriver.

There isn't some dichotomy that makes us choose between good tools and solving problems.


A screwdriver is not effective at that job; that's the whole point. A good tool is a suitable and effective tool, not necessarily an aesthetically pleasing tool. Beauty of the tool is not a relevant metric.


I'm not sure what you're talking about. You said to stop worrying about tools, and I gave an example where the tool does indeed matter, and then you agree with me but act like you're disagreeing.

> A good tool is a suitable and effective tool, not necessarily an aesthetically pleasant tool. Beauty of the tool is not a relevant metric

No one is hinging upon aesthetics. They simply come with tools that have been purposefully designed. I might suggest reading some architecture books, from the older architects like Frank Lloyd Wright, or books on complex systems about the relationship between form and function.


I'm clarifying my point. You understand my point now, I'm sure. And yes, there are many, many programmers who get hung up on aesthetics.


No, I don't understand your point because you said to stop loving your tools, which I don't agree with.


The charitable (and more reasonable) way to interpret 'loving your tools' is not 'liking your tools intensely' but rather 'don't enter into an exclusive romantic relationship with your preferred tools'.

Generally if you can prove your ability to solve problems you get more credibility when deciding the tech stack, but given your assertion elsewhere that "tools are often dictated by those less experienced or even unknowledgeable", then your original question is basically how to prove yourself better than the "middle of the bell curve" under an unfair system.

I mean, the points you raised there are probably factual, but it's apparently not a really helpful (to yourself) way to look at things. Generally smart people are able to surround themselves with similar people, and don't really need to find ways to convince unknowledgeable people that they're smart. Software engineering is a huge field and I'm sure you could find people who more resonate with you.

And sometimes it just takes some years for people to realize you are right.

All that said, I think on the tooling part, it all just boils down to taste. Some tastes are more refined, but they're largely subjective or situational.


hnfong explained it better than I did


Paraphrasing Russ Ackoff: intelligent people don't aim to solve problems but to dissolve them, and the best way to do this is to build whatever it is you're building on excellent tools. We would have significantly fewer problems in the world of software if people started paying attention to the quality and mastery of their tools.


The issue is that programmers often focus on the aesthetic aspects of their tools. They like Smalltalk and Lisp because they are somehow elegant and beautiful. However, this is a red herring. The only metric that should be considered is how effective the tool is for solving the problem. Discerning between the utility and the aesthetics of a tool is an essential skill for any programmer to learn.


I liked Smalltalk because the language implementation provided source code for all the IDE tools: examples of how to build rich GUIs when that was new.

I liked Smalltalk because I could watch people working with a prototype and learn how to transform the work they were doing.

I liked Smalltalk with ENVY/Developer because I could see when someone created new unreleased editions of methods on classes that I'd changed, talk with them, resolve clashes, merge and release, in a tight development cycle.


Speaking for myself, utility plays a big role in how I perceive a tool in terms of aesthetics. It's not only about a language's grammar fitting in a business card and stuff like that.

Python and JavaScript are two languages that have been used for stuff they are obviously unsuitable for, at a vast scale.


Can you give an example of something Python has been used for that it’s not suitable for?


Scientific computing. I can't just write a "for" loop and expect it to perform. I have to use whatever is provided by packages which weren't written in it, or make a similar package myself.

I can't imagine any technical reason for this situation. Only perceived convenience and some kind of social dynamics.
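To make the claim concrete, here is a small sketch (assuming NumPy is installed; the problem size and timings are illustrative only) comparing a plain interpreted Python loop with the same sum of squares delegated to a package written in C:

```python
import time
import numpy as np  # assumption: NumPy is available

n = 100_000

# Plain interpreted loop: every iteration pays bytecode-dispatch
# and object-boxing overhead.
t0 = time.perf_counter()
total = 0.0
for x in range(n):
    total += x * x
t_loop = time.perf_counter() - t0

# The same computation delegated to NumPy's C internals.
arr = np.arange(n, dtype=np.float64)
t0 = time.perf_counter()
total_np = float((arr * arr).sum())
t_np = time.perf_counter() - t0

print(f"loop: {t_loop * 1e3:.1f} ms, numpy: {t_np * 1e3:.1f} ms")
```

On a typical CPython build the loop is one to two orders of magnitude slower, which is why numeric Python code ends up written in terms of such packages rather than plain `for` loops.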


No one brought up aesthetics except for you.


Yes, I know


> Russ Ackoff

Which book of his should I start with? Thanks for the reference.


The Art of Problem Solving is a good start given that most of his other work is more management oriented. But if you want an introduction to his kind of thinking, Thinking in Systems by Donella Meadows is fantastic as well.


Thanks for the suggestion! Yes, I'm familiar with Thinking in Systems and other systems thinkers but for whatever reason haven't read anything from Ackoff.


You can use a language chosen by the middle of the bell curve and still get a lot of satisfaction and success out of being on the right of the bell curve of skill in that language.

There are many mundane C++ programmers out there just getting things done, and there are world experts who always come up with something unique, interesting, and exceptionally valuable to do with it.

Most importantly, what they do has value to their company or even to the whole industry, so they get the full benefit of how popular their technology is while still getting the full enjoyment of doing something clever in it.

If you do want something less mundane than C++, try Rust, it also has a high skill ceiling but you'll spend less of your time fighting tech debt. I think this is the sweet spot for language choice right now, but I can still have a lot of fun writing delicately crafted Go despite the obvious and intentional appeal to the lowest common denominator there.


> You can use a language chosen by the middle of the bell curve and still get a lot of satisfaction and success out of being on the right of the bell curve of skill in that language.

This is very good pragmatic and realistic advice and really the only way to stay sane.

> If you do want something less mundane than C++, try Rust, it also has a high skill ceiling but you'll spend less of your time fighting tech debt. I think this is the sweet spot for language choice right now, but I can still have a lot of fun writing delicately crafted Go despite the obvious and intentional appeal to the lowest common denominator there.

I personally feel F# is a sweeter spot. Also, I have found Rust jobs hard to get because most positions seemingly want 5-10 years experience. I'm interested in the language but have run out of steam using up my spare time just to learn new languages rather than doing something more interesting or just reading and other hobbies.


Have you found F# jobs more abundant than Rust jobs? We might be in different bubbles. I know F# has been around much longer and is a first-class citizen in Microsoft's ecosystem, but I don't touch that ecosystem much. Shame, too, because I was a Scala fan a decade ago and it was hard to watch that self-destruct, all the while F# endured.

> I have found Rust jobs hard to get because most positions seemingly want 5-10 years experience

I would think of that as potential employers screening themselves out without wasting your time interviewing.

The kind of places I've worked already heavily use C++ and Go -- think world-scale production infrastructure with management and control planes -- so many teams adopt Rust in some form at some point. So suddenly a lot of people have, if not outright Rust jobs, then at least jobs where they could spend a large fraction of their time writing Rust, and on exactly the kind of mission-critical performance-sensitive software that Rust is ideal for.


> Have you found F# jobs more abundant than Rust jobs?

Absolutely not. Haha. F# jobs are non-existent. I only meant that from a language point of view.


> how to become an auteur software engineer

Step one is "get a real day job first", I guess.

(The same advice applies to writing and music too.)


It’s all about doing. Next time you need to prototype something do it in your ideal language (this is the crux as you need to knock it out of the park). The next phase is showing how much more work it would have been in blub. The final phase is being prepared and capable enough to teach your co-workers!


I won't say that the platform is irrelevant, but not the main problem.

You need to be so much more productive than the rest that, after following the usual useless mandatory hypertrophied process, you still have time to apply a sane problem-solving strategy.

That means: make your own tools for the platform you're forced to use.


> You need to be so much more productive than the rest that

In my personal experience, one would be surprised at how little that (productivity) matters to people. In my experience, tools are often dictated by those less experienced or even unknowledgeable. Thus, their criteria are based upon what they've heard of and what they perceive themselves and others being comfortable with.



