Hacker News
Faust: Functional programming language for sound synthesis and audio processing (grame.fr)
222 points by shiryel on Sept 4, 2021 | hide | past | favorite | 27 comments


Anyone interested in functional programming and sound synthesis might also find YampaSynth[0] interesting. It's a Haskell library based on declarative programming of modular synthesizers[1]. Definitely some very cool concepts.

[0] https://hackage.haskell.org/package/YampaSynth

[1] https://www.researchgate.net/publication/234793878_Switched-...


Looks cool, thanks for the link. I can’t find any documentation or info on actually using it, know if there are any articles or other resources floating around?


The full YampaSynth paper shows how to build synths using the library: https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.15...

There’s also an F# implementation with some documentation here: https://github.com/brianberns/FYampaSynth


I think Faust is great and one of the most fun things I've discovered in the past year... I've built a trombone simulator with it, a speech synthesizer, and now I'm making a DIY spatial audio setup. It compiles to C++ so it's portable to basically any platform, and it's super easy to produce webassembly, or a macOS executable, etc etc etc.
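For a sense of the syntax (a minimal sketch using the Faust standard library, not code from any of the projects mentioned above): a sine oscillator with two UI controls is only a few lines.

```faust
// minimal Faust program: sine oscillator with frequency and gain sliders
import("stdfaust.lib");
freq = hslider("freq", 440, 50, 2000, 0.01);
gain = hslider("gain", 0.5, 0, 1, 0.01);
process = os.osc(freq) * gain;
```

The `process` definition is the program's entry point; the compiler turns it into a DSP class in C++, WebAssembly, or whatever target you pick, and the sliders become parameters in the generated UI.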


Really fascinating. I was just now able to generate a VSTi for 32-bit Windows from one of the code examples, and it works!! There is no real UI, but the parameters are accessible (like the plugins made by Airwindows, for instance).

For some reason the win64 version hangs the host (Reaper), but seeing it work that simply is super exciting.

What did you use to learn Faust? Just the online tutorial, or some specific video? Do you make VSTs/VSTis of the things you mention? Are they published somewhere?


Stephane, the main Faust developer, recently announced this list of resources to learn Faust on the mailing list: https://github.com/grame-cncm/faustwebsite/blob/master/mkdoc....


By far the best tool I found was the online IDE [1]. Your program compiles to webassembly as you type it, and it plays in the browser. An interactive diagram is generated representing your program, so you can easily see if you made a mistake somewhere (though not always easy to see just where your mistake is).

I also learned by looking at the standard libraries' source code [2] and the online manual [3]. For the actual DSP knowledge, which I was learning at the same time and which isn't specific to Faust, I was reading J. O. Smith III's writings [4] (in particular, Physical Audio Signal Processing was most relevant to me).

I published my trombone simulator [5] as a web app but without the source. I haven't published the speech synthesizer or spatial audio stuff but I may in the future.

[1] https://faustide.grame.fr/

[2] https://github.com/grame-cncm/faustlibraries/

[3] https://faustdoc.grame.fr/

[4] https://ccrma.stanford.edu/~jos/

[5] https://nuchi.github.io/trombone/


What impresses me with Faust is the sheer number of targets they have: most audio plugin formats out there, but also web audio plugins, the new SOUL language (https://soul.dev/), Rust, C++, C, LLVM bitcode.


Also worth checking out Omni.

https://github.com/vitreo12/omni


I'm a big fan of Faust and have used it in many of my own audio prototypes. It's a great learning resource if you're new to DSP.

It's been an inspiration for me too in building Elementary Audio [1], which you might find interesting if you're a JavaScript developer.

[1] https://www.elementary.audio



This 2015 video (25 min) by Oli Larkin is a useful intro to Faust (with JUCE) for programmers, moving from basics to examples around the 20-minute mark.

https://www.youtube.com/watch?v=INlqClEOhak




It seems to me, after quickly reading the Liquidsoap documentation, that Faust is much more low-level: Liquidsoap connects different audio effects, whereas Faust can be used to write the audio effects themselves, at the level of sample-by-sample processing.


I would suggest actually looking at the Faust website; you will see that they are completely different types of applications.



Declaring version, author, license and copyright in these kinds of modules seems a smart thing to do.


The problem with audio processing has always been latency. Are there functional programming languages out there which can guarantee bounds on latency?


OCaml is used extensively for high-frequency trading, so functional languages are indeed used all the time in low-latency settings.

Galois has done work in Haskell for developing hard real-time applications. Here’s a dataflow language they built for example: https://leepike.github.io/pub_pages/rv2010.html


It looks like Copilot is an EDSL for generating C99 programs. I would think both OCaml and Haskell (and Java for that matter), as garbage-collected runtime languages, would be unsuitable for real-time use "out of the box".

The best marriage of functional concepts and realtime I've seen is Supercollider (https://supercollider.github.io/) which basically has a smalltalk-like lang control a realtime backend. I'd love to see something like this for Haskell + Faust!


Thanks for the comment, SuperCollider is really neat! I also just learned about Tidal Cycles, a Haskell library that builds on SuperCollider with a higher-level functional syntax.


It's functional, not interpreted, if that's what you're getting at.

It compiles down to deterministic C++. Each frame of audio takes exactly the same computation path; ditto for each sample within each frame. (Although some parameters might be updated once per frame, and others updated once per sample.)

You could inspect the output C++ and count up the number of cycles for each frame of audio, if you wanted.
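As an illustration of why the generated code is so predictable (a hypothetical snippet, not the parent commenter's code): even a recursive filter in Faust stays branch-free, so every sample follows the same path.

```faust
// one-pole smoothing filter: one multiply and one add per sample,
// with feedback via the ~ (recursive composition) operator.
// y(n) = 0.01 * x(n) + 0.99 * y(n-1)
import("stdfaust.lib");
process = *(0.01) : + ~ *(0.99);
```

The generated C++ for this is a straight-line loop over the buffer, which is why counting cycles per frame is feasible.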


Haskell is also a compiled language, but an often-heard complaint is that it doesn't offer a good handle on CPU usage and hence computation time.


> Are there functional programming languages out there which can guarantee bounds on latency?

Faust programs basically define a signal flow that gets compiled down to various languages; there won't be any latency besides whatever is part of your own algorithm.
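In other words, any latency has to be written into the signal flow explicitly. A hypothetical sketch using the standard delay library:

```faust
// the only latency here is the 4800-sample delay (100 ms at 48 kHz)
// that the program itself asks for; nothing is hidden by a runtime
import("stdfaust.lib");
process = de.delay(48000, 4800);
```

Everything else in the compiled program runs sample-synchronously, so the delay line is the whole latency budget.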


The problem with *digital* audio processing is latency.

Math operations on a DSP/CPU take cycle time. Cycle time varies, but even a simple comparison can cost 1-2 cycles, which adds nanoseconds of latency per operation; at 48 kHz you only have about 20 µs of budget per sample.


XTLang / Extempore?




