Go 1.21 Released (go.dev)
322 points by Mawr on Aug 8, 2023 | 168 comments


> When printing very deep stacks, the runtime now prints the first 50 (innermost) frames followed by the bottom 50 (outermost) frames, rather than just printing the first 100 frames. This makes it easier to see how deeply recursive stacks started, and is especially valuable for debugging stack overflows.

This is a very nice QoL improvement; I wish more languages would make this change.


100+ frames deep? I am a Python dev; is that normal in Go? Sounds atrocious.


Python will also do something like that, specifically it snips out repeated lines in self-recursive calls: https://github.com/python/cpython/issues/71010

    >>> a()
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "<stdin>", line 3, in a
      File "<stdin>", line 3, in a
      File "<stdin>", line 3, in a
      [Previous line repeated 894 more times]
      File "<stdin>", line 2, in a
      File "<stdin>", line 2, in b
      File "<stdin>", line 2, in c
      File "<stdin>", line 2, in c
      File "<stdin>", line 2, in c
      [Previous line repeated 97 more times]
    RecursionError: maximum recursion depth exceeded
Sadly it's not smart enough to handle indirect recursion, so in those cases you get a thousand lines of garbage.


“This makes it easier to see how deeply recursive stacks started, and is especially valuable for debugging stack overflows.”

Mistakes are normal in any language. Debugging an infinitely deep recursion can be challenging in most.


You will never hear that complaint again from developers trying to hunt down errors in infinite recursion ;-)


Well, it's normal in Java, and from what I have seen, despite being advised against it, a lot of folks coming from Java to Go have brought along Java-style heavily abstracted code and framework-first design patterns with them. This can definitely lead to very deep stack traces.


Or devs just started developing more and more complex systems, or libs with more and more extensibility needs.


It is normal in any language when you accidentally write an infinite recursion.


Seems to be the smallest Go release yet (at least since 1.4). I wonder how the distribution size was reduced over the last two versions?

   1.21    63MB
   1.20    95MB
   1.19   142MB
   1.18   135MB
   1.17   129MB
   1.16   123MB
   1.15   116MB
   1.14   118MB
   1.13   114MB
   1.12   121MB
   1.11   121MB
   1.10   114MB
   1.9     98MB
   1.8     86MB
   1.7     78MB
   1.6     81MB
   1.5     74MB
(linux-amd64 archives from https://go.dev/dl/)


Long term intentional effort: https://go.dev/issue/27151


All I could find for context about what caused it (which is not reflected in that issue):

> Go command: The directory $GOROOT/pkg no longer stores pre-compiled package archives for the standard library: go install no longer writes them, go build no longer checks for them, and the Go distribution no longer ships them. Instead, packages in the standard library are built as needed and cached in the build cache, just like packages outside GOROOT. This change reduces the size of the Go distribution and also avoids C toolchain skew for packages that use cgo.

https://tip.golang.org/doc/go1.20


The other two comments are wrong. The correct answer is that the compiler now builds everything on demand if it's not already built. This now includes the standard library as well. Previously, pre-built versions of the standard library packages were shipped.


It seems like it rebuilds itself with PGO. I can't tell if I'm being paranoid or accidentally included more than the stdlib in something, but I ran some random programs, restarted my computer, and the install is now at 320MB vs. 213MB when I first untarred it.

https://go.dev/doc/pgo

> PGO in Go applies to the entire program. All packages are rebuilt to consider potential profile-guided optimizations, including standard library packages.


PGO is opt-in, just like in Rust. You have to explicitly tell the compiler that you want to use it.


Go 1.21 uses PGO by default if the file default.pgo exists.

> The -pgo build flag now defaults to -pgo=auto, and the restriction of specifying a single main package on the command line is now removed. If a file named default.pgo is present in the main package's directory, the go command will use it to enable profile-guided optimization for building the corresponding program.


Possibly compiler/linker improvements. For example, FTA:

“In Go 1.21 the linker (with help from the compiler) is now capable of deleting dead (unreferenced) global map variables, if the number of entries in the variable initializer is sufficiently large, and if the initializer expressions are side-effect free.”

There also may be differences in debug symbols in the various binaries.
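
For example, the kind of declaration that linker change targets is an unreferenced package-level map with a large, side-effect-free initializer (a made-up illustration):

    package main

    // If nothing in the program references this map, the Go 1.21 linker (with
    // help from the compiler) can drop the variable and its initializer.
    var countryNames = map[string]string{
        "DE": "Germany",
        "FR": "France",
        "JP": "Japan",
        // ...imagine a few thousand more entries
    }

    func main() {}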


Felix Geisendörfer did a bunch of work to improve continuous profiling performance [1].

[1]: https://blog.felixge.de/waiting-for-go1-21-execution-tracing...


Indeed, this is awesome work.


I'm looking forward to the log/slog package. I might start switching over from our homegrown solution next week.
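
For anyone who hasn't looked at it yet, basic slog usage is roughly this (handler choice and fields are just illustrative):

    package main

    import (
        "log/slog"
        "os"
    )

    func main() {
        // A JSON handler writing structured records to stdout.
        logger := slog.New(slog.NewJSONHandler(os.Stdout, nil))

        // Alternating key-value arguments become fields on the record.
        logger.Info("request handled", "method", "GET", "status", 200)

        // Optionally install it as the process-wide default.
        slog.SetDefault(logger)
        slog.Warn("cache miss", slog.String("key", "user:42"))
    }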


We're currently using zerolog; I'll give slog a try when I have an hour to kill. One less third-party dependency is always a good thing!


Be aware that there is a performance impact compared to using zerolog directly [0] (my uneducated guess is it is likely due to pointer indirection).

[0]: https://github.com/rs/zerolog/issues/571#issuecomment-166202...


The backwards compatibility promise (as always) is really nice. Java/JVM is the only other language/platform that I can think of that does more or less the same. Using both extensively by the way.


Common Lisp has been standardized for longer than Java's life.

Also in the lisp world, Clojure has a fastidious dedication to backward compatibility.

Perl 5 has maintained backward compatibility (though I am not certain if it is 100% compatible) by putting changes behind flags.


C++ takes compatibility very seriously, though it has ABI stuff here and there that you need to deal with due to its fully native AOT compilation nature.


Kind of; there are plenty of breaking changes as well. Three that quickly come to mind: removal of exception specifications, removal of std::auto_ptr, and the new semantic changes due to the way <=> behaves.


For the former two, the breakages have replacements and were a vestige no one had used for a long time. Besides that, compilers have switches to treat throw() as noexcept.

As for the latter, those breakages are relatively rare.


Someone still has to write those replacements, and no, not all compilers have such switches.

Besides, that was only a quick set of examples; if I bothered going through my ISO copies, there would be a couple more to list.


Do you still think those are breakages that get in the way a lot?


Depends on how lucky one happens to be, especially in large code bases touched by dozens of consulting companies and offshore deliveries.


Just curious, isn't Go compilation fully native? Java, I know, is technically interpreted (compiled to bytecode rather than to native code), but Go?


Yes, I like to build my castles on solid ground rather than on quicksand.


Many other languages give the backwards compatibility promise. For instance, Vlang (https://vlang.io) does as well. But in their case, they limit it to a 3-year backward compatibility promise, which kind of makes more sense (also due to them being in beta). It gives them more flexibility to make needed changes to the language, but enough stability to limit issues for users. Though in general, such more open languages are going to be more responsive to users and their requests/needs, because of how they are structured.


Java/JVM has had breaking changes from Java 9 onwards, and now deprecated stuff is actually removed, not left around until the end of the universe.


Pretty sure Rust has this too right?


Rust is definitely not backwards-compatible in the sense that Go is backwards-compatible.


How so? What breaking changes have you hit? It has the same guarantee and takes it extremely seriously. The only code I know of that compiled on a stable post-1.0 compiler but broke was because of bugs, even those cases are rare.


I think std async is about as good an example as any.


The third-party crate which is not part of the language? https://github.com/async-rs/async-std


Do you mean async-std as in the 3rd party crate?


Both are too young for this to be evaluated fairly. It is easy to be backwards compatible over the short term.


Go is more than 10 years old, that's certainly old enough.


Not really. Rust has already had backward compatibility problems during its young life.

Why do you think you need to specify release and nightly to successfully build some projects?


Because the project is using nightly features? Which are specifically not guaranteed to remain or be stabilised?

That’s like complaining that the exp packages are not stable.


That is because features might be feature-complete and usable but not considered stable yet, and thus not part of stable builds. This is because Rust takes backwards compatibility seriously; it isn't an example of breaking it.


It's never backwards compatibility; it's that a lot of code depends on features that are not present in older releases.


> Go 1.21 now defines that if a goroutine is panicking and recover was called directly by a deferred function, the return value of recover is guaranteed not to be nil. To ensure this, calling panic with a nil interface value (or an untyped nil) causes a run-time panic of type *runtime.PanicNilError.

> To support programs written for older versions of Go, nil panics can be re-enabled by setting GODEBUG=panicnil=1. This setting is enabled automatically when compiling a program whose main package is in a module that declares go 1.20 or earlier.

An explanation of why this change was made is in the description of the implementing change, https://go-review.googlesource.com/c/go/+/461956. In short, calling `panic(nil)` previously made `recover` return nil, which is also what `recover` returns when there is no panic at all. `recover` doesn't have a `panicArg, ok = recover()` return-value variant, so they fixed it this way.
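
A minimal sketch of the ambiguity this removes (assuming a module that declares go 1.21, so the new behavior is active):

    package main

    import "fmt"

    func main() {
        defer func() {
            // Pre-1.21: recover() returned nil here for panic(nil), which was
            // indistinguishable from "no panic happened at all".
            // Go 1.21: recover() returns a *runtime.PanicNilError instead.
            if r := recover(); r != nil {
                fmt.Println("recovered:", r)
            } else {
                fmt.Println("looks like no panic happened")
            }
        }()
        panic(nil)
    }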


I'm a recent convert and enjoying it quite a bit for webdev. There are lots of books out there, but I found Let's Go and Let's Go Further to be great project-based books.


I'm a recent convert as well. Thanks for the recommendation, I'll look them up.



Re: https://github.com/golang/go/wiki/LoopvarExperiment

I've mainly used Java which doesn't suffer from this particular problem. But I've often heard Java criticized on the grounds it captures values not variables, and that most other languages apparently capture variables. I remember learning Lisp at uni and being taught that it captured variables, for example.

So do other languages suffer from this problem as well? How do they deal with it?


Note that there are two separate components to the issue:

1. the interaction between mutable locations and closures

In most languages with mutable bindings, if you close over a variable you're closing over a "cell" and will see future modifications to it. This is the case in Go, Python, Ruby, Javascript, C#, ...

A few languages avoid it e.g.

- Java, not because it "captures values" but because it only allows capturing "final" variables so you can't update the bindings (you can still mutate in place when capturing reference types)

- C++, because you have to specify the capture mode and it'll only be affected if you capture by reference (which you wouldn't do for a loop variable), so the likelihood of hitting this issue is pretty low

- Rust, because even if you capture by reference you can't modify a binding with an outstanding reference

2. the scoping of loop variables: if you're capturing the loop variable but each iteration has a different version of it, then you don't really mind too much, because each closure will capture its own loop variable. This doesn't fix the above issue, but it does fix the most common occurrence of it.

That's what Go is doing. Other languages which have done that are C# (when using a `foreach` loop), JavaScript (when using `let` or `const` for the loop variable), ...

Also Ruby, kind of: using the `each` method for iteration is very common, and since the loop variable is then a parameter to the block, it's basically a per-iteration local; "for...in" still has the issue. I guess C# also has that when using the LINQ ForEach method, but I don't know how common that is.
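
To make the Go case concrete, here's the classic pitfall and the usual workaround (just a sketch; with the loopvar change, the `i := i` copy becomes unnecessary):

    package main

    import "fmt"

    func main() {
        var funcs []func()
        for i := 0; i < 3; i++ {
            i := i // per-iteration copy; without it, pre-change Go prints 3 3 3
            funcs = append(funcs, func() { fmt.Println(i) })
        }
        for _, f := range funcs {
            f() // prints 0, 1, 2
        }
    }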


Probably worth pointing out that C# used to have Go's behavior and they changed it (a rare breaking language change) specifically because it was such a notorious pitfall.

I think Go is doing the right thing here. I'm somewhat surprised they didn't always do this. (Dart has created a new loop variable for each iteration all the way back since before 1.0.)


Similarly in JavaScript, “let” and “const” were added specifically with block and per-iteration scoping, because the ever-increasing popularity of callbacks for async was making the function-scoped var more infuriating every day.


In some cases, Common Lisp makes it implementation-defined. E.g. in (dolist (item list) ...) it is implementation-defined whether item is a freshly bound lexical or a stepped variable.

In Lisp, you can easily write macros that have whatever semantics you want. You can easily write a my-dolist which is like dolist but freshly binds the item variable for every iteration, and next to it write a my-dolist2 which assigns a single variable.

In languages like Go, the providers of your language provide all the syntax from behind a wall, and make these decisions for you. Knights in shining armor duke it out on the rooftop of an ivory tower, while lower vassals blog incessantly about the important work being done and decisions being made.

By the way, McCarthy's ancient Lisp (Lisp 1, Lisp 1.5) didn't have lexical scope, so it would not have made any difference whether each iteration stepped the variable with setq or bound it with let. There would just be the one dynamic variable in all cases, not captured by any lexical closures (those being nonexistent). You'd want to bind it at least once with let so the prior value is restored when the loop is done.

In Common Lisp, if we use a special variable as the loop variable, it likewise won't make a difference whether it is stepped or bound each time, since the binding isn't lexical.


Macros are exactly what makes Lisp a one-man language unless you are really careful.

That, combined with the fact that things are often poorly documented, is what IMHO ruins the language ecosystem.

I want to like Lisp and I love its interactivity, but I find it more of a "construction material" language than a ready-to-use one. What is bad about that is exactly what you mention: everyone will come up with a set of macros that others cannot fully grasp.


A poorly documented and tested library will give you problems no matter what kinds of entities it defines. Someone who writes macros that nobody can easily grasp will also write functions nobody can easily grasp.

If you have to reverse engineer someone's macros, you have all of the following going for you:

The macros run at compile time and then go away (unless the program is based on a dynamic compiling paradigm). They have no behaviors at run-time; their generated code does. No matter how complicated the macro, you can just run it and capture the output code, and through multiple examples get an understanding of what it's doing.

If someone writes a buggered function, you may have to end up debugging it on a target system, perhaps an embedded one.

The sky is the limit there. If it's a function in a kernel, you may have to debug some race condition between it, some interrupt handler and a piece of hardware.

No such thing ever happens when debugging a macro itself.

A macro doesn't interact with complex application state, because that doesn't exist. You never have to attach some gigabyte database and get some objects into the right state before reproducing some expansion problem in a macro. Code generated by a macro could be involved in a problem like that, possibly as a root cause, but tracing through the macro itself isn't.

The people who write awful macros are usually not smart enough to do anything that is actually hard to understand. The problems you will find are lack of hygiene, multiple evaluations and such.


> A poorly documented and tested libraries will give you problems no matter what kinds of entities it defines. Someone who writes macros that nobody can easily grasp will also write functions nobody can easily grasp

Lisp makes it so easy to write macros, and they have been promoted so much as the "my-powerful-dsl-is-the-right-way-to-solve-a-problem" approach that, IMHO, they work actively against teamwork unless they are really well documented. Now mix that with the fact that code is not usually deeply documented and you get a cultural problem across the whole ecosystem.

> A macro doesn't interact with complex application state, because that doesn't exist. You never have to attach some gigabyte database and get some objects into the right state before reproducing some expansion problem in a macro. Code generated by a macro could be involved in a problem like that, possibly as a root cause, but tracing through the macro itself isn't

Tracing the macro can be as difficult as tracing its expansion. If it gets obtuse, it will add a lot of time to your workflow. I know they are powerful, but it is probably the last feature I would use in a team: with a lot of care, only for trivial things or really justified cases, and with good documentation. If your code is full of macros, forget about readability; you've thrown it out the window.

> The people who write awful macros are usually not smart enough to do anything that is actually hard to understand. The problems you will find are lack of hygiene, multiple evaluations and such

Because writing macros is not that simple in the first place. I find it much easier to write regular code and use the well-known macros than to grasp someone else's macro code. There are things you can only do with macros, though, like lazy evaluation. But macros must be managed with extra care. For example, maybe instead of a macro it is better to use a higher-order function with a closure to keep the code understandable than to bury things in layers of macros.


> Lisp makes this so easy to write macros ... Because writing macros is not that simple in the first place

You have to get your talking points all in a row.


In fact most languages suffer from this, but it's more acute in Go because of how easy it is to create closures and threads.

For example, what does the following Python code print?

    funcs = list()
    for i in range(10):
        funcs.append(lambda: print(i))
    for f in funcs:
        f()


> most languages suffer from this

Specifically, I think, any language with closures where the loop variable exists in a scope enclosing all loop iterations, with a value that is updated each iteration rather than being a fresh variable scoped to each iteration, has this problem.

In Ruby this wouldn't affect idiomatic looping methods like #each, but would seem likely to affect the less-idiomatic for loop, which is built on #each but has the control variable in the surrounding scope. (Not 100% certain how it desugars though.)


> In Ruby this wouldn't affect idiomatic looping methods like #each, but would seem likely to affect the less-idiomatic for loop, which is built on #each but has the control variable in the surrounding scope. (Not 100% certain how it desugars though.)

I don't know how it desugars, but it is indeed affected when using `for...in`


I didn't know the answer to this, but it seems like this actually behaves closer to `funcs.append("""print(i)""")` since

    funcs = list()
    for i in range(10):
        funcs.append(lambda: print(i))
        del i
    for f in funcs:
        f()
actually raises NameError during f(), so I don't think it is actually closing over the i as much as the i was left in scope and the lookup was seemingly done lazily


Python has real closures. In my example, it is closing over i. In your example, you're deleting the closed-over variable. It's a bit weird, and Python is the only language that I know of that lets you do this.


Right. In this example lambda does not capture i at all. You would need something like

    funcs.append(lambda i=i: print(i))


No, it definitely captures i

Maybe this example where it returns the closures from a function is convincing?

    def fs():
        funcs = list()
        for i in range(10):
            funcs.append(lambda: print(i))
        return funcs
   
    for f in fs():
        f()


prints all 9s.


Interesting, didn't think about closures in Python before. It seems that JS has the same behavior.

Java forces you to make a final copy of the variable.


I lost track of ES scoping rules but apparently `for (var i ...) { ... }` doesn't capture (legacy behavior) while `for (let i ...) { ... }` will.


I don't think that's limited to just `var`, as a `let` outside of the `for` will do the same

    let funcs = [];
    ;(() => {
        let i;
        for (i = 0; i < 10; i++) {
            funcs.push(function() { console.log('i=',i); })
        }
    })()
    for (let f of funcs) {
        f.call(null);
    }
(I used an IIFE to hide the "let i" from the .call to ensure it wasn't late binding to the name like the python example did)


You think they have a special rule for `for (let .. of ..)` then?


The “special rule” is that for(let) and for(const) create a new binding for each iteration, which mitigates the issue.


It's called late binding and some Pythonistas actually consider it a feature.


Adding min/max to a language made 10 years ago is kinda crazy when you think about it. Really looking forward to the log/slog package; it seems neat and was deeply needed.


Hah, I just went down a whole rabbit hole on super-logarithms and tetration wondering why you found it so critical as to warrant standard-library inclusion.


Note that min/max were already available in the math package, but they’re a bit inconvenient to use (lots of casting to do)


While it's true that it had math.Min/Max, it's not just the inconvenience of casting that this solves. You can't cast uint64 to float64 and expect the function to return the correct result, e.g.:

  x1 := uint64(9007199254740993)
  x2 := uint64(9007199254740992)
  fmt.Println(uint64(math.Max(float64(x1), float64(x2))))
will print 9007199254740992.

min/max and math.Min/Max are not equivalent.

The correct way to write min/max before was to use comparison operators. math.Min/Max was there because it requires tricky special cases for handling -Inf, Inf, NaN, and negative zero.
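
As a sketch of the difference, with the 1.21 built-ins the same comparison stays in integer land (values reused from the example above):

    package main

    import "fmt"

    func main() {
        x1 := uint64(9007199254740993)
        x2 := uint64(9007199254740992)

        // Go 1.21 built-ins: generic over ordered types, no float64 round-trip.
        fmt.Println(max(x1, x2)) // 9007199254740993

        // The pre-1.21 idiom for non-float types was a plain comparison.
        m := x1
        if x2 > m {
            m = x2
        }
        fmt.Println(m) // 9007199254740993
    }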


The log/slog package is huge. I'm super happy to see them start expressing opinions again and solving more problems with just the standard library, and I hope to see more of it. This was Go's Thing at 1.0: you could build a production-ready, large-scale HTTP service with nothing but the standard library and, probably, a structured logging package like Zap. Now it looks like we may not even need that!

If anyone on the Go team is reading this: thanks! And, as Hacker News must always be the armchair commentator, if I may ask for an addition in Go 1.22: encoding/yaml. Build it in. Go and YAML are the languages of devops, and I've written so many scripts with JSON as a configuration file only because Go has a JSON parser built in, but not YAML.


> if I may ask for an addition in go 1.22: encoding/yaml

FYI, encoding/yaml was recently requested, and was declined in https://github.com/golang/go/issues/61023#issuecomment-16106..., for reasons I largely agree with.

(I work for Google but not on Go)


> I've written so many scripts with JSON as a configuration file only because Go has a JSON parser built-in, but not YAML.

Sounds like a feature. Less Yaml in the world is a good thing.


Helloooooo riscv64-linux support! EDIT: lol this fickle site. Not my fault they didn't publicize the fact that they're now publishing riscv64 binary releases (allowing distros to bootstrap riscv64 targets now).


Not sure why you're getting downvoted. We got a pull request to add this to our Caddy build process today: https://github.com/caddyserver/caddy/pull/5720


Seems the submissions for "Go $v Released" have been 50/50 pointing to the doc URL versus the blog URL: https://go.dev/blog/go1.21


Crypto-related stuff keeps moving out of the standard lib. First it was PGP, now elliptic. Why doesn't Go want to provide stdlib crypto stuff?


Go is absolutely committed to the cryptography standard library. What's happening is that we're deprecating legacy protocols and APIs to invest resources on packages that better align with the Cryptography Principles [0].

In particular, crypto/elliptic was an unfortunate API that has been deprecated in favor of the new crypto/ecdh. Most applications can migrate (on their own time, as we don't break backwards compatibility even for deprecated packages) and get better security and performance. (A very small portion of applications might need lower-level functionality, in which case they can use third-party modules based on the stdlib internals, like filippo.io/nistec.) You can read more in my Go 1.20 [1] and Go 1.21 [2] posts.

[0]: https://golang.org/design/cryptography-principles

[1]: https://words.filippo.io/dispatches/go-1-20-cryptography/

[2]: https://words.filippo.io/dispatches/go-1-21-plan/
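
For anyone curious what the migration target looks like, a minimal crypto/ecdh key agreement is roughly this (error handling elided for brevity):

    package main

    import (
        "bytes"
        "crypto/ecdh"
        "crypto/rand"
        "fmt"
    )

    func main() {
        curve := ecdh.P256()

        // Each party generates its own key pair.
        alice, _ := curve.GenerateKey(rand.Reader)
        bob, _ := curve.GenerateKey(rand.Reader)

        // Each side derives the shared secret from its private key and the
        // peer's public key.
        s1, _ := alice.ECDH(bob.PublicKey())
        s2, _ := bob.ECDH(alice.PublicKey())

        fmt.Println(bytes.Equal(s1, s2)) // true
    }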


Two reasons I can think of:

1) speed to update it in the case of a bug / security issue.

2) backwards compatibility.

Getting a new Go release out is likely a lot more time-consuming than bumping a crypto library that lives outside of it. And those libraries do not require full backwards compatibility, unlike the Go standard library. So they can make breaking changes if need be without breaking the Go compatibility contract.


For 1) they create patch releases of Go versions fairly often. So I don't think that makes sense.


Perhaps they would prefer not to create them so frequently.


My bet is it has something to do with Filippo Valsorda having left Google, but that's pure speculation on my part.

https://github.com/FiloSottile


Nope :) I'm still a maintainer, and actually happen to be the one that drove those two deprecations. https://words.filippo.io/full-time-maintainer/


Glad to have my wild speculation disproven :) And glad that becoming a full time independent maintainer is working out for you!

Though, in hopes you see this, I have to ask: is there any sort of plan for if you wanted to step down from a maintainer's role on the Go crypto packages?

You're sort of known as "the" Go crypto person, so I guess it occurred to me that the bus factor there might be a concern. Any thoughts you can give to assuage fears in that regard? Is there anything the community can do to share the load, short of becoming cryptography experts?


How comfy is Go for general native-performance coding tasks? Basically looking for a C/C++ replacement. (I don't really like how Rust looks syntactically, and I prefer GC over manual memory management.)


If you're looking for absolute top speed, it's not as fast as C++ or Rust. Assembly functions can't be inlined, for example. But Go allows you to control your memory structures, and that gets you good cache utilisation compared to, say, Java. Go is plenty fast enough for most applications.


As a C replacement, I would say it is relatively OK, especially if you look at it from the point of view of Limbo in Inferno.

As a C++ replacement, I would say it is too lacking in features. For that I would rather reach for C#, D, Java (with OpenJ9/Graal), or even Haskell/OCaml.


My personal experience has been great, though for large projects C++ is definitely more robust.

Really like how Go handles parallelism and creating threads.


I'm curious what are some of the reasons C++ is better for large projects?


I think Go is a really good fit to write async servers. Its set of features make it a killer language for that use case.


> I don't really like how Rust looks syntactically

I thought I was the only one!


It's generally designed to be the replacement you're looking for. It comes from Google, which has a lot of C++ code. It's memory managed, but relatively fast. You can't beat languages that are designed for speed (C, Rust), but it's not Python, and it's closer to Rust than to Python in terms of speed. Similar to Java, probably. If you're writing a web server or anything similar you'll be fine. I wouldn't reach for it to crunch numbers though (e.g. an ML model, image manipulation).


Look into Nim or Zig instead if you're coming from C++


I would also recommend to take a look at D.


I'm not an expert at all, but I wouldn't. I would reach for nim or even zig before go. But it depends on what you're building, I'm sure!


> Go 1.21 improves build speed by up to 6%, largely thanks to building the compiler itself with PGO.

Go already compiles fast, and it still gets faster release after release. Love it!


No basic Map or Filter functions in the new slices pkg. Sadness.


I think it's a reasonable first pass at implementing generic functions.

I'd love to see Map/Filter/Reduce, but starting with the simplest set of generic functions and getting people comfortable with them is probably a good idea.

Go idiomatically is very imperative and introducing a new programming style at the same time as introducing generics might be too much change at once.

I'd also like to see an option and a result type at some point so we can stop doing nil checks all over the place.
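
For reference, the kind of Map/Filter people are asking for is only a few lines with generics; a sketch (not part of the slices package):

    package main

    import "fmt"

    // Map applies f to every element of s and returns the results.
    func Map[T, U any](s []T, f func(T) U) []U {
        out := make([]U, 0, len(s))
        for _, v := range s {
            out = append(out, f(v))
        }
        return out
    }

    // Filter returns the elements of s for which keep returns true.
    func Filter[T any](s []T, keep func(T) bool) []T {
        out := make([]T, 0, len(s))
        for _, v := range s {
            if keep(v) {
                out = append(out, v)
            }
        }
        return out
    }

    func main() {
        nums := []int{1, 2, 3, 4, 5}
        fmt.Println(Map(nums, func(n int) int { return n * 2 }))        // [2 4 6 8 10]
        fmt.Println(Filter(nums, func(n int) bool { return n%2 == 0 })) // [2 4]
    }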


Makes sense given the idea of getting some form of iterator support into the language; collection-specific HOFs mean more duplication and worse composition, as the intermediate collections generally need to be allocated on every step and often can't be optimised away because you're stuck with your defined order of evaluation.


If we're lucky they won't add them until they have some halfway decent fusion working in the compiler.



So happy they called it delete. I can never remember if filter keeps or removes the values that evaluate to true when I use other languages.
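
A quick sketch of the semantics, since the naming is the point here (DeleteFunc removes the elements the predicate matches):

    package main

    import (
        "fmt"
        "slices"
    )

    func main() {
        s := []int{1, 2, 3, 4, 5}
        // DeleteFunc removes the elements for which the function returns true.
        s = slices.DeleteFunc(s, func(n int) bool { return n%2 == 0 })
        fmt.Println(s) // [1 3 5]
    }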


> crypto/ecdsa PublicKey.Equal and PrivateKey.Equal now execute in constant time.

Glad to see this pushed. We went to look at how Go was handling constant-time comparisons and noticed it was not. This is not considered a serious security issue, as attacker-controlled private key attacks are generally considered out of scope, but it's still nice for the library to be safe. (See https://github.com/golang/go/issues/53849)
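
For background, the usual stdlib primitive for this kind of comparison is crypto/subtle; a sketch of the idiom (the byte slices here are just placeholders):

    package main

    import (
        "crypto/subtle"
        "fmt"
    )

    func main() {
        a := []byte("expected-mac")
        b := []byte("provided-mac")

        // ConstantTimeCompare's running time depends only on the length of the
        // inputs, not on where (or whether) they differ.
        if subtle.ConstantTimeCompare(a, b) == 1 {
            fmt.Println("equal")
        } else {
            fmt.Println("not equal")
        }
    }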


The “Loopvar” experiment looks like a backward compatibility nightmare. As confusing as the existing behaviour can be, the new experiment is asking for trouble.


It's a breaking change, but rsc did very thorough testing of publicly available code and internal Google code and found essentially only bug fixes. But I think the risk is why this is an experiment and not a straight-up change. Also, the new behavior would only be enabled in modules that declare Go 1.22 or above, not globally for the whole program, iiuc. So to me, this is very thoughtfully planned and unlikely to be any kind of nightmare.



When C# shipped this change I was working on a large C# codebase and we were very worried about this. After auditing the entire codebase and checking every for loop, we found two pre-existing bugs in code which expected the 5.0 behavior and zero places which relied on the old behavior.

Everyone that I've ever talked to who went through the transition has a similar story. In practice no one even incidentally relied on the old behavior. Go rolling it out as an experimental opt-in feature is a good idea, but it'll probably turn out to be unnecessarily cautious and they could have gotten away with just unconditionally enabling it.


C# only did this on `foreach` loops, not on traditional `for` loops. Go plans to do this on both (`for-range` loops and traditional `for` loops).


> no programs will change behavior due to simply adopting the new Go toolchain

Seems like they thought about that. I think it's a good change.


Is it? The experiment is just that, a way to test your packages for breakage, the proposition to opt in via langfile is similar to Rust edition systems which work fine.

Way back when C# actually went through that change unconditionally (LangVersion was not a thing yet), and there was barely any breakage.


The existing behavior is extremely unlikely to be helpful, even accidentally. I will be shocked if anyone actually hits a problem related to this change. It may well fix more bugs than it introduces.


Forgetting to add `foo := foo` in a for loop tends to be bigger trouble. =)



For `for-range` loops, it looks like the benefits of this change outweigh the drawbacks.

For `for ...; ...; ...` loops, personally, I think the conclusion is the reverse.

One breaking case:

    func main() {
        defer println()
        for counter, i := 0, 0; i < 3; i++ {
            defer func() {
                counter++
                print(counter)
            }()
        }
    }
It will print 111 when the change is made (today it prints 123).

More surprising cases: https://github.com/golang/go/issues/60078#issuecomment-15443...


Why don't any of the examples of surprise breakage link to real code relying on this behavior?


why do you ask such a confusing question?


The question seems completely straightforward.


It is confusing why such a question was asked. It shows the asker lacks basic logical ability.


The experiment changes behavior, nobody's contesting that. Does it change behavior in any scenario that matters?


As far as I know, no one has proved that it doesn't change behavior in any scenario that matters, including the proposal makers.

Have you proved it?


Proving that is impossible. As said before, rsc has analyzed publicly available source and found the change fixed bugs in the ones where it made any difference.

If you want to argue, you really need to show an example of where the change causes a bug in any scenario that matters. With a real, preexisting, program.


If it is impossible, why do you ask me that question?

I totally disagree with your logic. I think you don't have the right logic here.

At the least, the proposal should only be accepted after the experiment period has lasted for a year. But the proposal has been accepted before the experiment period even started. Isn't that weird?

If you want evidence, please read my recent tweets: https://twitter.com/go100and1

My logic: every demo code matters.


Proving the non-existence of a proprietary source file somewhere that contains code which will become buggy with this experiment is impossible; we don't know all the source code in the world.

Proving the existence of a program that contains code which will become buggy is easy: share the code. Show actual program code, not a hypothetical; nobody is saying the experiment doesn't change behavior, people are saying it only changes behavior that doesn't matter to real world code.

On real world code discovered so far, the experiment would fix bugs.


Your logic is too illogical. It is the proposal's supporters' responsibility to prove the proposal doesn't do harm. If they are unable to do this, please don't pretend that this has been proven, and please just stop making any arguments.

It is not my (or any other Go programmer's) responsibility to do this. Here, I just show the possibility that the proposal will do harm, and help others find the broken cases easily. Of course, Go programmers can, in their spare time, help the proposal's supporters do this. But, again, it is strange that the proposal was accepted before such attempts were made. :D, so weird.

And please read my tweets; there are several factors which are unrelated to finding broken code. In fact, the change will make code more error-prone, not less.


I'm still looking for a language that will solve my biggest issue:

- dependency hell..

Recently had to update an unmaintained project, bumping dependencies 5 versions higher. It was so painful with Go.


Vendoring might help with building old code. It stores a copy of dependencies in a directory which you can include in git. https://go.dev/ref/mod#vendoring

As for upgrading old libraries which might have broken backwards compatibility, I can't offer much.


how would that help with building old code? is old code often unavailable upstream?


Vendoring creates a copy of the dependency files. So no internet connection would be required to build it IF (and that is a big if) the project dependencies were vendored and versioned back then.

Interestingly, committing vendored files is a common point of contention between devs. Some argue that it adds bloat to the project repo.

Unless I'm working with TB sized monorepos, I always vendor and commit, because hardware is cheap and my time valuable. I have 10 year old projects that build just fine without internet connection because all deps are commited with the project.

As a bonus I get a free diff of every single line that changed when a dependency is updated.


I don't use Go a lot anymore, but I sometimes use Nix's buildGoModule:

https://nixos.wiki/wiki/Go

Oh wait... You mean because of breaking language changes?


Needing to change an entire dependency version (let alone four more versions) is a sign of human error.

It is unlikely that software will fix a bumbling human. No matter how hard it tries, humans will always find some new way to do something stupid.


I am really excited to start using the new structured logger.


Yes!

Logging always felt like something that (1) almost every project needs, and (2) there are too many 3rd party packages, with no "de facto" choice.

This is particularly painful when working with 3rd party library code. There's a high chance that the consumer of library code uses a different logging package to the library author.

Having structured logging in the stdlib is fantastic, because now there is a "go way" to this that libraries can use.


Does this mean that all the third party logging packages will likely die out in the future? Or will they likely implement adapters to be compatible with the std interface?


From the original proposal <https://go.googlesource.com/proposal/+/master/design/56345-s...>

1 "We expect existing logging packages to coexist with this one for the foreseeable future."

2 "We expect that this proposal’s handlers will be implemented for all popular logging formats and network protocols, and that every common logging framework will provide a shim from their own backend to a handler"


Even after seeing the API, which is enormous and wildly non-intuitive?


I've seen the API, and it's not either of those things if you've already used a structured logger?


I've never seen a structured logging library with concepts like Attrs, Groups, or Records like those defined by slog.


Attr is the same concept as https://pkg.go.dev/go.uber.org/zap#Field , and Zap pioneered this whole field.

Group is pretty much https://pkg.go.dev/go.uber.org/zap#Namespace

Record is the interface to log sinks, not something the typical programmer worries about.
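
For illustration, this is roughly how those concepts look in use (handler choice and field values are made up for the example):

    package main

    import (
        "log/slog"
        "os"
        "time"
    )

    func main() {
        logger := slog.New(slog.NewTextHandler(os.Stdout, nil))

        // With attaches Attrs to every subsequent record; WithGroup namespaces
        // later keys under "request".
        reqLogger := logger.WithGroup("request").With(slog.String("id", "abc123"))

        reqLogger.Info("handled",
            slog.Int("status", 200),
            slog.Duration("elapsed", 42*time.Millisecond),
        )
    }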


Haha, zap did not pioneer anything, what are you talking about.


"Haha", great attitude dissing other people's work. https://news.ycombinator.com/newsguidelines.html

What Go structured logging libraries predate Zap's 2016 creation? Only one I can remember is Logrus, which was using type Fields map[string]interface{}, the bad qualities of which are kinda the whole reason for Zap's creation, and slog follows the Zap-style API[1].

[1]: Though ignoring many of the optimizations..


Could you expand what you mean?

When skimming it I mostly saw the common API structure that I usually see in libraries / write myself when I need a logger, so I generally welcome the standardization.


slog defines the concepts of

- Logger -- created by New, accepting a Handler, providing a fixed set of level-based logging methods, asserting concepts of Attrs and Groups, not parameterizable by consumers

- Handler -- with two default implementations, asserting concepts of Attrs Groups and Records, parameterizable by consumers as long as they follow the semantics defined by the interface

- Attr -- arbitrary concept that maps to a k/v pair in a log record

- Group -- arbitrary concept that namespaces a set of k/v pairs in a log record

- Record -- arbitrary concept that requires a timestamp (expensive to compute), a level (one of a specifically defined enum which cannot be changed), and a PC stack pointer (obvious issues there)

I've never seen a logging package which meets these requirements.


What else would you expect from a structured logging package?

To me it absolutely makes sense as the default and standard for 99% of applications, and the API isn't much unlike something like Zap[0] (a popular Go structured logger).

The attributes aren't an "arbitrary" concept, they're a completely normal concept for structured loggers. Groups are maybe less standard, but reasonable nevertheless. The timestamp is not required - the documentation specifies it can be left as the zero value and shall be ignored in that case.

I'm not sure if you're aware that this is specifically a structured logging package. There already is a "simple" logging package[1] in the stdlib; it has been there for ages and isn't particularly fast either, to my knowledge. If you want really fast you take a library (which would also make sure to optimize allocations heavily).

[0]: https://pkg.go.dev/go.uber.org/zap

[1]: https://pkg.go.dev/log


I'm intimately familiar with structured logging.

The domain concepts defined by slog are unquestionably abnormal.

From a structured logging package I would expect a far simpler Logger API with a Log method something like

    Log(pairs ...KeyValuePair) error
or maybe

    Log(r Record) error
and no concept of Attrs or Groups or (explicit) Handlers.


> An Attr is a key-value pair.

So s/KeyValuePair/Attr/g?

Re varargs vs record, varargs are usually used in Go because they're very ergonomic to write inline, so they work very nicely with loggers.

Group sounds like a fairly arbitrary concept, I agree, but still very reasonable for something that's supposed to standardize things. It would totally make sense for e.g. different libraries to each inject their own group with key-values into the context.

I'm not sure what's the problem with Handlers? This is supposed to be a standard library package that is setting... well, standards that other libraries will adhere to. Handlers let you plug your own output formats.

The package is not supposed to be "as bare bones as possible". Just "good default others will be able to integrate and compose with".

I've used my fair share of structured loggers too, and this really is par for the course and a completely reasonable set of things to include in a structured logging package.

---

It's worth noting that the previously-mentioned Zap, which is probably the most popular structured logging library in go right now, contains all the same concepts, just differently named:

Attr -> Field

Group -> Namespace

Record -> Entry

Handler -> Encoder/Writer


There is no reason to distinguish a Handler from a Logger. A Handler is just a Logger which has a side effect, nothing more.

There is no reason to distinguish an Attr or Group or Record from a (set of) KeyValuePairs. They're all the same thing.

And I don't agree that Zap is the most popular structured logging library. It's one of many, none are the clear winner.

But, whatever.


> There is no reason to distinguish a Handler from a Logger.

A Handler is an abstract type, a Logger is a concrete one. A Logger lets you have a large user surface (and without virtual calls); and Handler, a small implementer surface. (The handler is not much beyond what you proposed, plus an optimization to avoid materializing complete record+KV pairs if the level isn't met.)

The concepts also map fairly directly to logrus and zerolog (which has an absolutely enormous surface to avoid boxing).


I don't think so.

A Handler is something that transforms log events to concrete outputs (files, disks, writes to stderr, etc.) via side effects.

A Logger is what programs use to produce, transform, etc. log events.

Both are abstract interfaces with arbitrarily many possible implementations. Both are defined in terms of their user-facing capabilities. I'm not sure how those definitions would differ. Both accept log events and do something with them. That interface is the same.


If slog needs to avoid virtual calls or boxing or whatever else for reasons of performance, I guess I feel very strongly that it should do so in a way that doesn't impact users, and shouldn't let these concerns influence the API design.


"Ignore performance when designing your API" is an obvious non-starter. I don't think you're serious.

More specifically, users want the surface area Logger has. They do not want a single function with a complex object specification, just like they don't want a function per type.


There is obviously an enormous difference between "don't let performance influence your API (to become awkward or nonidiomatic)" and "ignore performance when designing your API".


But you've offered no actual criteria why the API is bad! Again, most logging users want something to handle nested record contexts for them and per-level methods, most logging implementers want a minimum number of entry points, and that's exactly what this API offers. (It's definitely not zerolog-style where it sacrifices ease of use for performance.)


Structured logging means logging where discrete events are sets of key=val pairs, and nothing more.

What is a record context? What is a level? These are concepts built on top of a structured logger, which manifest as specific key=val pairs in a given structured log event. They don't need -- and shouldn't have! -- first-order representation in a low-level structured logger API.

What is an entry point? If I'm writing some structured logger implementation, I expect that I should need to provide precisely one method:

    func (x *MyThing) Log(<set of key=val pairs>) error
Anything more is cruft.


Zap author here :) Logrus remains quite a bit more popular than zap. (Does anyone remember the epic tire fire when Sirupsen briefly changed the casing of his GitHub username?)

From my perspective, the problem with the KeyValuePairs API is that it’s inescapably slow and allocation-heavy. I’m glad that a logging API built into the standard library is usable in more performance-sensitive contexts. It’s easy enough to wrap it up in a KeyValuePairs-style API if you’d prefer that, and you’ll now have the ability to unwrap your logger and interop with other libraries that expect the standard slog interface.


If you meant in other ecosystems: Microsoft.Extensions.Logging.


Is anyone experimenting with the WASI feature?


Sure. For example serving TCP/HTTP with net.FileListener works, which really opens things up at least for me.

https://stackoverflow.com/a/76711399


Is it just me or is the new import initialization algorithm n^2 in the worst case? Surprised by that.


That's just the easy specification of the algorithm; it's implemented with a heap and a counter on each package, so I assume it's more like n lg n.
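
A sketch of that approach (roughly Kahn's algorithm with a min-heap keyed by import path; the package graph here is made up):

    package main

    import (
        "container/heap"
        "fmt"
    )

    // pathHeap is a min-heap of import paths, so ties break lexically.
    type pathHeap []string

    func (h pathHeap) Len() int           { return len(h) }
    func (h pathHeap) Less(i, j int) bool { return h[i] < h[j] }
    func (h pathHeap) Swap(i, j int)      { h[i], h[j] = h[j], h[i] }
    func (h *pathHeap) Push(x any)        { *h = append(*h, x.(string)) }
    func (h *pathHeap) Pop() any {
        old := *h
        x := old[len(old)-1]
        *h = old[:len(old)-1]
        return x
    }

    // initOrder repeatedly picks the lexically first package whose
    // uninitialized-dependency counter has dropped to zero. Each edge is
    // visited once and each heap operation is O(log V), so the total is
    // O((V+E) log V) rather than V^2.
    func initOrder(deps map[string][]string) []string {
        remaining := make(map[string]int)       // uninitialized deps per package
        dependents := make(map[string][]string) // reverse edges
        for p, ds := range deps {
            remaining[p] = len(ds)
            for _, d := range ds {
                dependents[d] = append(dependents[d], p)
            }
        }
        h := &pathHeap{}
        for p, n := range remaining {
            if n == 0 {
                heap.Push(h, p)
            }
        }
        var order []string
        for h.Len() > 0 {
            p := heap.Pop(h).(string)
            order = append(order, p)
            for _, q := range dependents[p] {
                remaining[q]--
                if remaining[q] == 0 {
                    heap.Push(h, q)
                }
            }
        }
        return order
    }

    func main() {
        deps := map[string][]string{
            "a": {"b", "c"},
            "b": {"c"},
            "c": {},
        }
        fmt.Println(initOrder(deps)) // [c b a]
    }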


But in the worst case, initializing one package causes the count in each of the remaining packages to update. Maybe they have a clever way of doing that. Or maybe this is very unlikely to happen in practice.


If you take n as packages (vertices) you can have n² dependencies (edges), each of which needs to be looked at once no matter what. So if you count that way n² is unavoidable in any dependency sorting. (Usually for such algorithms I see n = |v| + |e|, or max(|v|, |e|), or just left as |v| and |e|.)


Wow, this was a very "we made everything a little better" release. Lots of thanks to the Go devs!



