> When printing very deep stacks, the runtime now prints the first 50 (innermost) frames followed by the bottom 50 (outermost) frames, rather than just printing the first 100 frames. This makes it easier to see how deeply recursive stacks started, and is especially valuable for debugging stack overflows.
This is a very nice QoL improvement, I wish more languages made the change.
    >>> a()
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "<stdin>", line 3, in a
      File "<stdin>", line 3, in a
      File "<stdin>", line 3, in a
      [Previous line repeated 894 more times]
      File "<stdin>", line 2, in a
      File "<stdin>", line 2, in b
      File "<stdin>", line 2, in c
      File "<stdin>", line 2, in c
      File "<stdin>", line 2, in c
      [Previous line repeated 97 more times]
    RecursionError: maximum recursion depth exceeded
Sadly it's not smart enough to handle indirect recursion, so in those cases you get a thousand lines of garbage.
Well, it's normal in Java, and from what I have seen, despite being advised against it, a lot of folks coming from Java to Go have brought along Java-style heavily abstracted code and framework-first design patterns. This can definitely lead to very deep stack traces.
All I could find for context about what caused it (which is not reflected in that issue):
> Go command
The directory $GOROOT/pkg no longer stores pre-compiled package archives for the standard library: go install no longer writes them, go build no longer checks for them, and the Go distribution no longer ships them. Instead, packages in the standard library are built as needed and cached in the build cache, just like packages outside GOROOT. This change reduces the size of the Go distribution and also avoids C toolchain skew for packages that use cgo.
The other two comments are wrong. The correct answer is that the compiler now builds everything on demand if it's not already built, and this now includes the standard library. Previously, pre-built versions of the standard library packages were shipped with the distribution.
It seems like it rebuilds itself with PGO. I can't tell if I'm being paranoid or accidentally included more than the stdlib in something, but I ran some random programs, restarted my computer, and the install is now at 320MB vs. 213MB when I first untarred it.
> PGO in Go applies to the entire program. All packages are rebuilt to consider potential profile-guided optimizations, including standard library packages.
Go 1.21 uses PGO by default if the file default.pgo exists.
> The -pgo build flag now defaults to -pgo=auto, and the restriction of specifying a single main package on the command line is now removed. If a file named default.pgo is present in the main package's directory, the go command will use it to enable profile-guided optimization for building the corresponding program.
Possibly compiler/linker improvements. For example, FTA:
“In Go 1.21 the linker (with help from the compiler) is now capable of deleting dead (unreferenced) global map variables, if the number of entries in the variable initializer is sufficiently large, and if the initializer expressions are side-effect free.”
There also may be differences in debug symbols in the various binaries.
The backwards compatibility promise (as always) is really nice. Java/JVM is the only other language/platform that I can think of that does more or less the same. Using both extensively by the way.
Kind of, there are plenty of breaking changes as well, three that quickly come to mind, removal of exception specifications, std::auto_ptr<>() removal, and the new semantic changes due to the way <=> behaves.
For the former two, the removed features have replacements and were a vestige no one had used for a long time. Besides that, compilers have switches to treat throw() as noexcept.
As for the latter, those breakages are relatively rare.
Many other languages give the backwards compatibility promise. For instance Vlang (https://vlang.io), does as well. But in their case, they limit it to a 3 year backward compatibility promise, which kind of makes more sense (also due to them being in beta). It gives them more flexibility to make needed changes to the language, but enough stability to limit issues for users. Though in general, such more open languages are going to be more responsive to users and their requests/needs, because of how they are structured.
How so? What breaking changes have you hit? It has the same guarantee and takes it extremely seriously. The only code I know of that compiled on a stable post-1.0 compiler but broke was because of bugs, even those cases are rare.
That is because features might be feature-complete and usable, but not considered stable yet and thus not part of stable builds. This is because Rust takes backwards compatibility seriously; it isn't an example of breaking it.
> Go 1.21 now defines that if a goroutine is panicking and recover was called directly by a deferred function, the return value of recover is guaranteed not to be nil. To ensure this, calling panic with a nil interface value (or an untyped nil) causes a run-time panic of type *runtime.PanicNilError.
> To support programs written for older versions of Go, nil panics can be re-enabled by setting GODEBUG=panicnil=1. This setting is enabled automatically when compiling a program whose main package is in a module that declares go 1.20 or earlier.
An explanation of why this change was implemented is contained in the description of the implementing change, https://go-review.googlesource.com/c/go/+/461956. In short, calling `panic(nil)` before would make `recover` return nil, which is what's returned when there is no panic. `recover` doesn't have a `panicArg, ok = recover()` return value variant, so they fixed it this way.
I'm a recent convert and enjoying it quite a bit for webdev. There's lots of books out there, but I found let's go and let's go further to be great project based books
I've mainly used Java which doesn't suffer from this particular problem. But I've often heard Java criticized on the grounds it captures values not variables, and that most other languages apparently capture variables. I remember learning Lisp at uni and being taught that it captured variables, for example.
So do other languages suffer from this problem as well? How do they deal with it?
Note that there are two separate components to the issue:
1. the interaction between mutable locations and closures
In most languages with mutable bindings, if you close over a variable you're closing over a "cell" and will see future modifications to it. This is the case in Go, Python, Ruby, Javascript, C#, ...
A few languages avoid it e.g.
- Java, not because it "captures values" but because it only allows capturing "final" variables so you can't update the bindings (you can still mutate in place when capturing reference types)
- C++, because you have to specify the capture mode and it'll only be affected if you capture by reference (which you wouldn't do for a loop variable), so the likelihood of hitting this issue is pretty low
- Rust, because even if you capture by reference you can't modify a binding with an outstanding reference
2. the scoping of loop variables. If each iteration has a different version of the loop variable, then capturing it doesn't matter much, because each closure will capture its own copy. This doesn't fix the above issue, but it does fix its most common occurrence.
That's what Go is doing, other languages which have done that are C# (when using a `foreach` loop), Javascript (when using `let` or `const` for the loop variable), ...
Also Ruby, kind-of: using the `each` method for iteration is very common, and since then the loop variable is a parameter to the block it's basically a per-iteration local, "for...in" still has the issue. I guess C# also has that when using the linq ForEach method but I don't know how common that is.
Probably worth pointing out that C# used to have Go's behavior and they changed it (a rare breaking language change) specifically because it was such a notorious pitfall.
I think Go is doing the right thing here. I'm somewhat surprised they didn't always do this. (Dart has created a new loop variable for each iteration all the way back since before 1.0.)
Similarly in JavaScript, `let` and `const` were added specifically with block and per-iteration scoping, because the ever-increasing popularity of callbacks for async was making the function-scoped `var` more infuriating every day.
In some cases, Common Lisp makes it implementation-defined. E.g. in (dolist (item list) ...) it is implementation-defined whether item is a freshly bound lexical, or a stepped variable.
In Lisp, you can easily write macros that have whatever semantics you want. You can easily write a my-dolist which is like dolist but freshly binds the item variable for every iteration, and next to it write a my-dolist2 which assigns a single variable.
In languages like Go, the providers of your language provide all the syntax from behind a wall, and make these decisions for you. Knights in shining armor duke it out on the rooftop of an ivory tower, while lower vassals blog incessantly about the important work being done and decisions being made.
By the way, McCarthy's ancient Lisp (Lisp 1, Lisp 1.5) didn't have lexical scope, so it would not have made any difference whether each iteration stepped the variable with setq or bound it with let. There would just be the one dynamic variable in all cases, not captured by any lexical closures (those being nonexistent). You'd want to bind it at least once with let so the prior value is restored when the loop is done.
In Common Lisp, if we use a special variable as the loop variable, it likewise won't make a difference whether it is stepped or bound each time, since the binding isn't lexical.
Macros are exactly what makes Lisp a one-person language unless you are really careful.
That, combined with the fact that things are often poorly documented is what IMHO ruins the language ecosystem.
I want to like Lisp and I love its interactivity but I find the language a bit more "construction material" language than a ready-to-use immediately language. What is bad about that is exactly what you mention: everyone will come up with a set of macros that others cannot fully grasp.
A poorly documented and tested library will give you problems no matter what kinds of entities it defines. Someone who writes macros that nobody can easily grasp will also write functions nobody can easily grasp.
If you have to reverse engineer someone's macros, you have all of the following going for you
The macros run at compile time and then go away (unless the program is based on a dynamic compiling paradigm). They have no behaviors at run-time; their generated code does. No matter how complicated the macro, you can just run it and capture the output code, and through multiple examples get an understanding of what it's doing.
If someone writes a buggered function, you may have to end up debugging it on a target system, perhaps an embedded one.
The sky is the limit there. If it's a function in a kernel, you may have to debug some race condition between it, some interrupt handler and a piece of hardware.
No such thing ever happens when debugging a macro itself.
A macro doesn't interact with complex application state, because that doesn't exist. You never have to attach some gigabyte database and get some objects into the right state before reproducing some expansion problem in a macro. Code generated by a macro could be involved in a problem like that, possibly as a root cause, but tracing through the macro itself isn't.
The people who write awful macros are usually not smart enough to do anything that is actually hard to understand. The problems you will find are lack of hygiene, multiple evaluations and such.
> A poorly documented and tested library will give you problems no matter what kinds of entities it defines. Someone who writes macros that nobody can easily grasp will also write functions nobody can easily grasp
Lisp makes it so easy to write macros, and they have been promoted as the "my-powerful-dsl-is-the-right-way-to-solve-a-problem" approach, that IMHO it works actively against teamwork unless everything is really well-documented. Now mix that with the fact that code is not usually deeply documented, and you get a cultural problem in the whole ecosystem.
> A macro doesn't interact with complex application state, because that doesn't exist. You never have to attach some gigabyte database and get some objects into the right state before reproducing some expansion problem in a macro. Code generated by a macro could be involved in a problem like that, possibly as a root cause, but tracing through the macro itself isn't
Tracing through a macro can be as difficult as reading its expansion. If it gets obtuse, it will add a lot of time to your workflow. I know they are powerful, but it is probably the last feature I would use in a team: with a lot of care, and only for trivial things or really justified cases, with good documentation. If your code is full of macros, forget about readability; you threw it out the window.
> The people who write awful macros are usually not smart enough to do anything that is actually hard to understand. The problems you will find are lack of hygiene, multiple evaluations and such
Because writing macros is not that simple in the first place. I find it much easier to write regular code and use the well-known macros than to grasp anonymous macro code. There are things you can only do with macros, though, like lazy evaluation. But macros must be managed with extra care. For example, maybe instead of a macro it is better to use a higher-order function with a closure to keep the code understandable than to bury things in layers of macros.
Specifically, I think, any language with closures where the loop variable exists in a scope enclosing all loop iterations with a value that is updated each iteration rather than a fresh variable in a scope local to each loop iteration has this problem.
In Ruby this wouldn't affect idiomatic looping methods like #each, but would seem likely to affect the less-idiomatic for loop, which is built on #each but has the control variable in the surrounding scope. (Not 100% certain how it desugars though.)
> In Ruby this wouldn't affect idiomatic looping methods like #each, but would seem likely to affect the less-idiomatic for loop, which is built on #each but has the control variable in the surrounding scope. (Not 100% certain how it desugars though.)
I don't know how it desugars, but it is indeed affected when using `for...in`
I didn't know the answer to this, but it seems like this actually behaves closer to `funcs.append("""print(i)""")` since
    funcs = list()
    for i in range(10):
        funcs.append(lambda: print(i))
    del i
    for f in funcs:
        f()
actually raises a NameError during f(), so I don't think it is closing over i so much as i was left in scope and looked up lazily.
Python has real closures. In my example, it is closing over i. In your example, you're deleting the closed-over variable. It's a bit weird, and Python is the only language that I know of that lets you do this.
I don't think that's limited to just `var`, as a `let` outside of the `for` will do the same
    let funcs = [];
    ;(() => {
      let i;
      for (i = 0; i < 10; i++) {
        funcs.push(function() { console.log('i=', i); })
      }
    })()
    for (let f of funcs) {
      f.call(null);
    }
(I used an IIFE to hide the "let i" from the .call to ensure it wasn't late binding to the name like the python example did)
Adding min/max to a language made 10 years ago is kinda crazy when you think about it.
Really looking forward to the log/slog package; it seems neat and was deeply needed.
Hah, I just went down a whole rabbit hole on super-logarithms and tetration wondering why you found it so critical as to warrant standard-library inclusion.
While it's true that it had math.Min/Max, it's not just the inconvenience of casting that this solves. You can't cast a uint64 to float64 and expect the function to return the correct result, e.g.:
The correct way to write min/max before was to use comparison operators. math.Min/Max was there because it requires tricky special cases for handling -Inf, Inf, NaN, and negative zero.
The log/slog package is huge. I'm super happy to see them start again to express opinions and solve more problems with just the standard library, and I hope to see more of it. This was Go's thing at 1.0: you could build a production-ready, large-scale HTTP service with nothing but the standard library (and, probably, a structured logging package like Zap). Now it looks like we may not need it!
If anyone on the Go team is reading this; thanks! And, as hacker news must always be the armchair commentator, if I may ask for an addition in go 1.22: encoding/yaml. Build it in. Go and YAML are the languages of devops, and I've written so many scripts with JSON as a configuration file only because Go has a JSON parser built-in, but not YAML.
Helloooooo riscv64-linux support! EDIT: lol this fickle site. Not my fault they didn't publicize the fact that they're now publishing riscv64 binary releases (allowing distros to bootstrap riscv64 targets now).
Go is absolutely committed to the cryptography standard library. What's happening is that we're deprecating legacy protocols and APIs to invest resources on packages that better align with the Cryptography Principles [0].
In particular, crypto/elliptic was an unfortunate API that has been deprecated in favor of the new crypto/ecdh. Most applications can migrate (on their own time, as we don't break backwards compatibility even for deprecated packages) and get better security and performance. (A very small portion of applications might need lower-level APIs, in which case they can use third-party modules based on the stdlib internals, like filippo.io/nistec.) You can read more in my Go 1.20 [1] and Go 1.21 [2] posts.
1) speed to update it in the case of a bug / security issue.
2) backwards compatibility.
Getting a new Go release out is likely a lot more time-consuming than bumping a crypto library that lives outside of it. And those libraries do not require full backwards compatibility compared to the Go standard library, so they can make breaking changes if needed without violating the Go compatibility promise.
Glad to have my wild speculation disproven :) And glad that becoming a full time independent maintainer is working out for you!
Though, in hopes you see this, I have to ask: is there any sort of plan for if you wanted to step down from a maintainer's role for the Go crypto packages?
You're sort of known as "the" Go crypto person, so it occurred to me that the bus-factor risk might be quite high. Any thoughts you can give to assuage fears in that regard? Is there anything the community can do to share the load, short of becoming cryptography experts?
How comfy is go for general native performance coding tasks? Basically looking for a C/C++ replacement. (I don't really like how Rust looks syntactically, and prefer GC over manual memory management.)
If you're looking for absolute top speed, it's not as fast as C++ or Rust. Assembly functions can't be inlined, for example. But Go lets you control your memory layout, which gets you good cache utilisation compared to, say, Java. Go is plenty fast enough for most applications.
As a C replacement, I would say it is relatively OK, especially if you look at it from the point of view of Limbo in Inferno.
As a C++ replacement, I would say it is too lacking in features. For that I would rather reach for C#, D, Java (with OpenJ9/Graal), or even Haskell/OCaml.
It’s generally designed to be the replacement you’re looking for. It comes from Google, which has a lot of C++ code. It's memory-managed, but relatively fast. You can’t beat languages designed for speed (C, Rust), but it’s not Python, and it’s closer to Rust than to Python in terms of speed. Similar to Java, probably. If you’re writing a web server or anything similar, you’ll be fine. I wouldn’t reach for it to crunch numbers, though (e.g. an ML model, image manipulation).
I think it's a reasonable first pass at implementing generic functions.
I'd love to see Map/Filter/Reduce, but starting with the simplest set of generic functions and getting people comfortable with them is probably a good idea.
Go idiomatically is very imperative and introducing a new programming style at the same time as introducing generics might be too much change at once.
I'd also like to see an option and a result type at some point so we can stop doing nil checks all over the place.
Makes sense given the idea of getting some form of iterators support into the language, collection-specific HoFs means more duplication and worse composition as the intermediate collections generally need to be allocated on every step, and often can't be optimised away because you're stuck with your defined order of evaluation.
>crypto/ecdsa PublicKey.Equal and PrivateKey.Equal now execute in constant time.
Glad to see this pushed. We went to look at how Go was handling constant-time comparisons and noticed it was not. This is not considered a serious security issue, as attacker-controlled private key attacks are generally considered out of scope, but it's still nice for the library to be safe. (See https://github.com/golang/go/issues/53849)
The “Loopvar” experiment looks like backward compatibility nightmare.
As confusing as the existing behaviour could be, the new experiment is asking for trouble.
It’s a breaking change, but rsc did very thorough testing of publicly available code and internal Google code and found essentially only bug fixes. Still, I think the risk is why this is an experiment and not a straight-up change. Also, the new behavior would only be enabled in modules that declare Go 1.22 or above, not globally for the program, iiuc. So to me, this is very thoughtfully planned and unlikely to be any kind of nightmare.
When C# shipped this change I was working on a large C# codebase and we were very worried about this. After auditing the entire codebase and checking every for loop, we found two pre-existing bugs in code which expected the 5.0 behavior and zero places which relied on the old behavior.
Everyone that I've ever talked to who went through the transition has a similar story. In practice no one even incidentally relied on the old behavior. Go rolling it out as an experimental opt-in feature is a good idea, but it'll probably turn out to be unnecessarily cautious and they could have gotten away with just unconditionally enabling it.
Is it? The experiment is just that: a way to test your packages for breakage. The proposition to opt in via the go version in go.mod is similar to Rust's edition system, which works fine.
Way back when C# actually went through that change unconditionally (LangVersion was not a thing yet), and there was barely any breakage.
The existing behavior is extremely unlikely to be helpful, even accidentally. I will be shocked if anyone actually hits a problem related to this change. It may well fix more bugs than it introduces.
Proving that is impossible. As said before, rsc has analyzed publicly available source and found the change fixed bugs in the ones where it made any difference.
If you want to argue, you really need to show an example of where the change causes a bug in any scenario that matters. With a real, preexisting, program.
If it is impossible, why do you ask me that question?
I totally disagree with your logic. I think you don't have the right logic here.
At least, the proposal should be accepted when the experiment period has lasted for a year. But now, the proposal has been accepted before the experiment period started. Is it weird?
Proving the non-existence of a proprietary source file somewhere that contains code which will become buggy with this experiment is impossible; we don't know all the source code in the world.
Proving the existence of a program that contains code which will become buggy is easy: share the code. Show actual program code, not a hypothetical; nobody is saying the experiment doesn't change behavior, people are saying it only changes behavior that doesn't matter to real world code.
On real world code discovered so far, the experiment would fix bugs.
Your logic is illogical. It is the proposal's supporters' responsibility to prove the proposal doesn't do harm. If they are unable to do this, please don't pretend that this has been proven, and please stop arguing as if it had been.
It is not my (or other Go programmers') responsibility to do this. Here, I just show the possibility that the proposal will do harm, and help others find the broken cases more easily. Surely Go programmers, in their spare time, can help the proposal's supporters do this. But, again, it is strange that the proposal was accepted before such attempts were made. :D, so weird.
And please read my tweets; there are several factors unrelated to finding broken code. In fact, the change will make code more error-prone, not less.
Vendoring might help with building old code. It stores a copy of dependencies in a directory which you can include in git. https://go.dev/ref/mod#vendoring
As for upgrading old libraries which might have broken backwards compatibility, I can't offer much.
Vendoring creates a copy of the dependency files. So no internet connection would be required to build it IF (and that is a big if) the project dependencies were vendored and versioned back then.
Interestingly, committing vendored files is a common point of contention between devs. Some argue that it adds bloat to the project repo.
Unless I'm working with TB-sized monorepos, I always vendor and commit, because hardware is cheap and my time is valuable. I have 10-year-old projects that build just fine without an internet connection because all deps are committed with the project.
As a bonus I get a free diff of every single line that changed when a dependency is updated.
Logging always felt like something that (1) almost every project needs, and (2) there are too many 3rd party packages, with no "de facto" choice.
This is particularly painful when working with 3rd party library code. There's a high chance that the consumer of library code uses a different logging package to the library author.
Having structured logging in the stdlib is fantastic, because now there is a "go way" to this that libraries can use.
Does this mean that all the third party logging packages will likely die out in the future? Or will they likely implement adapters to be compatible with the std interface?
1. "We expect existing logging packages to coexist with this one for the foreseeable future."
2. "We expect that this proposal’s handlers will be implemented for all popular logging formats and network protocols, and that every common logging framework will provide a shim from their own backend to a handler"
What Go structured logging libraries predate Zap's 2016 creation? Only one I can remember is Logrus, which was using type Fields map[string]interface{}, the bad qualities of which are kinda the whole reason for Zap's creation, and slog follows the Zap-style API[1].
When skimming it I mostly saw the common API structure that I usually see in libraries / write myself when I need a logger, so I generally welcome the standardization.
- Logger -- created by New, accepting a Handler, providing a fixed set of level-based logging methods, asserting concepts of Attrs and Groups, not parameterizable by consumers
- Handler -- with two default implementations, asserting concepts of Attrs Groups and Records, parameterizable by consumers as long as they follow the semantics defined by the interface
- Attr -- arbitrary concept that maps to a k/v pair in a log record
- Group -- arbitrary concept that namespaces a set of k/v pairs in a log record
- Record -- arbitrary concept that requires a timestamp (expensive to compute), a level (one of a specifically defined enum which cannot be changed), and a PC (program counter; obvious issues there)
I've never seen a logging package which meets these requirements.
What else would you expect from a structured logging package?
To me it absolutely makes sense as the default and standard for 99% of applications, and the API isn't much unlike something like Zap[0] (a popular Go structured logger).
The attributes aren't an "arbitrary" concept, they're a completely normal concept for structured loggers. Groups are maybe less standard, but reasonable nevertheless. The timestamp is not required - the documentation specifies it can be left as the zero value and shall be ignored in that case.
I'm not sure if you're aware that this is specifically a structured logging package. There already is a "simple" logging package[1] in the stdlib, has been there for ages, and isn't particularly fast either to my knowledge. If you want really fast you take a library (which would also make sure to optimize allocations heavily).
Re varargs vs record, varargs are usually used in Go because they're very ergonomic to write inline, so they work very nice with loggers.
Group sounds like a fairly arbitrary concept, I agree, but still very reasonable for something that's supposed to standardize things. It would totally make sense for e.g. different libraries to each inject their own group with key-values into the context.
I'm not sure what's the problem with Handlers? This is supposed to be a standard library package that is setting... well, standards that other libraries will adhere to. Handlers let you plug your own output formats.
The package is not supposed to be "as bare bones as possible". Just "good default others will be able to integrate and compose with".
I've used my fair share of structured loggers too, and this really is par for the course and a completely reasonable set of things to include in a structured logging package.
---
It's worth noting that the previously-mentioned Zap, which is probably the most popular structured logging library in go right now, contains all the same concepts, just differently named:
> There is no reason to distinguish a Handler from a Logger.
A Handler is an abstract type, a Logger is a concrete one. A Logger lets you have a large user surface (and without virtual calls); and Handler, a small implementer surface. (The handler is not much beyond what you proposed, plus an optimization to avoid materializing complete record+KV pairs if the level isn't met.)
The concepts also map fairly directly to logrus and zerolog (which has an absolutely enormous surface to avoid boxing).
A Handler is something that transforms log events to concrete outputs (files, disks, writes to stderr, etc.) via side effects.
A Logger is what programs use to produce, transform, etc. log events.
Both are abstract interfaces with arbitrarily many possible implementations. Both are defined in terms of their user-facing capabilities. I'm not sure how those definitions would differ. Both accept log events and do something with them. That interface is the same.
If slog needs to avoid virtual calls or boxing or whatever else for reasons of performance, I guess I feel very strongly that it should do so in a way that doesn't impact users, and shouldn't let these concerns influence the API design.
"Ignore performance when designing your API" is an obvious non-starter. I don't think you're serious.
More specifically, users want the surface area Logger has. They do not want a single function with a complex object specification, just like they don't want a function per type.
There is obviously an enormous difference between "don't let performance influence your API (to become awkward or nonidiomatic)" and "ignore performance when designing your API".
But you've offered no actual criteria why the API is bad! Again, most logging users want something to handle nested record contexts for them and per-level methods, most logging implementers want a minimum number of entry points, and that's exactly what this API offers. (It's definitely not zerolog-style where it sacrifices ease of use for performance.)
Structured logging means logging where discrete events are sets of key=val pairs, and nothing more.
What is a record context? What is a level? These are concepts built on top of a structured logger, which manifest as specific key=val pairs in a given structured log event. They don't need -- and shouldn't have! -- first-order representation in a low-level structured logger API.
What is an entry point? If I'm writing some structured logger implementation, I expect that I should need to provide precisely one method:
func (x *MyThing) Log(<set of key=val pairs>) error
Zap author here :) Logrus remains quite a bit more popular than zap. (Does anyone remember the epic tire fire when Sirupsen briefly changed the casing of his GitHub username?)
From my perspective, the problem with the KeyValuePairs API is that it’s inescapably slow and allocation-heavy. I’m glad that a logging API built into the standard library is usable in more performance-sensitive contexts. It’s easy enough to wrap it up in a KeyValuePairs-style API if you’d prefer that, and you’ll now have the ability to unwrap your logger and interop with other libraries that expect the standard slog interface.
But in the worst case, initializing one package causes the count in each of the remaining packages to update. Maybe they have a clever way of doing that. Or maybe this is very unlikely to happen in practice.
If you take n as packages (vertices) you can have n² dependencies (edges), each of which needs to be looked at once no matter what. So if you count that way n² is unavoidable in any dependency sorting. (Usually for such algorithms I see n = |v| + |e|, or max(|v|, |e|), or just left as |v| and |e|.)