
OK, your comment made me double-check our benchmarking script in Go. Can confirm we didn't instantiate a new client with each request.

For transparency, here's the full Golang benchmarking code and our results, if you want to replicate it: https://gist.github.com/KelseyDH/c5cec31519f4420e195114dc9c8...

We shared the code with the TigerBeetle team (who were very nice and responsive, btw), and they didn't raise any issues with the script we wrote against their TigerBeetle client. They did have many comments about the real-world performance of PostgreSQL in comparison, which is fair.



Thanks for the code and clarification. I'm surprised the TB team didn't pick it up, but your individual transfer test is a pretty poor representation. All you are testing there is how many batches you can complete per second, giving the client no time to actually batch the transfers. This is because when you call CreateTransfers in Go, the call blocks synchronously.

For example, it is as if you created an HTTP server that only allows one concurrent request, or had a queue where only one worker will ever do work. Is that your workload? Because I'm not sure I know of many workloads that are completely synchronous with only one worker.
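
For context, a minimal sketch of the synchronous pattern I mean (assuming your benchmark loops over transfers one at a time, with the same `client` and `Transfer` as in your gist):

  // One transfer per request, each call blocking before the next
  // begins: throughput is bounded by round-trip latency, not by the
  // client's ability to batch.
  for _, transfer := range transfers {
    res, err := client.CreateTransfers([]Transfer{transfer})
    if err != nil {
      log.Fatalf("request failed: %s", err)
    }
    for _, r := range res {
      if r.Result != 0 {
        log.Printf("Error creating transfer %d: %s", r.Index, r.Result)
      }
    }
  }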

To get a better representation for individual_transfers, I would use a sync.WaitGroup:

  // Needs "fmt", "log", and "sync" imported; assumes `client` and
  // `transfers` are set up as in the original benchmark script.
  var wg sync.WaitGroup
  var mu sync.Mutex
  completedCount := 0

  for i := 0; i < len(transfers); i++ {
    wg.Add(1)
    go func(index int, transfer Transfer) {
      defer wg.Done()

      // Each call blocks only this goroutine, so the client is free
      // to coalesce concurrent requests into larger batches.
      res, err := client.CreateTransfers([]Transfer{transfer})
      if err != nil {
        log.Printf("Request for transfer %d failed: %s", index, err)
        return
      }
      for _, r := range res {
        if r.Result != 0 {
          log.Printf("Error creating transfer %d: %s", r.Index, r.Result)
        }
      }

      mu.Lock()
      completedCount++
      if completedCount%100 == 0 {
        fmt.Printf("%d\n", completedCount)
      }
      mu.Unlock()
    }(i, transfers[i])
  }

  wg.Wait()
  fmt.Printf("All %d transfers completed\n", len(transfers))
This will actually allow the client to batch the requests internally and is more representative of the workloads you would get. Note that the above is not the same as doing the batching manually yourself: you could call CreateTransfers on the client concurrently from multiple call sites, and it would still auto-batch them.
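
For example (a sketch; `paymentTransfer` and `payoutTransfer` are hypothetical, standing in for transfers built at two unrelated call sites), both of these submissions can end up in the same internal batch:

  // Hypothetical call sites, e.g. a payments handler and a payouts
  // worker, each submitting independently. The client can still
  // coalesce their in-flight requests into a single batch.
  go func() {
    if _, err := client.CreateTransfers([]Transfer{paymentTransfer}); err != nil {
      log.Printf("payment submit failed: %s", err)
    }
  }()
  go func() {
    if _, err := client.CreateTransfers([]Transfer{payoutTransfer}); err != nil {
      log.Printf("payout submit failed: %s", err)
    }
  }()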


Appreciate your kind words, Kelsey!

I searched the recent history of our community Slack but it seems it may have been an older conversation.

We typically do code review work only for our customers, so I’m not sure if there was some misunderstanding.

Perhaps the assumption was that, because we didn’t say anything when you pasted the code, we must have reviewed it?

Per my other comment, your benchmarking environment is also a factor. For example, were you running on EBS?

These are all things that our team would typically work with you on to accelerate you, so that you get it right the first time!


Yeah, it was back in February in your community Slack. I did receive a fairly thorough response from you and others about it. However, there were no technical critiques of the Go benchmarking code then, just comments on how our PostgreSQL comparison would fall short in real OLTP workloads (which is fair).


Yes, thanks!

I don’t think we reviewed your Go benchmarking code at the time, and the fact that there were no technical critiques probably should not have been taken as explicit sign-off.

IIRC we were more concerned about the deeper conceptual misunderstanding: that one could “roll your own” TB over PG with safety/performance parity, and that this would somehow be better than just using open source TB. Hence the discussion focused on that.



