I've been thinking about this for a minute, and I think if an American were to say "why", and take only the most open vowel sound from that word and put it between "k" and "m", you get a pretty decent Australian pronunciation. I am an Australian so I could be entirely wrong about how one pronounces "why".
```
C++, Linux: write an audio processing loop for ALSA
reading audio input, processing it, and then outputting
audio on ALSA devices. Include code to open and close
the ALSA devices. Wrap the code up in a class. Use
CamelCase naming for C++ methods.
Skip the explanations.
```
Run it through grok:
https://grok.com/
When I ACTUALLY wrote that code the first time, it took me about two weeks to get it right (ALSA has a horrifying documentation set, with inadequate sample code).
Typically, I'll edit code like this from top to bottom to get it to conform to my preferred coding idioms. And I will, of course, submit the code to the same sort of review that I would give my own first-cut code. The way initialization parameters are passed in needs work (a follow-on prompt would probably fix that). This is not a fire-and-forget sort of activity. Hard to say whether that code is right or not; but even if it's not, it would have saved me at least 12 days of effort.
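For readers who can't run the prompt themselves, the class shape such a prompt tends to produce looks roughly like this. This is a hypothetical sketch, not Grok's actual output: the blocking `snd_pcm_open`/`snd_pcm_readi`/`snd_pcm_writei`/`snd_pcm_close` calls from `<alsa/asoundlib.h>` are left as comments so the sketch stays self-contained, and only the per-buffer DSP step (here, a simple gain with clipping) is real code.

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Hypothetical sketch of the class the prompt asks for.
// Real ALSA calls are indicated in comments.
class AlsaLoop {
public:
    bool OpenDevices(const std::string& captureName,
                     const std::string& playbackName) {
        // snd_pcm_open(&capture_, captureName.c_str(),
        //              SND_PCM_STREAM_CAPTURE, 0);
        // snd_pcm_open(&playback_, playbackName.c_str(),
        //              SND_PCM_STREAM_PLAYBACK, 0);
        open_ = true;
        return open_;
    }
    void CloseDevices() {
        // snd_pcm_close(capture_); snd_pcm_close(playback_);
        open_ = false;
    }
    // The DSP step is plain buffer math: apply a gain to
    // signed 16-bit frames, clipping to the int16 range.
    void Process(std::vector<int16_t>& frames, float gain) {
        for (auto& s : frames) {
            float v = s * gain;
            if (v > 32767.f) v = 32767.f;
            if (v < -32768.f) v = -32768.f;
            s = static_cast<int16_t>(v);
        }
    }
    void Run() {
        // while (running_) {
        //   snd_pcm_readi(capture_, buf.data(), framesPerPeriod);
        //   Process(buf, gain_);
        //   snd_pcm_writei(playback_, buf.data(), framesPerPeriod);
        // }
    }
private:
    bool open_ = false;
};
```

The read-process-write loop itself is the easy part; the two weeks mentioned above go into the device setup hidden behind those comments.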
Why did I choose that prompt? Because I have learned through use that AIs do well with these sorts of coding tasks. I'm still learning, and making new discoveries every day. Today's discovery: it is SO easy to implement an SQLite database in C++ using an AI when you go at it the right way!
Writing a prompt like that relies heavily on your mental model of ALSA. For example, I believe the macOS audio stack is node-based, like PipeWire. For someone who is knowledgeable about the domain, it's easy enough to get some base output to review and iterate upon, especially if there was enough training data or you constrain the output with context. So there's no actual time saving, because you have to take into account the time you spent learning about the domain.
That is why some people don't find AI that essential: if you have the knowledge, you already know how to find the specific part of the documentation to refresh your memory of the semantics, and the time saved is minuscule.
```
Write an audio processing loop for pipewire. Wrap the code up in a
C++ class. Read audio data, process it and output through an output
port. Skip the explanations. Use CamelCase names for methods.
Bundle all the configuration options up into a single
structure.
```
Run it through grok. I'd actually use Claude Sonnet 4 via VS Code Copilot; Grok is used here so that people who do not have access to a coding AI can see what they would get if they did.
I'd use that code as a starting point despite having zero knowledge of pipewire. And probably fill in other bits using AI as the need arises. "Read the audio data, process it, output it" is hardly deep domain knowledge.
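The "bundle all the configuration options up into a single structure" line in that prompt does real work: it gives the model one obvious place to put every knob, which addresses the initialization-parameter weakness called out earlier in the thread. A hypothetical illustration of the idiom; the field names are mine and not PipeWire's API, only the struct-of-options pattern is the point:

```cpp
#include <cstdint>
#include <string>

// Hypothetical config bundle; field names are illustrative,
// not part of the PipeWire API.
struct AudioLoopConfig {
    std::string nodeName = "AudioProcessor";
    uint32_t sampleRate = 48000;
    uint32_t channels = 2;
    uint32_t quantum = 256;  // frames per processing cycle
};

class AudioLoop {
public:
    explicit AudioLoop(const AudioLoopConfig& cfg) : cfg_(cfg) {}
    // Buffer size per cycle, assuming 32-bit float samples
    // (the common PipeWire stream format).
    uint32_t BytesPerCycle() const {
        return cfg_.quantum * cfg_.channels
             * static_cast<uint32_t>(sizeof(float));
    }
private:
    AudioLoopConfig cfg_;
};
```

A follow-on prompt can then tweak one field at a time ("make the quantum configurable at runtime") without restructuring the constructor.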
Can you set up Entra authentication with pgAdmin? I'm more of an MS SQL person so I don't know, but if not, adding it would be a huge security improvement.
I would be curious what context window size would be expected when generating a ballpark 20 tokens per second using DeepSeek-R1 Q4 on this hardware?
The evidence is mixed, but some studies (e.g. Jaeggi) did find transfer effects from n-back training to fluid intelligence.
It only takes 40 minutes a day for 8 weeks to test it out, much less of a time commitment than learning a new language.
Having tried it, I wouldn't be surprised if the mixed results were due to improper adherence and misunderstanding of how n-back works by some study participants. In other words, I think it's possible that results would be less mixed for someone who is already starting from a point of solid intelligence and who is driven enough to put in the hard, focused work to get to higher n-back levels.
I see in another comment thread you mentioned downloading the VM iso, presumably from a central source. Your comment in this thread didn't mention that so perhaps this answer (incorrectly) assumes the VM you are talking about was locally maintained/created?
Yeah! We are planning to add blocks, steals, and rebounds in the coming months. I think fouls are still a reach goal right now (hard even for humans), but maybe one day.
Re: capturing the whole court, I think we are considering supporting panning of the camera (in addition to syncing). We focus pretty heavily on making sure things work with just a normal phone so it's more accessible to everyone. We are definitely starting bottom up but maybe one day we will get to more high tech hardware.
In relation to user experience on first try... I just wanted to test it out for 30 seconds to see if it's worth keeping. I haven't tested now; I'll have to wait until I get an hour free later because there are a few roadblocks.
- You require email and password registration and to click a verification link. Not too bad
- When first launched the app asks you to allow mic permissions by clicking settings. However in Hooper settings on Android there is no mic permissions option. Bad.
- You require 5 GB free to use the product at all. Understandable given storage is required for video, but how about reducing that so users can do like a 30-second test for first use? That way they don't have to spend an hour going through their phone's media to see what they want to keep and what they can delete.
You'd be amazed at how many people (like me lol) have almost full storage on their device most of the time.
Edit: oh....it's not a setting for mic permissions, it's a general permissions setting and if you touch that you can add additional permissions. Based on the text of the popup I hadn't guessed that was the right place to look and I hadn't realised it was possible to add permissions by touching that area of the settings menu.
Wow, that's terrible. I'm curious as to whether your contract with them allows for meaningful compensation in an event like this, or is it just limited to the price of the software?