I’m beginning to think maybe I’m the only one who read this whole thing. The firmware storage isn’t the security-through-obscurity problem being talked about here. The hardcoded TLS private key definitely is, though. And yes, it deserves shaming… terrible practice leads to terrible outcomes. Nobody is surprised that this is coming from TP-Link at this point, though.
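For anyone wondering why a hardcoded key is such a big deal: once it’s baked into the firmware image, anyone with a dump can pull it out with a trivial scan. This is an illustrative sketch only — the firmware bytes and the (truncated, fake) key below are made up, not from the actual device:

```python
import re

# PEM-encoded private keys have unmistakable text markers, so a plain
# regex over the raw firmware bytes is enough to find them.
PEM_RE = re.compile(
    rb"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----.*?"
    rb"-----END (?:RSA |EC )?PRIVATE KEY-----",
    re.DOTALL,
)

def find_embedded_keys(blob: bytes) -> list[bytes]:
    """Return every PEM-encoded private key found inside a binary blob."""
    return PEM_RE.findall(blob)

# Stand-in for a real firmware dump: binary padding around a fake key.
firmware = (
    b"\x00\x01binary-stuff\x02"
    b"-----BEGIN RSA PRIVATE KEY-----\nMIIEfake...\n"
    b"-----END RSA PRIVATE KEY-----"
    b"\x03more-binary-stuff"
)

keys = find_embedded_keys(firmware)
print(f"found {len(keys)} embedded private key(s)")
```

Tools like binwalk automate exactly this kind of signature scan, which is why “it’s hidden in the firmware” buys you nothing.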
I think maybe you’re reading this wrong. Reverse-engineering blog posts like this are just a fun and instructive way of telling the story of how someone did a thing. Having written and read a bunch of these in the past myself, I found this one to be a great read!
Edit: just want to add, the “how I got the firmware” part of this is also the least interesting part of this particular story.
It’s notable and interesting that this research is coming out of the University of Cambridge. Didn’t Cambridge Analytica spin out of academia there too?
Question for folks here who may be familiar: there seems to be a strong thread of research there (and, in CA’s case, commercial application of that research) around social media manipulation and propaganda in the digital age.
Is there any six-degrees-type connection between the people doing this research and those involved with the roots of CA? Not as in the same bad actors (which, tbh, yes, I consider CA to have been), but as in perhaps the same department and/or professors, etc.
Just want to say: Thanks! I was waiting for this article.
Thanks to Ernie Smith, to tedium.co, to HN, to community.
This is the kind of curious and intelligent response to FUD that I want to find whenever major news outlets start an insane new spin-cycle (as increasingly is the way of things in the world).
I’ll let the HN comment thread spin out (as it must), but amidst that, I just want to say that this right here is the reason I still keep coming back to this place and read all of it. So, thanks!
This is really, well... douchey. Emptying anything I have in Coinbase asap (and yes, I read the whole thing).
I wonder how likely it is for CEO roles to get taken over by a sophisticated LLM at this point. I’d wager we’d see a 20x increase in value. I use and value LLMs in my coding and research workflows already, but firing people for careful, slow adoption speaks very poorly to individual and company maturity.
Yes, this! The observation that being specific rather than general about the problems you want to solve makes for a better startup plan is true for all startups ever, not just ones that use LLMs to solve them. Anecdotal/personal startup experience supports this strongly, and I read enough on here to know that I am not alone…
What's the balance between being specific in a way that's positive and allows you to solve good problems, and not getting pigeonholed and unable to pivot? I wonder if companies that pivot are the norm, or if you just hear of the most popular cases.
Really valid points. I agree with the bits about “expertise in getting the computer to do what you want” being the way of the future, but he also makes a strong case for people having deep domain knowledge (à la his colleague with extensive art history knowledge being better at Midjourney than him), right after saying it’s okay to tell people to just let the LLM write code for you and learn to code that way. I am having a hard time with the contradictions; maybe it’s me. Not meaning to rag on Dr. Ng, just to further the conversation. (Which is super interesting to me.)
EDIT: rereading, I realize that what resonates most is that we agree about the antithetical aspects of the talk. I think this is the crux of the issue.