
I highly doubt it; Windows is known for its stellar backward compatibility. Code signing means a lot of older software that is still in use would not be able to install or run. This is not going to happen (at least in the enterprise).


I have mad respect for Microsoft engineers for the compatibility work they've done over the past decades. It is indeed superb that even today you can take an old Win32 executable, run it, and it'll just work.

But I expect the new leadership will not put much value on this. I imagine it'd play out like this: first, "to enhance security and improve the UX", they'll start shoving a bunch of nagging dialogs in the user's face ("this app is not safe", etc.).

Then they'll add a flag to enable "unsafe mode" where the user can run unverified / unsigned code.

Then finally they'll just nuke the flag.

After all, requiring that the ecosystem's most "important" apps, such as their own office suite, Slack, Adobe, etc., grind out new digitally signed versions is not out of alignment with those companies' incentives and development cycles either.

In fact, I would not find it surprising if Microsoft actually approached these companies to participate in such a scheme and offered some kind of "discount" or reward (whether an app store discount or something else); these companies would only see it as strengthening their own moats against any possible competition.

And I'm talking about the consumer use case, not the corporate one.


You don't know how many ad-hoc legacy apps based on Java/C# are out there. Zillions. Breaking them would hand GNU/Linux a huge chunk of market share (Java and C# code from the early 00s/2010s will run anywhere), and MS would face a huge disaster and billions in losses.


Users value backwards compatibility. Users aren't the customers anymore and don't drive KPIs.


> Windows is known for its stellar backward compatibility.

was


They can just sandbox old applications, like they did with DOS ones.


Check the left menu for more details.

Looks like a cool project. We get a computer with VGA output, an FDD and RS232 ports.


- You can still learn from the best of the best: it's called books. Try to read at least one per month.

- Conferences: not my thing, but if you are new, you may learn a trick or two there. Don't just go there; try talking to the people. Approach a few senior-looking guys and ask them for advice.

- Confs can be quite expensive; a cheaper alternative is local user groups. You can try to find the closest ones via Meetup.

- ChatGPT (yup, it hallucinates, but it still has a reasonable answer for 90% of questions).

As for your situation:

- You are responsible for hiring. Try to hire people with more experience than you.

- "to the highest standards of the industry" isn't that perfectionism? Most production code has much much lower quality than what we see in open source.

- I think for the time being (the job market is really bad right now), just concentrate on delivering stuff. Learn if you have some time left, but delivering solutions should be your priority #1, in my opinion.


Blogs and small sites still show up when you look for obscure content like "RS232 DTR line". So far, whenever I've had a very specific question related to hardware or software, I could find an answer via Google.

I find that blogs and small sites don't stand a chance when you're looking for commercial products or trying to find a product review. There is too much SEO spam and fighting for the top positions.

But if you are doing something that cannot be commercialised easily, or is very niche, your blog will have an easy time on Google (programming is not a niche anymore).


Sometimes small sites show up. Sometimes nothing shows up even though there are matching small sites out there.


Yeah, if you're specific about it and know what to expect, it's usually workable. In any case, this blog post is an indicator of what's coming next.


My bet would be to write a small program using your target language. With ChatGPT this is extremely easy, as you will get a list of recommended libraries to use. Try to choose an app that requires some serious coding, e.g. creating an image board like 4chan is better than coding tic-tac-toe.

My recommendation is that the learning app should:

- Interact with an SQL database

- Expose an HTTP endpoint (REST or GraphQL)

- Use a logging framework

- Use concurrency

- Use a unit testing framework and a few integration tests

- Automate the build using GitHub Actions

In my opinion that's the fastest way to learn a language, or more broadly a platform (as every language now is a kind of platform, with its own set of libraries, conventions, idioms and unwritten rules).

PS. My list is probably not a good fit for a systems language like Rust or C++, but it should work for languages from Ruby & Python, through Java & C#, up to Go and Erlang. A minimal sketch of such a starter app follows below.
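To make this concrete, here is a minimal sketch of such a starter app in Python, assuming Flask plus the standard library; the endpoint, table and file names are illustrative placeholders, not a definitive design. It covers the database, HTTP and logging items; tests and CI would come on top.

    # Minimal sketch: SQL database + HTTP endpoints + logging.
    # Assumes Flask is installed (pip install flask); all names are illustrative.
    import logging
    import sqlite3

    from flask import Flask, jsonify, request

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("board")

    app = Flask(__name__)
    DB = "board.db"

    def init_db():
        # Create the posts table on first run.
        with sqlite3.connect(DB) as conn:
            conn.execute(
                "CREATE TABLE IF NOT EXISTS posts"
                " (id INTEGER PRIMARY KEY, body TEXT NOT NULL)"
            )

    @app.post("/posts")
    def create_post():
        # Insert a post; the with-block commits the transaction.
        body = request.get_json()["body"]
        with sqlite3.connect(DB) as conn:
            cur = conn.execute("INSERT INTO posts (body) VALUES (?)", (body,))
        log.info("created post %d", cur.lastrowid)
        return jsonify(id=cur.lastrowid), 201

    @app.get("/posts")
    def list_posts():
        with sqlite3.connect(DB) as conn:
            rows = conn.execute("SELECT id, body FROM posts").fetchall()
        return jsonify([{"id": r[0], "body": r[1]} for r in rows])

    if __name__ == "__main__":
        init_db()
        app.run()  # Flask's dev server is threaded, so requests run concurrently

From there, unit tests for the handlers, a couple of integration tests against a scratch database, and a GitHub Actions workflow that runs them on every push would round out the list.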


Regarding Scala: it depends on whether you already know a JVM-based language. If you are coming from Java you will have a head start: you can use Maven/Gradle instead of SBT, you will already know IntelliJ, you will know your way around managing JVM versions, and you will understand core concepts like jar files and class loading.

For me the most time-intensive part of learning a new language is learning the new libraries. I don't mean the standard library here, but rather things like the library for interacting with an SQL database, how mocking works in the new language, how concurrency works, and the web API framework.

Learning "just" a language can be done over the weekend. Learning to properly use its standard lib and the entire environment of libraries may take years.


The best way to learn a language is to build a project with it. Anything else is just exploring its features. The trickier issues are always in how features integrate.


Let's hope it won't be Oracle.

Ehm... jokes aside, I think a more reasonable way is to set up a foundation composed of the biggest players in tech: companies like Google, Meta, Microsoft, the Mozilla Foundation, the Linux Foundation, Apple and the EFF. The foundation would steer the further development of Chrome. That way Chrome would be owned by the community, just like e.g. the Linux kernel or standards like the C++ language spec.

If Chrome were bought by a private entity, that entity would probably start milking the current user base straight away. Expect ads in the bookmarks bar, more address bar spyware (e.g. sending all typed phrases to the cloud) and a paid extension web store.

The most used and most advanced browser we have today must stay open source. It is more than a program; it is part of the global internet infrastructure. We should not destroy it with a foolish political decision.


Oracle was the front-runner for buying TikTok last time they were under pressure to sell: https://www.bbc.com/news/technology-54148474

They were the first company I thought of.


I doubt Mozilla would like to be part of a foundation owning another browser.


This is not my domain, so my knowledge is limited, but I wonder if chip designers have some sort of standard library of ready-to-use components. Do you have to design e.g. an ALU every time you design a new CPU, or is there some standard component to use? I think having proven components that can be glued together at a higher level may be the key to productivity here.

Returning to LLMs: I think the problem here may be that there is simply not enough training material for an LLM. Verilog, compared to C, is a niche with little documentation and even less open source code. If open hardware were more popular, I think LLMs could learn to write better Verilog code. Maybe the key is to persuade hardware companies to share their closed source code to train LLMs, for the benefit of the industry?


There are component libraries, though they're usually much lower-level than an ALU. For example, Synopsys DesignWare:

https://www.synopsys.com/dw/buildingblock.php


Or learning through self-play. Chip design sounds like an area where (this would be hard!) a sufficiently powerful simulator and/or FPGA could allow reinforcement learning to work.

Current LLMs can’t do it, but the assumption that that’s what YC meant seems wildly premature.


The most common thing you see shared is something called IP, which does stand for intellectual property, but in this context you can think of it like buying ICs that you integrate into your design (i.e. you wire them up). You can also get Verilog, but that is usually used for verification rather than for taping out the peripheral, because the company you buy the IP from will tape out the design for a specific node in order to guarantee the specifications. Examples range from ARM cores to UART and SPI controllers, as well as pretty much anything you could buy as a standalone IC.


I am a bit concerned here. I wonder how much time will pass before someone decides to use it to hack a computer.


IIRC there have already been proof-of-concept attacks using MicroSD cards with modified microcontrollers: https://www.welivesecurity.com/2014/01/02/could-new-malware-...


This is likely an extremely rich attack vector if you can gain any reach through the SDIO interface.

That’s a big if… but because of the relative obscurity of the attack surface and requirements for unusual tools, this is probably largely unexplored territory for non-state actors.

It is very likely that the firmware and drivers for SDIO are at the very least insecure and likely rife with serious arbitrary-code-execution level bugs, manufacturer / letter agency back doors for special tools, and similar attack surfaces that will suddenly become accessible to anyone with a hundred dollars and the desire to dig in.

Ultimately, this will be good for device security, but the need for a specialized (but obtainable) tool to execute the attack means probably years of vulnerabilities in the wild, and won’t-fix for older devices.


I honestly can’t imagine why someone would downvote that lol.

SDIO is exactly the kind of interface that one would use for hidden backdoors, since you need a very special piece of hardware to deliver the payload.

No one will ever discover that there are undocumented features that can be accessed by a nonstandard SDIO device with just the right mis-timings… because the only thing ever going into that slot is a memory card that is incapable of producing that signal.

At least until now lol.


Maybe there were in the past; currently there is an entire industry out there to help you game the system:

- Cracking the Coding Interview.

- Elements of Programming Interviews in Java|Python|whatever...

- LeetCode & other sites with paid premium subscriptions...

- mock interview bootcamps...

It's no longer about skill, it's only about gaming the system.


There are physics textbooks and YouTube videos everywhere and yet we aren't all physics experts. Existence of knowledge and accessibility of information does not guarantee everyone can learn to do something and it especially doesn't guarantee that everyone can learn to do something well.

LLMs are another great example.


Typical table stakes prep work for a high paying job with a lot of competition.

I’m serious. It’s not even guaranteed you’ll get the job if you do the above, but everyone who passed the big tech interviews will acknowledge they fucking studied for it. What do you expect here?


But who is talking about big tech? Aren't we talking about most companies here? I have had some interviews, and every single one of them had a broken hiring process: silly reviewers who get hung up on things like you not using their favorite code autoformatter, for code that is one screen long, or the ghost-job thing, where they apparently did not actually want to hire anyone.


The ease with which interviews allow the unscrupulous to inflate their abilities has been well known for long enough to be metagamed and exploited for profit, and now the effects have become abundantly apparent. Imposters embedded across industries in this manner have reached a critical threshold where their fundamental lack of competency can no longer remain hidden.


Have you got the high paying big tech job then?

If you do the above prep work and pass the multiple interview loops, you're honestly pretty smart. The vast majority couldn't do it even with all the study possible, and even the smartest couldn't do it without any study at all. The bar is pretty high.


What do you mean “gaming the system”?

The companies ask that candidates learn how to solve algorithm problems and the candidates do it.

I would call it “everyone playing a game they agreed to play by the rules they agreed to play with.”

And I don’t know what you mean by “it’s no longer about skill.” It still takes a lot of skill to be able to solve hard algorithm problems, even if you took a course on how to solve them and practiced solving them for 6 months.

When you audition for an orchestra they give you the sheet music ahead of time. It doesn’t mean you have no skill if you practice first instead of just showing up to sight read the music.


Tech interview preparation mostly boils down to rote memorization, not really what I would call “developing a skill”. You just cram until you can pattern-match any kind of DSA or system design problem and apply the solution you memorized. Once you finally land the job, you’re free to forget everything. Then you begin to develop “real skill” on the job.


This

