> So the same thing run locally is run in the cloud
Who is preparing the Dockerfiles? Developers and system administrators / security people do not generally prioritize the same things. We do not use k8s for now (so I know very little about it), so this might not be relevant, but how do you prevent shipping insecure containers?
Generally developers. When running in a container, most of the attack surface is the app itself, and if it is compromised the damage is supposed to be limited to the container. There have been container escape exploits in the past, though. But you treat the container as the thing that you run and give resources to, and you don't trust it, just as if you were running a bare application. All of the principles of granting an application resources, such as least privilege, apply to containers too.
And since a container doesn't run multiple things or users in one space, something like an out-of-date vulnerable library can't be leveraged to gain root access to an entire host that is also running other sensitive things.
In Kubernetes, and Docker in general, one container should not be able to compromise another, or k8s itself. But there are other issues if an attacker can access a running container, such as now having network access to other services like databases. Then again, these are all things that can and should be locked down, just as when you provision hosts directly.
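To make "locked down" concrete, here's a minimal Kubernetes NetworkPolicy sketch (the namespace, labels, and port are all hypothetical) that only lets the app's front-end pods reach the database, so a compromised container elsewhere in the cluster has no network path to it:

    # Hypothetical example: deny all ingress to the database pods except
    # traffic from the front-end pods on the Postgres port.
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: db-allow-frontend-only
      namespace: prod
    spec:
      podSelector:
        matchLabels:
          app: database
      policyTypes:
        - Ingress
      ingress:
        - from:
            - podSelector:
                matchLabels:
                  app: frontend
          ports:
            - protocol: TCP
              port: 5432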
I always assumed that WhatsApp keeps logs of calls / messages (i.e. who texted whom, how many times, etc.); however, according to this document, it does not keep them. Moreover, it shares neither group information nor contacts. If this is really the case, it is quite a bit better than most people think. I just wonder, though, why they needed to keep the EU out of this latest ToS update, and what does WhatsApp share with Facebook?
If one part of the application needs to change frequently and/or is used by other applications, then it makes sense to split it out as a microservice. For the other parts, though, I think it might be better to stay a monolith, since once there is a network in the middle of your calls, things get messier.
Networks are unreliable even when it is your own local area network. In my career I saw unrelated network equipment sending RST packets to a load balancer with the application server's IP address, resulting in service-unavailable errors. It took us weeks to find that out, since it happened very rarely and there were many layers between the load balancer and the application servers.
A network brings slowness, connection errors, unknown results, etc., all of which should be handled carefully.
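To illustrate, a minimal Java sketch (the URL, timeouts, and backoff values are arbitrary choices, not recommendations) of treating every network call as fallible, with an explicit timeout and bounded retries:

    import java.io.IOException;
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.time.Duration;

    // Every call gets a connect timeout and a request timeout, and transient
    // failures (resets, timeouts, 5xx) are retried a bounded number of times.
    public final class ResilientCall {
        private static final HttpClient CLIENT = HttpClient.newBuilder()
                .connectTimeout(Duration.ofSeconds(2))
                .build();

        public static String fetchWithRetry(String url, int maxAttempts) throws Exception {
            HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                    .timeout(Duration.ofSeconds(5)) // bound the whole request, not just connect
                    .GET()
                    .build();
            Exception last = null;
            for (int attempt = 1; attempt <= maxAttempts; attempt++) {
                try {
                    HttpResponse<String> res =
                            CLIENT.send(request, HttpResponse.BodyHandlers.ofString());
                    if (res.statusCode() < 500) {
                        return res.body(); // success, or a 4xx not worth retrying
                    }
                    last = new IOException("server error: " + res.statusCode());
                } catch (IOException e) { // connection reset, timeout, etc.
                    last = e;
                }
                Thread.sleep(100L * attempt); // simple linear backoff between attempts
            }
            throw last;
        }
    }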
Not sure if rate of change truly matters. In my opinion, microservices make the most sense when dealing with multiple teams. It's much easier to update core services when everyone depends on them through the network, rather than requiring people to redeploy their stuff. If you're a single-team operation, it's not really a problem if all your stuff runs in a single process. It may even benefit you in terms of performance and scalability, to be honest.
Rate of change may matter when your monolith takes too long to deploy (e.g. a big WAR application with lots of initialization at startup), although rolling updates may help here. It will also be easier to roll back if things go wrong.
On the other hand, I agree with your point about multiple teams. I meant the same thing by "used by other applications".
How long are you talking? I can't think of a process I've ever worked on that took more than a few seconds to start. As you say, if you have any sort of rolling deployment mechanism, the start time of your application is somewhat irrelevant. Unless your application takes a massive amount of time to start, it seems like the wrong thing to optimise for. Surely you only deploy things a few times a day, if that, and hopefully most deployments don't require rollbacks.
Also, with current network speeds and memory availability, it's hard to imagine WAR size being a bottleneck in web development.
Depending on other services via a well defined network interface is good for decoupling teams. So it depends on the number of employees you have coding for the same platform, rather than on clients or anything like that.
> How long are you talking?
There were some Spring applications that took more than 5 minutes to start at my old job. Things have most probably improved now with lazy initialization of beans, and maybe the developers of those applications could have done better at the time. I don't know whether it is still the case now.
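For what it's worth, a minimal sketch of lazy bean initialization in Spring (ExpensiveService is a made-up stand-in for a slow-starting bean):

    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.context.annotation.Lazy;

    // @Lazy defers bean creation from application startup to first use.
    // Spring Boot 2.2+ can also apply this globally with
    // spring.main.lazy-initialization=true in application.properties.
    @Configuration
    public class SlowBeanConfig {

        static class ExpensiveService {
            ExpensiveService() {
                // imagine cache warming, classpath scanning, remote lookups...
            }
        }

        @Bean
        @Lazy // not constructed until something actually injects or requests it
        public ExpensiveService expensiveService() {
            return new ExpensiveService();
        }
    }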
> Also, with current network speeds and memory availability, it's hard to imagine WAR size being a bottleneck in web development.
Network speed was not the problem; decompression / expansion of the application was. There were WAR files about 100-150 MB in size. I also saw one that was 1.5 GB, but it was an exception. These may or may not be a concern depending on your CPU utilization, disk speed, memory, etc.
Most decisions depend on the environment. In some environments, as you said, deployment time is not an argument for microservices; in others it is.
If networking is not a problem, why not deploy it already decompressed? I'm assuming the 5 minutes included decompressing it? It seems a bit odd that it takes so much time just to allocate some objects in memory and establish a few connections.
Anyway, can you share in which kind of environment even a 5-minute deployment would be a problem? At most places I've worked, it took a lot more than 5 minutes to actually implement the change going to production, so deployment times were pretty much irrelevant overall.
OP here. The move is definitely needed in our case as it's to the point where IntelliJ won't even load anymore. But I had to laugh when improved testability was brought up as one of the benefits.
Are mRNA vaccines safer than traditional ones? In the long run, which one will have fewer side effects? As far as I know, both the US and Europe are only going with mRNA vaccines for COVID. Is that intentional, or just because no traditional vaccine has completed Phase III trials yet?
Professor Shane Crotty, PhD answers a series of COVID-19 vaccine questions, including: What are the chances of long-term side effects? How safe is RNA vaccine technology (i.e. the Pfizer / BioNTech and Moderna vaccines)? How long does mRNA from a vaccine stay in our cells? What else goes into vaccines? How long does immunity last? Why are T cells so important? Why does Pfizer's vaccine need to stay SO cold?
I watched the video, and he says that, based on the vaccine literature, in the past any important vaccine safety signal was clear within two months, meaning two months is enough to conclude whether a vaccine is safe or not. However, I think his answer is a little superficial. Isn't most of the vaccine literature based on traditional vaccines? Yes, I know that mRNA is not that new, but is the literature on long-term side effects of mRNA vaccines extensive?
Seems like people prefer to downvote rather than try to give a compelling answer to the question "is the literature on long term side effects of mRNA vaccines extensive".
If the evidence on this question is overwhelming, surely someone has given a lecture on it somewhere.
> Seems like people prefer to downvote rather than try to give a compelling answer to the question "is the literature on long term side effects of mRNA vaccines extensive".
I think it might be because people are confusing asking questions with being against something. I am neither against vaccines nor against mRNA vaccines in particular. I just genuinely want to learn the details. I think it is important to know the upsides and downsides of each vaccine out there, since there are several alternatives.
I have noticed this sleight of hand in a few different mRNA explainers. The two-month window that has usually been sufficient to establish the safety of traditional vaccines isn't really applicable here, unless we have some reason to believe this new type of vaccine will present side effects and complications in the same way over the long term. But we don't have a good reason to believe anything about them in the long term, because it has never been studied.
The mechanism of action is actually a good reason to believe lots of things about them long term.
The mRNA is delivered, causes cells to express proteins and is naturally broken down (it's normal for the body to sweep up mRNAs over time).
Once the mRNA is gone, the long-term effect is the immunity, and there's no reason to expect that to be different from immunity to similar antigens, which is something that is well studied.
So the immunity side of it is not a radical departure (antigens are presented to the body) and the delivery/mechanism is quite clear and has a natural brake (mRNA degradation).
The latter. Many countries have bought a mix of both; the advantage of mRNA tech was always its adaptability, hence why these are coming out first. The Oxford vaccine is reportedly likely to be approved for use in the UK in less than two weeks' time, for roll-out in the first week of January.
GitHub's search is inferior to Google's site search for GitHub (i.e. keyword site:github.com).
I don't know why, but GitHub search is almost useless if the thing you are searching for is not an exact match. For example, I tried just one random search.
Search keyword: HashMapBoxing
https://github.com/google/guava/search?q=HashMapBoxing
I don't remember the exact project or where I read it, but one project added a delay (using a sleep function) to deprecated methods in order to discourage people from using them. With every release, they increased the sleep time, so at some point it became impossible to use the method without noticing. Although I don't know if it is possible to apply this only to new code calling the deprecated function while old code keeps the non-delayed version, I thought it was a nice trick.
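As a rough sketch of the trick (all names made up), it could look something like this in Java:

    // The deprecated method delegates to its replacement but sleeps first;
    // the delay gets bumped each release until callers can't miss it.
    public final class LegacyApi {

        /** @deprecated Use {@link #computeFast()}; this now sleeps on purpose. */
        @Deprecated
        public static int computeSlow() {
            try {
                Thread.sleep(500); // e.g. 100 ms last release, 500 ms this release
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            return computeFast();
        }

        public static int computeFast() {
            return 42;
        }
    }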
On a slight tangent, this reminds me of the story of the game developer who would allocate a few tens or hundreds of megs of RAM early on in the project and not touch it. He knew that as the project neared completion, they would get squeezed on available RAM. Then he could just go and delete a single line of code and, bam, tons of free RAM.
This seems like a slight variant of SDD: dickhead-driven-development. Clever, but your coworkers still hate your guts.
There's a real reason to do something similar if you're developing for memory-constrained devices with an API someone else will use.
If your API functions return objects that API consumers will use, you want to make sure those objects don't suddenly grow in size, since that would eat into the memory the consumers expect to have. So you'd pad those objects with unused fields, and then if you need to add a new field later, you can just use that padding area.
Something similar applies if you're sending objects over the wire, since you may not want to increase the size of the message later.
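A minimal sketch of the wire-format variant in Java (the message layout is invented for illustration): a fixed-size message where trailing bytes are reserved so new fields can be added later without growing the message.

    import java.nio.ByteBuffer;

    // Hypothetical fixed 32-byte wire format: fields use the first 14 bytes,
    // the remaining 18 stay zeroed as reserved padding for future fields.
    public final class StatusMessage {
        public static final int SIZE = 32;

        public static ByteBuffer encode(int deviceId, long timestamp, short flags) {
            ByteBuffer buf = ByteBuffer.allocate(SIZE); // zero-filled by default
            buf.putInt(deviceId);   // bytes 0-3
            buf.putLong(timestamp); // bytes 4-11
            buf.putShort(flags);    // bytes 12-13
            // bytes 14-31: reserved; a future field can claim part of this
            // area without changing SIZE or breaking existing consumers
            buf.rewind();
            return buf;
        }
    }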
I knew someone who did this with humans at a bigcorp.
He’d politick to transfer in new teams for the sole purpose of building a supply of cannon fodder. Staff up in good times, and purge fodder when layoff targets came to protect “his people”.
Back in 2000 we were mandated, as part of a big push, to eliminate all shared Excel documents and turn them into real database-driven products. There was this one department with a huge Excel database that was bringing the network to its knees. Around that time I had discovered that you could create a function in Excel whose name was a null character (alt-255). We had used that for playing pranks on one another. Someone had the bright idea to put a function into their code that invoked a slowly increasing pause. That function was sprinkled throughout their code, and no one knew, because you can't see a null character.
A few months later that department was practically begging us to convert their excel document into a database project.
I read this, and thought to myself that a hacker would love to find that in code through analysis tools. They could then replace the delay with malicious code, and no one would be the wiser for quite some time.
Doing stuff like this seems creative and awesome at the time, but it breeds vulnerabilities something fierce. It also creates a nightmare for maintenance.
I would suggest a different approach of figuring out how much that Excel doc was costing the company every month, how much the company would save if the doc was converted to a real data service and Web front end, and then present the comparison at a meeting with management from that department - give them a chance to sign on before you take it to more senior management.
In this scenario I would argue that deleting a function abruptly, without verifying the impact, is significantly worse than gradually adding a delay. Usually, when function deprecations occur, impacted dev teams are notified on a shared distro about what's coming in future releases. Library maintainers at large companies typically don't have the bandwidth to investigate each project that uses their library to determine whether deleting a function would detrimentally impact a production environment. What a deletion does do is block the downstream dev team in this hypothetical from deploying to prod while they investigate why their builds are suddenly failing (or crashing in prod!) and refactor an alternative solution. Do you really want to work in an environment where other devs feel empowered to break your builds and sprints ad hoc?
I assume they are welcome to use the old version, and this comes after an ignored deprecation warning. If not, then this deletion should come after a forced migration by whatever library team enforces a one version rule.
That broken build won't be pushed to production and affect users, but invisibly slowing down a production service and hoping someone notices via monitoring will probably pass tests, build fine and be pushed to production where it will affect end users. I absolutely want to work in an environment where the build fails rather than people play childish tricks to punish my users because I used a deprecated function, and hope I notice.
While an interesting tactic, I don't think it's a great idea as it may mean a poorer experience for the customer rather than just the development team.
100% evil and pretentious. Not every dev team has the free time to go rewriting everything to the whimsy of library devs that can't help themselves from incessantly shuffling everything around every week. This is how you encourage people to never update and keep security vulnerabilities in the wild.
It is especially evil since on many systems the old methods are more performant, because they were written for old, limited hardware (where cycles and memory really mattered).
So by making those slower... you're really digging a hole.
There are of course cases where the opposite is true too, like all things in life.
We're talking about transitions where the library authors deliberately want (and have good reasons) to migrate a whole organization over. It's not they who are pretentious in your scenario ;)
I think that really depends. If there are performance, compliance, or maintenance requirements behind the new versions that devs are updating to, then it could be considered reasonable. You wouldn't be seeing these delays if you kept using old versions of the library. Freeze your dependencies if you don't want them to change.
This is a failure of the dev team not evaluating that libraries are stable enough for their organization. If you can't keep up with the iterations of the libraries you're using then you shouldn't be using those libraries in the first place.
AFAIK some of the breakthroughs in the delivery mechanisms are relatively new. Some of the companies now making a covid vaccine were talking about running a trial for a flu vaccine in the next few years, but obviously that was far less urgent and all the steps (getting permission, getting participants, actually running the study without "unlimited" money, ...) would take longer.
(mRNA would be a good candidate for flu vaccines because it is fast to produce, which in theory reduces the amount of guessing needed to make the "right" flu vaccine for the year; but of course (a) flu vaccines already exist, even if they're sometimes off the mark, and (b) a flu vaccine isn't given as high a priority overall)
The previous submission was just 3 months ago and points to a newer version of the paper. Obviously I should have searched by title before submitting.
Thanks Dang