I remember being amazed when I learnt about summing arithmetic series.
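
For anyone who hasn't seen it, the trick is pairing the first and last terms, so 1 + 2 + ... + n = n(n + 1)/2. A one-line sanity check in Python:

    n = 100
    # Gauss's pairing formula vs. the brute-force sum; both give 5050
    assert n * (n + 1) // 2 == sum(range(1, n + 1)) == 5050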


They took a multiplier of 5x (4 indirect deaths for every direct death) and stated that this was conservative given studies of previous conflicts.


"Valuable Humans in Transit and Other Stories" by Qntm has some good (harrowing) stories about human uploads too.


There is also the Earthquake Network (EQN) app [1], which works on very similar principles to Google’s system: phones monitor their MEMS accelerometers while left charging with the screen off, and when enough neighbouring phones detect vibrations simultaneously, an earthquake is declared and nearby apps are alerted. It’s been running since 2012.

[1] https://sismo.app
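
A toy sketch of that kind of quorum detection (my own illustration with made-up thresholds, not EQN’s actual algorithm): bucket the vibration reports by coarse location and time, and declare a candidate event when enough phones in one bucket shake at once.

    from collections import Counter

    def detect(reports, cell_deg=0.25, window_s=10, quorum=20):
        # reports: iterable of (lat, lon, unix_time) vibration pings.
        # Bucket pings into ~25 km grid cells and 10 s windows; enough
        # pings landing in the same bucket = candidate earthquake.
        buckets = Counter(
            (round(lat / cell_deg), round(lon / cell_deg), int(t // window_s))
            for lat, lon, t in reports
        )
        return [cell for cell, n in buckets.items() if n >= quorum]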


The USGS created a system to do exactly this about 15 years ago [1]. I’m not sure whether they’re still running it, but at the EMSC we’ve been running a similar system for many years to highlight earthquakes important to the public and to improve our messaging. Twitter doesn’t give access to geotags anymore, but we do manage to roughly estimate an earthquake’s location by analysing the tweets. Estimating magnitude is much more difficult. Naturally there are some false positives, but it works well overall.

[1] https://www.usgs.gov/media/audio/shaking-and-tweeting-usgs-t...
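
For flavour, the core trigger in these systems is essentially a spike detector on the rate of tweets containing a keyword like "earthquake". A toy version (my own sketch with invented thresholds; both the USGS system and ours are more sophisticated than this):

    from collections import deque

    def make_trigger(history_min=60, factor=10, floor=20):
        counts = deque(maxlen=history_min)  # tweets/minute over the last hour
        def update(tweets_this_minute):
            baseline = sum(counts) / len(counts) if counts else 0.0
            counts.append(tweets_this_minute)
            # fire on a large jump over the trailing baseline,
            # with an absolute floor to suppress quiet-period noise
            return tweets_this_minute >= max(floor, factor * baseline)
        return update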


We actually use the Twitter detections to launch analyses of the seismic data, in order to get confirmed results for events that haven’t been reported yet [1], but there are some statistics for the Twitter detections in the supplementary material of that article [2]. Basically, in 2016-2017 (wow, so long ago), we detected 893 earthquakes via Twitter, with a median delay of 67 s and a median separation from the published epicentre of 94 km. Note that estimating earthquake epicentres is nontrivial anyway, so for comparison, 10 km accuracy would often be considered OK. So the Twitter, I mean X, method isn’t optimal, but it gets you down to the right region. Partly that’s because geocoding the tweets is inaccurate, and partly it’s because people live clumped together in cities rather than spread smoothly over the surface of the earth.

[1] https://www.science.org/doi/10.1126/sciadv.aau9824
[2] https://www.science.org/action/downloadSupplement?doi=10.112...
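
To make the 94 km figure concrete: the separation is just the great-circle distance between the tweet-derived epicentre and the published one. A sketch (illustrative only, with dummy coordinates; not the paper’s actual code):

    from math import radians, sin, cos, asin, sqrt
    from statistics import median

    def haversine_km(p, q):
        # great-circle distance between two (lat, lon) points, in km
        (lat1, lon1), (lat2, lon2) = p, q
        dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
        h = sin(dlat / 2)**2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2)**2
        return 2 * 6371 * asin(sqrt(h))

    # (tweet-derived epicentre, published epicentre) pairs -- dummy values
    pairs = [((34.05, -118.25), (34.60, -117.80)),
             ((38.00, 23.70), (38.30, 22.90))]
    print(median(haversine_km(a, b) for a, b in pairs))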


I have used Thunderbird for years and my work account has to handle large numbers of emails. In general it is fast for me on both Windows and Linux. I have had some issues over the years but they normally turned out to be due to corrupt mail folders.


Can anyone explain the maths that he did? My calculations give different results. Isn't there only a 1 in 2588 chance of a student 'guessing' 19 'answers' out of 45 where there are 5 options for each answer, whereas the article states that it is 1 in 100? (Actually he writes 1:100, which is odds, so it is 1 in 101?) Don't we just use the binomial distribution, so: Prob = 45!/(19!26!) * (1/5)^19 * (1 - 1/5)^26? It's really bugging me that I can't get the maths to work.
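
(For the record, a quick stdlib check of that number - the exact binomial point probability of 19 matches out of 45 with p = 1/5:)

    from math import comb

    p = comb(45, 19) * (1/5)**19 * (4/5)**26
    print(p, 1 / p)  # ~3.9e-4, i.e. roughly 1 in 2588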


Actually, I can't seem to get the maths to work. Isn't it just the binomial distribution? Each question has 5 options, so the probability of 'success' is 1/5, and the chance of getting 19 questions 'specifically wrong' out of the 45 planted questions is just (1/5)^19 * (1 - 1/5)^26 * 45!/(19!26!) = 1 in 2588, but in the article it is 1:100. What am I doing wrong?


Here's my best guess: 19 is the cutoff point for a binomial test [1] where the probability of at least that many answers matching those in the honeypot test goes below 0.01. But this holds only if you assume p = 0.25.

Why would you use 0.25 instead of 0.2? I guess it makes sense if you only look at wrong answers - that is, you wouldn't be asking "what's the probability of your answers matching those on the fake test" but rather "what's the probability of your wrong answers being wrong because you used the fake test". Each question has 4 wrong options out of 5, so if you're only looking at wrong answers, the probability of a match is 1 in 4 instead of 1 in 5.

[1] At least, according to this calculator: https://www.socscistatistics.com/tests/binomial/default2.asp...
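
(A stdlib check backs this reading up: with p = 0.25, the upper-tail probability dips below 0.01 exactly at 19 matches, so 19 is the natural cutoff.)

    from math import comb

    def tail(n, k, p):
        # P(X >= k) for X ~ Binomial(n, p)
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

    print(tail(45, 19, 0.25))  # ~0.0085 -> below 0.01
    print(tail(45, 18, 0.25))  # ~0.019  -> still above 0.01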


This feels harsh, because if the tests were similar but with small differences, then getting the right wrong answers seems likely to happen anyway. Also, he went the extra step and outright failed the students, but I'm not sure why that extra step was justified; surely the cheaters were automatically going to get very poor marks.


It was an online course and the exam was online too.

