I'd expect people who are affected to be more likely to vote "no", out of a desire not to use (or appear to use) the disaster to their advantage. They want a level playing field, and to them that means not getting special treatment because of outside circumstances.
I'd expect a lot of people not affected to vote "yes" out of a desire not to gain (or appear to gain) at the expense of others suffering a disaster. They want a level playing field, and to them that means not having the competition disadvantaged because of outside circumstances.
It's the same reason Inigo Montoya let the Man in Black rest after the latter's climb up the Cliffs of Insanity, even though the Man in Black was willing to fight as soon as he got to the top.
What cheap lies? I'm well aware of how old that design is.
If copying is wrong, it is wrong regardless of how long ago the original was made. Or is there some magical cut-off date after which copying suddenly becomes OK? Why 14 years? Why not 13, or 10, or 50? It strikes me as pretty arbitrary. For a company that goes all out in accusing others of copying, I think they should be above all that and come up with entirely original designs. Why take a 30-year-old tape recorder and mimic it; is that really the mark of originality that Apple stands for? It seems quite hypocritical to me.
It’s most certainly not hypocritical to believe that there should be time-limited monopolies on designs while also copying designs for which that monopoly has run out. Simple as that.
Don’t argue over ridiculous stuff like that. There is no need for these cheap polemics.
That's never been the philosophical basis of copyright. Copyright is a limited, artificial monopoly designed to encourage creation. It's not that copying is "bad", it's that limiting copying for a short time might encourage people to create new works.
Thomas Jefferson articulates this reasoning, talking about patents:
>If nature has made any one thing less susceptible than all others of exclusive property, it is the action of the thinking power called an idea, which an individual may exclusively possess as long as he keeps it to himself; but the moment it is divulged, it forces itself into the possession of every one, and the receiver cannot dispossess himself of it. Its peculiar character, too, is that no one possesses the less, because every other possesses the whole of it. He who receives an idea from me, receives instruction himself without lessening mine; as he who lights his taper at mine, receives light without darkening me. That ideas should freely spread from one to another over the globe, for the moral and mutual instruction of man, and improvement of his condition, seems to have been peculiarly and benevolently designed by nature, when she made them, like fire, expansible over all space, without lessening their density in any point, and like the air in which we breathe, move, and have our physical being, incapable of confinement or exclusive appropriation. Inventions then cannot, in nature, be a subject of property. Society may give an exclusive right to the profits arising from them, as an encouragement to men to pursue ideas which may produce utility, but this may or may not be done, according to the will and convenience of the society, without claim or complaint from anybody. -- http://press-pubs.uchicago.edu/founders/documents/a1_8_8s12....
1. That Braun thing was produced decades ago, and they probably haven't sold a single one in the last decade or so.
2. A podcast app on an iPad is in no way competing with an ancient cassette player.
3. They're in different categories. It's like if I "copy" a Mercedes logo for a window. Though I'm not particularly in favor of the Apple/Samsung ruling, copying an element for a competing product (a tablet) is clearly different from copying one from a product that's a whole different beast and is no longer for sale.
There is a world of difference between being influenced by something and doing an exact copy. One is legal, accepted as beneficial to society and common place within the design community. The other isn't.
You are either woefully naive or being disingenuous to assume that the two are the same.
It could matter. Companies that have highly valuable patents can create what are called "patent portfolios": they take some patents, layer them with other patents, and use the legal trick of "continuation" to effectively get patent coverage for the portfolio as a whole by constantly developing newer, tightly-coupled patents.
Braun, if they wished, could have done so. Of course, I'm sure in that case, Apple would have probably licensed the patents or worked around them.
Isn't cutting off at some confidence level on the 'do not do this' list for A/B testing? As I understood it, you test until you reach a predefined number of conversions, and then you check your significance to see whether the result you obtained is valid. Not until you 'hit confidence'.
That's a blog post that I should get around to writing a rebuttal to some day, because it is widely quoted and off base.
In theoretical frequentist math world, it is correct. If you peek repeatedly, eventually you'll hit confidence even when there is no difference. Back in the real world, it is perfectly acceptable to use a strategy like, "We'll set a really high confidence cut-off (e.g. 99.5%) until we get to a couple of thousand successes, and then we'll drop our standards substantially (e.g. a 95% cut-off). If we are forced to stop for business reasons, we'll choose whatever happens to be ahead at the moment."
And yes, I can use Bayesian statistics to demonstrate that following the strategy that I describe creates acceptably low probabilities of making somewhat wrong business decisions, while allowing you to make good business decisions more quickly. And in practice people can follow it without needing a strong statistical background. (If I did enough work I could come up with a sophisticated optimal curve to use in making decisions. But I have not done that work, and in practice explaining it would be more work than it is worth.)
Why is this? Two reasons.
The first is that you only really get "independent peeks" at different orders of magnitude of data. Thus if you wait until you're past a small amount of data, you don't get a strong "repeated looks" effect.
Secondly, coming to the wrong decision only matters to a business when the chosen option is substantially worse. If you follow a rule like the one I gave, your odds of accidentally making up your mind the wrong way when there is a business-significant difference are surprisingly low. For instance, if you would detect a 2% difference as significant and there is a real underlying difference of 1%, the odds that you're making the right decision by deciding right now at a 95% confidence level are 99.2%. And if the real difference is a 0.5% win, your odds of making the right decision right now are 91.5%. (This despite the fact that you'd expect to need 16x as much data to even have a good chance of detecting a 0.5% win!)
Thus the decisions that you're making are usually correct. And on the occasions where you make the wrong choice, the mistake is usually not materially worse.
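For the curious, here is a minimal simulation sketch of that strategy (the 5% baseline conversion rate, the 10% relative lift, the peek interval, and the function names are all illustrative assumptions, not numbers from a real test):

```python
import math
import random

def z_confidence(s_a, n_a, s_b, n_b):
    """Two-sided confidence (0..1) that the two conversion rates differ, via a z-test."""
    p_a, p_b = s_a / n_a, s_b / n_b
    p = (s_a + s_b) / (n_a + n_b)
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    if se == 0:
        return 0.0
    z = abs(p_a - p_b) / se
    return math.erf(z / math.sqrt(2))

def run_test(rate_a, rate_b, peek_every=500, max_n=100_000):
    """Peek as you go: demand 99.5% confidence until ~2000 total successes, 95% after."""
    s_a = s_b = 0
    for n in range(1, max_n + 1):
        s_a += random.random() < rate_a
        s_b += random.random() < rate_b
        if n % peek_every == 0 and min(s_a, s_b) > 0:
            threshold = 0.995 if (s_a + s_b) < 2000 else 0.95
            if z_confidence(s_a, n, s_b, n) >= threshold:
                return "A" if s_a > s_b else "B"
    return "A" if s_a > s_b else "B"   # forced stop: whatever is ahead wins

# e.g. B is a real ~10% relative improvement on a 5% baseline conversion rate
wins = sum(run_test(0.050, 0.055) == "B" for _ in range(100))
print(f"picked the better variant in {wins}/100 runs")
```

The point, as above, is that a rule like this rarely crowns the worse variant when the difference is big enough to matter to the business.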
Looks to me as if you're on the road to building a company the hard way, but if you persist I think you just might get there. Taking a few wrong turns doesn't need to be the end and you seem to have learned a number of valuable lessons.
IBM is fantastic at using their research for PR, but it usually takes a very long time until these breakthrough announcements show up in products. The challenges on the road to competing with established tech for a completely new technology like this are formidable.
An announcement like this is great, but it is no reason to get overly excited just yet. Many other technologies have been proposed and have either disappeared or found employment in niches (GaAs for instance); for now, silicon still holds a formidable edge when it comes to the most important measure of all: economy.
This particular breakthrough probably wouldn't start appearing in chips for at least 8 or 10 years, though. It's fine to get excited! Just don't think we're getting nanotubes in our phones next year. Changing materials is a huuuge step.
Even then it will have to make economic sense to switch. After all, the equation is not "x > y" but "$n spent on tech x buys more computing power than $n spent on tech y".
Storing data and using data are two completely different things. You can store all of FB and lots of other companies in a fairly small volume these days. But as soon as you want to access that data to process or update it the game changes rapidly. Suddenly that one cabinet explodes into a datacenter full of cabinets, or even several data centers.
Storage is a solved problem for just about any amount that an ordinary company might need. Getting that data delivered to a CPU at speeds that are still usable in a practical sense, if you want to say something about all of that data, is a completely unrelated problem, one which changes the amount of technology and funds required from 'easy' to 'extremely hard' and beyond.
"Storing data and using data are two completely different things."
That is so true. When people ask "How hard can it really be to write a search engine these days?" I have been known to ask them to speculate on how they might go about it, and then point out the challenges of knowing what data you have vs. what data is asked for. Search is particularly interesting because the more time you spend, the better your answer can be, and it's always challenging to 'draw the line' between fast and relevant. But that is also what makes it so fun :-)
The correct term is internal resistance, and you can calculate it easily using Ohm's law. The internal resistance is easiest to think of as a series resistor in line with one of the output terminals.
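A worked example with made-up numbers (the voltages and load current are purely illustrative): measure the output with no load, then under a known load, and divide the voltage drop by the load current.

```python
# Illustrative numbers only: the drop from open-circuit to loaded voltage,
# divided by the load current, is the internal (series) resistance.
v_open = 5.20    # volts, output with no load
v_loaded = 4.95  # volts, output while drawing i_load
i_load = 1.00    # amps drawn by the load

r_internal = (v_open - v_loaded) / i_load  # Ohm's law on the voltage drop
print(f"internal resistance ≈ {r_internal:.2f} ohm")  # ≈ 0.25 ohm
```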
No, but it is inefficient. A larger charger will have a larger 'dead load', the internal consumption of the charger. So if you plug a very small consumer into such a charger then, percentage-wise, you're losing a lot of energy to heat.
Author here - it's a bit more complicated than that. For a linear power supply (old-fashioned wall wart), all the unused power is converted to heat, so you'd waste a lot of energy. But for switching power supplies (such as USB chargers), theoretically only the power that is needed is used, so the efficiency shouldn't depend much on the load. In practice, larger chargers might have better overall efficiency since they can implement better circuits (for space and price reasons). But larger chargers might not optimize as much for low loads. So it's hard to say offhand whether a large charger or small charger would be more efficient for a smaller load.
I did a quick test to see what really happens, plugging a Samsung phone into an iPhone charger and an iPad charger. In both cases, the charger used 3.0 watts of wall power. (The phone was turned off and charging, since if it is turned on the load fluctuates a whole lot as the phone does random things.) So my conclusion is that the size of the charger doesn't affect efficiency.
For the general case, all other parameters being equal (supply mode, quality and so on): bigger charger -> larger dead load.
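To make the 'percentage-wise' point concrete, here is a toy loss model (the dead-load and proportional-loss figures are invented for illustration, not measurements of any real charger):

```python
# Toy model: charger losses = a fixed "dead load" plus a share proportional to output.
def efficiency(p_out_w, dead_load_w, proportional_loss=0.10):
    p_in_w = p_out_w + dead_load_w + proportional_loss * p_out_w
    return p_out_w / p_in_w

# Same 0.5 W consumer on two hypothetical chargers with different dead loads.
print(f"small charger: {efficiency(0.5, dead_load_w=0.1):.0%}")  # ~77%
print(f"big charger:   {efficiency(0.5, dead_load_w=0.3):.0%}")  # ~59%
```

At a 3 W load the same fixed dead loads barely move the percentage, which is where the disagreement further down comes in.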
An iPad charger is not a 'large' charger, it's a fairly small step up from an iPhone charger. And since you are reporting 3.0 watts of 'wall power' but your multiplication of scope-measured values does not correct for power factor, you are likely off by quite a bit on both measurements.
GP mentioned an HP TouchPad charger to charge a phone. I don't have an HP TouchPad charger here, but the specs are quite terrible [1]; you'd have to measure with that specific charger to answer the specific question, or do a comparison of a large range of chargers with accurate measurement methodology in order to really answer the general question.
As it is, your conclusion contradicts practical engineering and I'm afraid it will not hold up in a better test, which would be to try a number of switched-mode supplies of various sizes and designs with various loads. Plugging in one device and doing a hasty (wrong, ignoring phase shift) measurement does not warrant your conclusion.
To measure efficiency you're going to have to take the power factor [2] into account, which can be quite hard to do, and theoretical efficiency doesn't matter for a practical test (you're measuring, not theorizing).
The waveforms that switched-mode chargers [3] output, and consequently the kind of load they represent to the grid, are so irregular that most non-calorimetric, power-factor-corrected measurements will give values that are not accurate. The noise that is present on the output wires will be to some extent visible on the input side.
A normal watt meter will work best with transformer-based supplies or resistive loads; accuracy for small switched-mode loads will be anywhere from 'so so' to 'terrible' depending on the make and model of power meter. Good brands (for instance Fluke) do most calculations right and will be able to deal with CFLs and other phase-shifted loads; bad brands (I won't name them, but they're killing it in the domestic watt meter department) will give wildly inaccurate results.
But even a quality meter like a Fluke will still have trouble with this kind of spiky load, especially if it is small.
It would probably be a good idea to (properly) describe your test rig along with the results. It says:
"I measured the AC input voltage and current with an oscilloscope. The oscilloscope's math functions multiplied the voltage and current at each instant to compute the instantaneous power, and then computed the average power over time. For safety and to avoid vaporizing the oscilloscope I used an isolation transformer. My measurements are fairly close to Apple's[15], which is reassuring. "
But you can't really do it that way and get accurate results; instantaneous power draw using a switching supply changes several hundred thousand times per second and is likely phase-shifted, so a simple multiplication is not going to work.
Accurately measuring (low) power draw from switched mode consumers is a really tricky problem, it's easy enough to read some numbers from a display but I can assure you that this is not a simple problem to work on if you want to get meaningful results.
Thank you for your detailed comment. I went to a great deal of effort in my article to measure the power consumption accurately, accounting for the power factor, but I left out most of the details since most people don't care. I'm not multiplying the average current and average voltage to compute watts, because that obviously would not work due to the power factor. Instead, I'm multiplying the instantaneous voltage and current 50,000 times a second and summing this up, which gives the actual power, corrected for the power factor. (While the internal current changes tens of thousands of times a second, the line current changes slowly due to the input filtering, so this is plenty of resolution. I'm using a Tektronix TDS5104B 1 GHz oscilloscope, so I have a pretty accurate view of the input voltage and current.)
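For anyone who wants to reproduce that arithmetic, here is a rough sketch on synthetic data (the waveform below is invented to mimic a diode-bridge input stage; it is not the scope capture): real power is the time average of v(t)*i(t), apparent power is Vrms times Irms, and their ratio is the power factor.

```python
import numpy as np

# Synthetic waveforms (not the scope capture): mains voltage plus a narrow,
# peaky current draw like a diode-bridge input stage produces.
fs = 50_000                       # samples per second
t = np.arange(0, 0.2, 1 / fs)     # 0.2 s of 60 Hz mains
v = 170 * np.sin(2 * np.pi * 60 * t)                   # ~120 V RMS line voltage
i = np.where(np.abs(v) > 150, 0.05 * np.sign(v), 0.0)  # current only near the peaks

real_power = np.mean(v * i)                                           # average of instantaneous v*i, in watts
apparent_power = np.sqrt(np.mean(v ** 2)) * np.sqrt(np.mean(i ** 2))  # Vrms * Irms
power_factor = real_power / apparent_power

print(f"real {real_power:.2f} W, apparent {apparent_power:.2f} VA, PF {power_factor:.2f}")
```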
The main sources of error in my measurements are the cheap isolation transformer (which causes a bit of line voltage distortion under load), the current sense resistor, the tolerances of the voltage divider resistors, and noise in the measurements. So I wouldn't claim these measurements to be better than 10%.
You can take a look at one of the oscilloscope power graphs at https://picasaweb.google.com/lh/photo/pbrO8BQz38kDo9xU5ejffd...
Yellow is the input voltage, and turquoise is the input current. The non-sinusoidal current shows the non-unity power factor. Note that there's no phase shift, but instead the current flow happens only at the voltage peaks (which is a consequence of the input diode bridge, not of the switching power supply per se.) At the bottom of the image is the instantaneous power, computed from the instantaneous voltage and current.
For the iPad vs iPhone charger measurement above, I didn't have the oscilloscope handy, so I used a Kill-A-Watt, which does in fact take the power factor into account.
Going back to your statement that "bigger charger -> larger dead load". By "dead load", do you mean the power consumption under no load, which I call "vampire power" in the article? This varies widely between chargers, having more to do with the design than the size of the charger. But in any case, this wasted power is pretty much irrelevant under load. For instance, 100 mW is a typical vampire power usage. So if a hypothetical larger charger has twice that wasted power, at a 3 watt load, this is only a 3% difference.
The peaks in your scope image clearly show a phase shift, which is kind of logical if you take into account the fact that the main component in a switched mode supply is a coil.
If you look a little more carefully at your scope trace you'll see the coil's reactance at work in the lower trace: the peak is where the FET in the supply is closed and drawing real power, and the purple trace past the peak and beyond the 0 crossing is inverted and drops slowly back to 0 before the next peak hits. If you use the controls on your scope to zoom in on the bottom trace by increasing the vertical sensitivity you'll get a much better idea of what I'm getting at here. You'll see '0' voltage and yet current is still flowing.
You can't correct for power factor by simply increasing the resolution and averaging. The base frequency of your oscilloscope does not enter into the discussion here, it could be 500 Hz for all I care and that would be enough.
Furthermore, the power factor of a switched mode supply changes as a function of the load applied and gets (much) worse if that load is also reactive or capacitive. Under some circumstances it is possible to draw negative power from the wall socket if you do a naive measurement, or you'll see wall socket power decrease as output current increases.
All this is possible because voltage and current are more or less out of phase with each other.
The Kill-A-Watt will work well with some reactive loads (such as CFLs) as long as they're of the ballast type.
A switched mode supply presents challenges that can't be met at the cost constraints of a consumer device like that.
Vampire power is a new term for me, I'm not familiar with it. Dead load (or simply the losses) is anything that does not end up in your consumer (the live load); I'm not sure if that is an accurate translation of the terms. It normally goes up as a function of the amount of power consumed; the baseline (consumption without any load at all) is probably your 'vampire power'.
Total efficiency is 100 * ((output power)/(input power)) and will in practice be anywhere from 60% to 98% depending on how well load and supply are matched, and can vary wildly from one power supply to another due to component variations.
Finally, classical power factor correction applies to sinusoidal waveforms; as you've already discovered, the waveforms of switched-mode supplies on both the input and the output side are anything but sinusoidal, further complicating an already hairy problem.
The article isn't really arguing that one hit wonders are something new, or that people are entitled to success. It's arguing that there is a new dynamic in play. Due to the fact that she exploded so quickly as a meme from basically obscurity, no one was invested in her as an artist. The argument is that as a result of new media and technology, it's possible for a song to skyrocket without anyone following the artist themselves.
If you read on, the author points out that the album she put out could be a contender for best pop album of the year, but not enough people care about her as an artist to check it out.
I find that pretty interesting. It's almost like a new take on the 15 minutes of fame idea, but now your 15 minutes, for whatever you did, make you famous to EVERYONE, yet that has no effect on your sticking around in people's minds afterward.
There is no new dynamic in play, this has been happening to just about every one hit wonder since recording was invented. Some stay, most go.
The idea here is that the follow-through has to come from everybody, artist, label and so on, to take an opportunity like this and turn it into a marketable brand.
Which I'd never heard of before running into it on Digg; it's musically mediocre, has a fantastic video, but ultimately failed to convince me to go out and buy an album.
The internet can't be given credit for the success and it can't be given the blame for subsequent failure, it is just a tool and how you use that tool is up to you.
Caring about an artist means going to their concerts; becoming a successful artist means you're going to have to slog through the 'building a loyal fan base' trough of sorrow somehow, and as far as I know, outside of throwing huge marketing budgets at fan acquisition (talent optional), there are no short-cuts there.
By the way, a similar issue faces websites: you may be able to get that front page on HN with your new offering, but that won't make much difference in the long run; you'll still have to have the staying power and the determination to see it through for a long period without being sure whether it is going to work out. And if you do, the chances of it working out go up.
Overnight success is not a right, and if you've been handed a free head-start you can't blame your tools if it subsequently does not pan out the way you intended.
> There is no new dynamic in play, this has been happening to just about every one hit wonder since recording was invented. Some stay, most go.
The last 4 paragraphs of the article go on to describe exactly what the new dynamic is. In addition to the normal bombardment of hearing the song all the time, it is the additional flood of memes that the internet and social media bring about. As the article says:
> In the past, the worst thing that could happen to the Song of the Summer was it being played to death. But in the digital age, the pitfalls are boundless. As “Call Me Maybe” is increasingly meme-ified, it runs the risk of becoming completely mummified.
and
> Simply put: when you think of “Somebody That I Used to Know,” you’re less likely to think of the man, Wally de Backer, but the meme.
Both of you are arguing about something that isn't even the main point of the article.
Yes, but it is the flood of memes that allowed this person to get most of the exposure to begin with. The point I'm trying to - unsuccessfully it seems - make is that exposure is not enough. If it were then one hit wonders would not be one hit wonders.
You can't ride on a wave and then complain that others are trying to ride that wave too, you especially can't blame the wave.
More choice means fewer monolithic megahits. It's like television. In the days when there were only three networks, a hit show might have a Nielsen rating of 60, meaning that 60 percent of all the people watching television in the United States were watching the show. Millions of viewers. Millions. Nowadays, the top-rated show might have a rating of 12.
The future of music is very likely to be more one-hit wonders, with a few artists who attain long-lasting (but modest) success. There's just too much choice. This is largely a good thing, IMO, especially for the artists as a group (obviously it's not as good for the few who would have been anointed as the kings and queens).
Indeed, there are plenty of other bands/stars that have turned their internet fame into lasting success. For example, OK Go is still doing just fine, they found a niche and became really popular.
I agree with everything you said, but I don't think the article is blaming the internet for anything, or even arguing that it was anyone's fault but the artist's (and her management's). It's mostly just the author pointing out how remarkable it is that the artist did have such a huge head start and was only able to capitalize on a small fraction of it.
Well, maybe if MTV hadn't ditched music programming for crappy reality shows, they'd have had a good way of investing in her as an artist if they felt that was warranted.
"If you read on, the author points out the album she put out could be a contender for best pop album of the year, but not enough people care about her as an artist enough to check it out."
So essentially, she has a few fans but did not make a very good pop album, let alone best of the year.
Well, I guess that depends on the definition of "best of the year". In the article, the author is arguing it's best as in quality of the music, not best as in number of units sold...
Given that she's aimed straight at the teenage demographic, it looks like all this is an attempt at doing a female re-run of Justin Bieber.
The fact that she signed on with his manager and performs at his concerts, attempts to look 15 when she's 26, and so on gives a clear indication of what it is they're trying to achieve here, and I'm frankly not surprised that it didn't work.
It doesn't have much to do with her being 'best of the year', and you'd have to discount whatever success she does have in the domestic market by the Canadian laws that give air-time preference to Canadian-born artists.
From a source such as HN, on something targeted like this, I'd expect it to be much lower (<50%); the rate observed may be due to slow load times, landing page design, or some other factor. There definitely seems to be some disconnect between expectations and what was found on the other side of the click.
Possibly someone who did a comparable launch on HN could give exact figures, if they're willing to part with them; then you could compare notes to see what works and what does not.
These can be tricky things to dig up; sometimes eye tracking is a solution to figure out where the disconnect is. There was an HN start-up working in this field (GazeHawk), but they've been acquired by Facebook.