The XOR trick was sometimes useful in the past, on weird CPUs that had non-equivalent registers and also lacked register exchange instructions (Intel/AMD CPUs have non-equivalent registers, but they have a register exchange instruction, so they do not need this trick).
The XOR trick is not useful on modern CPUs for swapping memory blocks, because on modern CPUs the slowest operations are the memory accesses and the XOR trick needs too many memory accesses.
For swapping memory, the fastest way needs 4 memory accesses: load X, load Y, store X where Y was, store Y where X was. Each "load" and "store" in this sequence may consist of multiple load or store instructions, if multiple registers are used as the intermediate buffer. Ideally, an intermediate register buffer matching the cache line size should be used, with accesses aligned to cache lines.
Hopefully, std::rotate is written in such a way that it is compiled into such a sequence of machine instructions.
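For reference, the XOR trick itself is a three-step sequence; a minimal sketch in Python on plain integers, just to show the algebra (on a real CPU each step would be one XOR instruction):

```python
# Hypothetical sketch of the XOR swap; (a ^ b) ^ b == a is what makes it work.
def xor_swap(a: int, b: int) -> tuple[int, int]:
    a ^= b   # a = a ^ b
    b ^= a   # b = b ^ (a ^ b) = original a
    a ^= b   # a = (a ^ b) ^ original a = original b
    return a, b

print(xor_swap(3, 5))  # → (5, 3)
```

Note that it saves a register, not memory accesses, which is exactly the point made above.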
It is a chemoautotrophic system, but it is not independent of the sun and of photosynthetic life.
This is a hugely erroneous claim that is encountered much too frequently in popular publications.
Both in this cave and in hydrothermal vents, most autotrophic bacteria use free oxygen to oxidize hydrogen sulfide, thus producing the energy needed for autotrophy.
The free oxygen comes from the phototrophic algae and plants (located elsewhere), i.e. from solar energy.
On Earth, there are only 2 kinds of autotrophic bacteria and archaea that may be independent of solar energy: the acetogenic bacteria and archaea and the methanogenic archaea. Both kinds obtain energy from free hydrogen and carbon dioxide, the former producing acetic acid and the latter producing methane.
These 2 kinds of bacteria and archaea need free hydrogen, and most of them are killed by free oxygen. Sometimes the free hydrogen is produced by the fermentation of organic substances, as in our intestines, so it too ultimately comes from solar energy. But free hydrogen is also produced by the oxidation of volcanic rocks by water, in which case its origin is independent of solar energy and depends only on the internal heat of the Earth: volcanic rocks are in chemical equilibrium at the high temperatures deep inside the Earth's mantle, but they are no longer in chemical equilibrium after reaching the cold surface of the Earth.
Thus deep underground, or in certain places on the ocean floor where free dihydrogen is abundant and there is no free dioxygen, there are communities of acetogenic and methanogenic bacteria and archaea that are independent of solar energy. But this is not the case for this cave or for many of the hydrothermal vents, where both hydrogen sulfide and free dioxygen are abundant, so aerobic bacteria are dominant.
Anywhere there is air, or water with dissolved dioxygen, living beings use the most efficient energy source, i.e. the oxidation of either organic or inorganic substances with free dioxygen, so they depend on solar energy even when there is no light in that place.
By this logic, if the CEO of a big company with a revenue of $100 billion per year steals $100 million from the company every year for gifts to some of his/her family members, that does not matter, because it is just 0.1% of the revenue, and perhaps those family members would use a part of the money to buy products of the same company.
Indeed, for any non-US citizen it is very hard to understand why the USA has always paid significant yearly aid to Israel.
For anyone who has worked in Israel or who has just visited it, there is no doubt that Israel is one of the richest countries and it has more than enough of its own resources to ensure that it maintains its military superiority against any neighbors.
Israel certainly does not need permanent aid for that, though of course they would be fools to refuse the many billions of dollars they receive as a gift from the USA.
Perhaps this aid might have been justified in the initial years after WWII, but the initial reason cannot have remained valid for a long time now.
Now the USA claims that it may not have obtained benefits commensurate with its expenses in its relations with many other countries, yet it is much less clear what benefits the USA obtained in return for paying this aid to Israel every year.
A part of the money paid to Israel is likely to return to some US companies that are friendly to the US government, so this is also an indirect method of giving gifts to those companies. But in other countries the USA has been able to obtain such profitable contracts for well-connected US companies in a much cheaper way, just by bribing or blackmailing the local governments, instead of paying for the contracts in full with US money.
While analog video did not have the concept of pixels, it specified the line frequency, the number of visible lines, the duration of the visible part of a line and the aspect ratio of the image, 4:3. (In Europe there are 576 visible lines, composed of 574 full lines and 2 half lines; some people count them as 575 lines, but the 2 half lines are located in 2 different lines of the image, not on the same line, so there are 576 distinct lines over the height of the image.)
From these 4 values one can compute the video sampling frequency that corresponds to square pixels. For the European TV standard, an image with square pixels would have been of 576 x 768 pixels, obtained at a video sampling frequency close to 15 MHz.
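The arithmetic can be sketched from the standard 625-line timing values (64 µs total line period, of which roughly 52 µs are the visible part):

```python
# Derive the square-pixel sampling frequency for the European 625-line standard.
visible_lines = 576
aspect_w, aspect_h = 4, 3          # 4:3 image aspect ratio
active_line_us = 52.0              # visible part of a line, in microseconds

pixels_per_line = visible_lines * aspect_w / aspect_h   # square pixels => 768
fs_mhz = pixels_per_line / active_line_us               # samples per us == MHz

print(int(pixels_per_line), round(fs_mhz, 2))  # → 768 14.77
```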
However, in order to allow more TV channels in the available bands, the maximum video frequency was reduced below what square pixels would require (which would have been close to 7.5 MHz in Europe), and then reduced again after the transition to PAL/SECAM, to below 5.5 MHz, typically about 5 MHz. (Before the transition to color, Eastern Europe had used sharper black&white signals, with a maximum video frequency below 6.5 MHz, typically around 6 MHz. The 5.5/6.5 MHz limits are set by the location of the audio carrier. France had used an even higher-definition B&W system, but that had completely different parameters from the subsequent SECAM, being an 819-line system, while the East-European system differed only in the higher video bandwidth.)
So sampling at a frequency high enough for square pixels would have been pointless, as the TV signal had already been reduced to a lower resolution by the earlier analog processing. Thus the 13.5 MHz sampling frequency chosen for digital TV, corresponding to pixels wider than they are tall, was still high enough to preserve the information contained in the sampled signal.
No, the reason why 13.5 MHz was chosen is that it was desirable to have the same sampling rate for both PAL and NTSC, and 13.5 MHz happens to be an integer multiple of both line frequencies. You can read the full history in this article:
That is only one of the conditions that had to be satisfied by the sampling rate, and there are infinitely many common multiples which satisfy it, so this condition alone is insufficient to determine the choice of the sampling frequency.
Another condition that had to be satisfied by the sampling frequency was to be high enough in comparison with the maximum bandwidth of the video signal, but not much higher than necessary.
Among the common multiples of the line frequencies, 13.5 MHz was chosen because it also satisfied the second condition, which is exactly the condition that I have discussed: choosing 13.5 MHz was possible only because the analog video bandwidth had been standardized to values smaller than needed for square pixels; otherwise, a common multiple of the line frequencies greater than 15 MHz (namely 20.25 MHz) would have been required for the sampling frequency.
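The common-multiple property is easy to check numerically, taking the two line frequencies as 15 625 Hz (625/50 systems) and 4.5 MHz/286 ≈ 15 734.27 Hz (525/59.94 systems):

```python
from fractions import Fraction

f_line_pal = Fraction(15_625)            # 625/50 line frequency, Hz
f_line_ntsc = Fraction(4_500_000, 286)   # 525/59.94 line frequency, Hz

fs = Fraction(13_500_000)
print(fs / f_line_pal)    # → 864  (integer samples per line)
print(fs / f_line_ntsc)   # → 858

# 20.25 MHz, mentioned above, is also an exact common multiple of both:
print(Fraction(20_250_000) / f_line_pal,
      Fraction(20_250_000) / f_line_ntsc)  # → 1296 1287
```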
The term "transistor" is much too vague, so the question "who invented the transistor" is ill posed.
There are different kinds of transistors, which use different principles for controlling electrical currents, and which have been invented by different people at different times.
Lilienfeld invented 2 kinds of transistors: MESFETs (metal-semiconductor field-effect transistors) and depletion-mode MOSFETs (metal-insulator-semiconductor field-effect transistors) (US patents 1,745,175 and 1,900,018). Of these 2 kinds of transistors, the second is rarely used today, while the first is used mostly in special applications as discrete power devices.
The parent article is very wrong in claiming that the transistors invented by Lilienfeld are those most frequently used today. Those are the enhancement-mode MOSFETs, which have a very different structure, and far fewer semiconductor materials are suitable for them than for the simpler transistors invented by Lilienfeld.
The enhancement-mode MOSFET was invented in 1960 by Martin M. Atalla and Dawon Kahng of Bell Laboratories (US patents 3,206,670 and 3,102,230).
Bardeen and Brattain discovered the point-contact transistor, which was used only for a few years before becoming completely obsolete, but which was the first kind of transistor offered as a commercial product.
William Shockley invented 2 kinds of transistors which have been very important and which remain the best in some special applications: the bipolar junction transistor (BJT) and the junction field-effect transistor (JFET) (US patents 2,569,347 and 2,744,970). William Shockley also deserves huge credit for developing a theory of the physics of semiconductor materials which allowed everybody to design transistors and many other kinds of semiconductor devices.
Therefore the 3 Nobel winners together invented 3 kinds of transistors, and they are rightly called the inventors of some of the kinds of transistors, i.e. point-contact, BJT and JFET.
Lilienfeld was the first to conceive some possible structures for a triode that uses a semiconductor instead of vacuum or gas. This was a great advance, but it still followed logically from the prior knowledge that one can make diodes using either vacuum, gas or a semiconductor, and one can make triodes using either vacuum or gas. Thus he attempted to fill in the missing combination.
The Bell Laboratories are truly guilty of trying to hide the fact that their post-war research on making a semiconductor triode had indeed started with the purpose of making alternatives to the devices invented by Lilienfeld, of which they were well aware. However, they eventually invented 4 different kinds of transistors, all of which had structures and principles of operation very different from the 2 Lilienfeld transistors, even if all 6 kinds can be considered as variants of semiconductor triodes.
The first 3 kinds of transistors to be invented, the 2 Lilienfeld transistors and the point-contact transistor of Bardeen and Brattain, can be made using pieces of a homogeneous semiconductor material. This is why they were discovered first.
The next 3 kinds of transistors, invented by Shockley and by Atalla with Kahng, contain P-N junctions, so they were invented only after William Shockley had developed the theory of the P-N junctions.
It should be noted that the number e = 2.71828... has no importance in practice; its value merely satisfies the curiosity to know it, and there is no need to use it in any application.
The transcendental number whose value matters (being the second most important transcendental number after 2*pi = 6.283 ...) is ln 2 = 0.693 ... (and the value of its inverse log2(e), in order to avoid divisions).
The same goes for pi: there is no need to ever use it in computer applications. Using only 2*pi everywhere is much simpler, and 2*pi is the most important transcendental number, not pi.
This comment is quite strange to me. e is the base of the natural logarithm, so ln 2 is actually log_e(2). If we take the natural log of 2, we are literally using its value as the base of a logarithm.
Does a number not matter "in practice" even if it's used to compute a more commonly used constant? Very odd framing.
The number "e" itself is never needed in any application.
It is not used for computing the value of ln(2) or of log2(e), which are computed directly as limits of some convergent series.
As I have said, there is no reason whatsoever for knowing the value of e.
Moreover, it is almost never a good choice to use the exponential function or the hyperbolic logarithm function (a.k.a. the natural logarithm, though it does not really deserve the name "natural").
For any numeric computation, it is preferable to use the exponential 2^x and the binary logarithm everywhere. With this choice, the constant ln 2 or its inverse appears in formulae that compute derivatives or integrals.
People are brainwashed in school into using the exponential e^x and the hyperbolic logarithm, because this choice was more convenient for symbolic computations done with pen on paper, like in the 19th century.
In reality, choosing to have the proportionality factor in the derivative formula be "1" instead of "ln 2" is a bad choice. The reason is that removing the constant from the derivative formula does not make it disappear; it moves it into the evaluation of the function, and in any application many more evaluations of the functions must be done than computations of derivative or integral formulae.
The only case where using e^x may bring simplifications is in symbolic computations with complex exponentials and complex logarithms, which may be needed in the development of mathematical models for some linear systems, i.e. those that can be described by systems of linear ordinary differential equations or linear partial differential equations. Even then, after the symbolic computation produces a mathematical model suitable for numeric computation, it is more efficient to convert all exponential or logarithmic functions to use only 2^x and binary logarithms.
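The conversion step described here is a one-time exponent rescaling; a small sketch (the constant k is made up for illustration):

```python
import math

# e**(k*x) can always be rewritten as 2**(a*x) with a = k / ln 2,
# computed once, outside the inner loop ("at compile time").
k = 0.37                       # hypothetical model constant
a = k / math.log(2)

for x in (-3.0, 0.0, 1.0, 2.5):
    assert math.isclose(math.exp(k * x), 2.0 ** (a * x))
print("base-2 rewrite agrees")
```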
From your other responses in this thread, it looks like you do concede that e is useful in symbolic computation, and others use the phraseology "how the function is implemented", which is quite a silly thing to say in a classical math context, but not in a computational context.
I didn't understand immediately that you were talking about using values related to e in a computational context. But your comment about "brainwashing" seems a bit off. Are you saying that programmers bring e and ln with them into code when more effective constants exist for the same end? That's probably true. But brainwashing is far too strong, since things need to be taught in the correct order in math in order for each next topic to make sense. e really only comes in when learning derivative rules where it's explained "e is a number where when used as the base in an exponential function, that function's derivative is itself." Math class makes no pretense that you ought to use any of it to inform how you write code, so the brainwashing accusation seems off to me.
> It should be noted that the number e = 2.71828 ... does not have any importance in practice, its value just satisfies the curiosity to know it, but there is no need to use it in any application.
In calculations like compound financial interest, radioactive decay and population growth (and many others), e is either applied directly or derived implicitly.
> ... 2*pi is the most important transcendental number, not pi.
When using the exponential e^x or the natural logarithm, the number "e" is never used. Only ln 2 or its inverse are used inside the function evaluations, for argument range reduction.
In radioactive decay and population growth it is much simpler conceptually to use 2^x, not e^x, which is why this is done frequently even by people who are not aware that the computational cost of 2^x is lower and its accuracy is greater.
In compound financial interest using 2^x would also be much more natural than the use of e^x, but in financial applications tradition is usually more important than any actual technical arguments.
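For decay the point is concrete: the half-life is the directly measured quantity, and with base 2 it enters the formula unchanged (function names are made up for illustration):

```python
import math

def remaining_base2(n0: float, t: float, half_life: float) -> float:
    # the measured half-life appears directly in the exponent
    return n0 * 2.0 ** (-t / half_life)

def remaining_base_e(n0: float, t: float, half_life: float) -> float:
    # the e-based form needs the extra ln 2 conversion step
    return n0 * math.exp(-t * math.log(2) / half_life)

print(remaining_base2(1000.0, 16.0, 8.0))  # two half-lives → 250.0
assert math.isclose(remaining_base2(1000.0, 5.0, 8.0),
                    remaining_base_e(1000.0, 5.0, 8.0))
```

Both forms accept any real t, not just multiples of the half-life; only the constant in the exponent differs.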
> When using the exponential e^x or the natural logarithm, the number "e" is never used. Only ln 2 or its inverse are used inside the function evaluations, for argument range reduction.
That is only true in the special case of computing a half-life. In the general case, e^x is required. When computing a large number of cases and to avoid confusion, e^x is the only valid operator. This is particularly true in compound interest calculations, which would fall apart entirely without the presence of e^x and ln(x).
> In radioactive decay and population growth it is much simpler conceptually to use 2^x, not e^x
See above -- it's only valid if a specific, narrow question is being posed.
> In compound financial interest using 2^x would also be much more natural than the use of e^x
That is only true to answer a specific question: How much time to double a compounded value? For all other cases, e^x is a requirement.
If your position were correct, if 2^x were a suitable replacement, then Euler's number would never have been invented. But that is not reality.
No, you did not try to understand what I have written.
The use of ln 2 for argument range reduction has nothing to do with half lives. It is needed in any computation of e^x or ln x, because the numbers are represented as binary numbers in computers and the functions are evaluated with approximation formulae that are valid only for a small range of input arguments.
The argument range reduction can be avoided only if you know before evaluation that the argument is close enough to 0 for an exponential or to 1 for a logarithm, so that an approximation formula can be applied directly. For a general-purpose library function you cannot know this.
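A sketch of that reduction, assuming a truncated Taylor series as the small-argument approximation (real libm implementations are far more careful about rounding):

```python
import math

LN2 = math.log(2.0)

def exp_sketch(x: float) -> float:
    # range reduction: write x = k*ln2 + r with |r| <= ln2/2,
    # so e**x = 2**k * e**r; this is where (the inverse of) ln 2 is used
    k = round(x / LN2)
    r = x - k * LN2
    # small-argument series, valid only because |r| is small
    e_r = sum(r ** n / math.factorial(n) for n in range(12))
    return math.ldexp(e_r, k)   # exact multiplication by 2**k

for x in (-3.7, 0.0, 1.0, 10.5):
    assert math.isclose(exp_sketch(x), math.exp(x), rel_tol=1e-12)
print("matches math.exp")
```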
Also, the use of 2^x instead of e^x for radioactive decay, population growth or financial interest is not at all limited to the narrow cases of doublings or halvings. Those happen when x is an integer in 2^x, but 2^x accepts any real value as argument. There is no difference in the domain of definition between 2^x and e^x.
The only difference between using 2^x and e^x in those 3 applications is a different constant in the exponent, which has the easier-to-understand meaning of the doubling or halving time when using 2^x, and a less obvious meaning when using e^x. In fact, only doubling or halving times are directly measured for radioactive decay or population growth. When you want to use e^x, you must divide the measured values by ln 2, an extra step that brings no advantage whatsoever, because it must be implicitly reversed during every subsequent exponential evaluation, when the argument range reduction is computed.
> The use of ln 2 for argument range reduction has nothing to do with half lives.
That is a false statement.
> In fact, only doubling or halving times are directly measured for radioactive decay or population growth.
That is a false statement -- in population studies, as just one example, the logistic function (https://en.wikipedia.org/wiki/Logistic_function) tracks the effect of population growth over time as environmental limits take hold. This is a detailed model that forms a cornerstone of population environmental studies. To be valid, it absolutely requires the presence of e^x in one or another form.
> ... because the numbers are represented as binary numbers in computers and the functions are evaluated with approximation formulae that are valid only for a small range of input arguments.
That is a spectacularly false statement.
> There is no difference in the definition set between 2^x and e^x.
That is absolutely false, and trivially so.
> No, you did not try to understand what I have written.
On the contrary, I understood it perfectly. From a mathematical standpoint, 2^x cannot substitute for e^x, anywhere, ever. They're not interchangeable.
I hope no math students read this conversation and acquire a distorted idea of the very important role played by Euler's number in many applied mathematical fields.
It took me quite a bit to figure out what you're trying to say here.
The importance of e is that it's the natural base of exponents and logarithms, the one that makes an otherwise constant factor disappear. If you're using a different base b, you generally need to adjust by ln(b) or its inverse, neither of which requires computing or using e itself (instead requiring a function call that's using minimax-generated polynomial coefficients for approximation).
The importance of π or 2π is that the natural periodicity of trigonometric functions is 2π or π (for tan/cot). If you're using a different period, you consequently need to multiply or divide by 2π, which means you actually have to use the value of the constant, as opposed to calling a library function with the constant itself.
Nevertheless, I would say that despite the fact that you would directly use e only relatively rarely, it is still the more important constant.
Pi not multiplied by 2 has only one application, which is ancient. For most objects, it is easier to measure directly the diameter than the radius. Then you can compute the circumference by multiplying with Pi.
Except for this conversion from directly measured diameters, one rarely cares about half-cycles, but about full cycles.
The trigonometric functions with arguments measured in cycles are more accurate and faster to compute. The trigonometric functions with arguments measured in radians have simpler formulae for derivatives and primitives. The conversion factor between radians and cycles is 2Pi, which leads to its ubiquity.
While students are taught to use the trigonometric functions with arguments measured in radians, because they are more convenient for some symbolic computations, any angle that is directly measured is never measured in radians, but in fractions of a cycle. The same is true for any angle used by an output actuator. The measurement methods with the highest precision for any physical quantity eventually measure some phase angle in cycles. Even the evaluations of the trigonometric functions with angles measured in radians must use an internal conversion between radians and cycles, for argument range reduction.
So the use of the 2*Pi constant is unavoidable in almost any modern equipment or computer program, even if many of the uses are implicit and not obvious for whoever does not know the detailed implementations of the standard libraries and of the logic hardware.
If trigonometric functions with arguments measured in radians are used anywhere, then conversions between radians in cycles must exist, either explicit conversions or implicit conversions.
If only trigonometric functions with arguments measured in cycles are used, then some multiplications with 2Pi or its inverse appear where derivatives or primitives are computed.
In any application that uses trigonometric functions millions of multiplications with 2Pi may be done every second. In contrast, a multiplication by Pi could be needed only at most at the rate at which one could measure the diameters of some physical objects for which there would be a reason to want to know their circumference.
Because Pi is needed so much more rarely, it is simpler to just have a constant Pi_2 to be used in most cases, and for the rare case of computing a circumference from the diameter to use Pi_2*D/2.
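A cycles-based cosine can be sketched on top of the radian library function; the point is that the whole-turn reduction becomes an exact fractional-part operation (cos_cycles is a made-up name, similar in spirit to the cospi functions some math libraries provide):

```python
import math

TWO_PI = 2.0 * math.pi   # the "Pi_2" constant discussed above

def cos_cycles(x: float) -> float:
    x -= math.floor(x)           # exact: drops whole turns, no rounding error
    return math.cos(TWO_PI * x)  # radians enter only once, here

# a quarter turn is still a quarter turn after a million whole turns
assert math.isclose(cos_cycles(1_000_000.25), cos_cycles(0.25), abs_tol=1e-12)
assert math.isclose(cos_cycles(1.0 / 6.0), 0.5)   # 60 degrees
print("ok")
```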
> The trigonometric functions with arguments measured in cycles are more accurate and faster to compute.
Please expand on this. Surely if that were the case, numerical implementations would first convert a radian input to cycles before doing whatever polynomial/rational approximation they like, but I've never seen one like that.
> Because Pi is needed so much more rarely, it is simpler to just have a constant Pi_2 to be used in most cases and for the rare case of computing a circumference from the diameter to use Pi_2*D/2,
Well of course, that's why you have (in C) M_PI, M_PI_2, and so on (and in some dialects M_2PI).
> Surely if that were the case, numerical implementations would first convert a radian input to cycles before doing whatever polynomial/rational approximation they like, but I've never seen one like that.
Then you have not examined the complete implementation of the function.
The polynomial/rational approximation mentioned by you is valid only for a small range of the possible input arguments.
Because of this, the implementation of any exponential/logarithmic/trigonometric function starts by an argument range reduction, which produces a value inside the range of validity of the approximating expression, by exploiting some properties of the function that must be computed.
In the case of trigonometric functions, the argument must first be reduced to a value smaller than a cycle, which is equivalent to a conversion from radians to cycles and then back to radians. This reduction, and the rounding errors associated with it, are avoided when the function uses arguments already expressed in cycles, because the reduction is then done exactly by just taking the fractional part of the argument.
Then the symmetry properties of the specific trigonometric function are used to further reduce the range of the argument to one fourth or one eighth of a cycle. When the argument is expressed in cycles this is also an exact operation; otherwise it can introduce rounding errors, because adding or subtracting Pi or its submultiples cannot be done exactly.
We start off with a range reduction to [0, pi/4] (presumably this would be [0, 1/8] in cycles), and then the polynomial happens.
If cycles really were that much better, why isn't this implemented as starting with a conversion to cycles, then removal of the integer part, and then a division by 8, followed by whatever the appropriate polynomial/rational function is?
> adding or subtracting Pi or its submultiples cannot be done exactly.
I was also assuming that we've been talking about floating point this whole time.
>but there is no need to use it in any application.
Applications such as planes flying, sending data through wires, medical imaging (or any of a million different direct applications) do not count, I assume?
Your naivety about what makes the world function is not an argument for something being useless. The number appearing in one of the most important algorithms should give you a hint about how relevant it is https://en.wikipedia.org/wiki/Fast_Fourier_transform
I am sorry, but comments like this are caused by the naivety of not knowing how the function evaluations are actually implemented.
None of the applications mentioned by you need to use the exponential e^x or the natural logarithm, all can be done using the exponential 2^x and the binary logarithm. The use of the less efficient and less accurate functions remains widespread only because of bad habits learned in school, due to the huge inertia that affects the content of school textbooks.
The fast Fourier transform is written as if it used e^x, but that notation may have misled you: it uses only trigonometric functions, so it is irrelevant for discussing whether "e" or "ln 2" is more important, because neither of these 2 transcendental constants is used in the Fast Fourier Transform.
Moreover, the FFT is an example of the fact that it is better to use trigonometric functions with arguments measured in cycles, i.e. functions of 2*Pi*x, instead of the worse functions with arguments measured in radians, because with arguments expressed in cycles the FFT formulae become simpler, all the multiplicative constants explicitly or implicitly involved in the direct and inverse FFT computations being eliminated.
A function like cos(2*Pi*x) is simpler than cos(x), despite what the conventional notation implies, because the former does not contain any multiplication with 2*Pi, but the latter contains a multiplication with the inverse of 2*Pi, for argument range reduction.
I think that perhaps people are conflating the fourier transform (FT) with the fast fourier transform.
It's true that the FFT does not use either of the transcendental numbers e or ln(2), but that's because the FFT does not use transcendental numbers at all! (Roots of unity, sure, but those are algebraic)
> all the multiplicative constants explicitly or implicitly involved in the FFT direct and inverse computations being eliminated.
Doesn't that basically get you a Hadamard transform?
FFT can be done avoiding the use of any transcendental constants, but the conventional formulae for FFT use the transcendental 2Pi both explicitly and implicitly.
The FFT formulae when written using the function e^ix contain an explicit division by 2Pi which must be done either in the direct FFT or in the inverse FFT. It is more logical to put the constant in the direct transform, but despite this most implementations put the constant in the inverse transform, presumably because a few applications use only the direct transform, not also the inverse transform.
Some implementations divide by sqrt(2Pi) in both directions, to enable the use of the same function for both direct and inverse FFT.
Besides this explicit use of 2Pi, there is an implicit division by 2Pi in every evaluation of e^ix, for argument range reduction.
If instead of using e-based exponentials one uses trigonometric functions with arguments measured in cycles, not in radians, then both the explicit use of 2Pi and its implicit uses are eliminated. The explicit use of 2Pi comes from computing an average value over a period, by integration followed by division by the period length, so when the period is 1 the constant disappears. When the function argument is measured in cycles, argument range reduction no longer needs a multiplication with the inverse of 2Pi, it is done by just taking the fractional part of the argument.
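As an illustration, here is a naive DFT where the twiddle arguments are written as pure fractions of a turn, with 2*Pi confined to a single helper (a toy sketch, not how FFT libraries are actually structured):

```python
import cmath

def turn(t: float) -> complex:
    # unit-circle point after t turns; the only place 2*pi appears
    return cmath.exp(2j * cmath.pi * t)

def dft_cycles(x):
    # naive O(N^2) DFT; the twiddle argument -k*n/N is a fraction of a cycle
    N = len(x)
    return [sum(x[n] * turn(-k * n / N) for n in range(N)) for k in range(N)]

X = dft_cycles([1, 1, 1, 1])
assert abs(X[0] - 4) < 1e-12              # constant input -> spike in bin 0
assert all(abs(v) < 1e-12 for v in X[1:])
print("ok")
```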
>I am sorry, but comments like this are caused by the naivety of not knowing how the function evaluations are actually implemented.
I am sorry, but comments like this are caused by the naivety of not knowing a single thing about mathematics.
Do you not understand that mathematics is not just about implementation, but about forming models of reality? The idea of trying to model a physical system while pretending that e.g. the solution of the differential equations x'=x does not matter is just idiotic.
The idea that just because some implementation can avoid a certain constant, that this constant is irrelevant is immensely dumb and tells me that you lack basic mathematical education.
I'm guessing the original commenter hasn't taken complex analysis, or has some other oriented viewpoint into geometry that gives them satisfaction, but these expressions are one of the most incredible and useful tools in all of mathematics (IMO). Hadn't seen another comment reinforcing this, so thank you for dropping these.
Cauchy path integration feels like a cheat code once you fully imbibe it.
It got me through many problems that involve seemingly impossible-to-memorize identities, and re-derivation of complex relations becomes essentially trivial.
Complex exponentials and complex logarithms are useful in some symbolic computations, those involving formulae for derivatives or primitives, and this is indeed the only application where the use of e^x and natural logarithm is worthwhile.
However, whenever your symbolic computation produces a mathematical model that will be used for numeric computations, i.e. in a computer program, it is more efficient to replace all e^x exponentials and natural logarithms with 2^x exponentials and binary logarithms, instead of retaining the complex exponentials and logarithms and evaluating them directly.
At the same time, it is also preferable to replace the trigonometric functions of arguments measured in radians with trigonometric functions of arguments measured in cycles (i.e. functions of 2*Pi*x).
This replacement eliminates the computations needed for argument range reduction that otherwise have to be made at each function evaluation, wasting time and reducing the accuracy of the results.
Even when you use the exponential e^x and the hyperbolic logarithm a.k.a. natural logarithm (which are useful only in symbolic computations and are inferior for any numeric computation), you never need to know the value of "e". The value itself is not needed for anything. When evaluating e^x or the hyperbolic logarithm you need only ln 2 or its inverse, in order to reduce the argument of the functions to a range where a polynomial approximation can be used to compute the function.
Moreover, you can replace any use of e^x with the use of 2^x, which inserts ln(2) constants in various places (but removes ln 2 from the evaluations of exponentials and logarithms, which results in a net gain).
If you use only 2^x, you must know that its derivative is ln(2) * 2^x, and knowing this is enough to get rid of "e" anywhere. Even in differentiation formulae, in actual applications most of the multiplications by ln 2 can be absorbed into multiplications by other constants, as you normally do not differentiate plain 2^x expressions but 2^(a*x), where you can compute ln(2)*a at compile time.
You start with the formula for the exponential of an imaginary argument, but there the use of "e" is just a conventional notation. The transcendental number "e" is never used in the evaluation of that formula, nor is any number produced by computing exponentials or logarithms of real numbers involved in it.
The meaning of that formula is that if you take the expansion series of the exponential function and you replace in it the argument with an imaginary argument you obtain the expansion series for the corresponding trigonometric functions. The number "e" is nowhere involved in this.
Moreover, I consider that it is far more useful to write that formula in a different way, without any "e":
1^x = cos(2Pi*x) + i * sin(2Pi*x)
This gives the relation between the trigonometric functions with arguments measured in cycles and the unary exponential, whose argument is a real number and whose value is a complex number of absolute value equal to 1, and which describes the unit circle in the complex plane, for increasing arguments.
This formula appears more complex only because of using the traditional notation. If you call cos1 and sin1 the functions of period 1, then the formula becomes:
1^x = cos1(x) + i * sin1(x)
The unary exponential may appear weirder, but only because people are accustomed from school to the exponential of imaginary arguments instead of it. Neither of these 2 functions is weirder than the other, and the use of the unary exponential is frequently simpler than that of the exponential of imaginary arguments, while also being more accurate (no rounding errors from argument range reduction) and faster to compute.
I want to add that any formula that contains exponentials of real arguments, e^x, and/or exponentials of imaginary arguments, e^(i*x), can be rewritten by using only binary exponentials, 2^x, and/or unary exponentials, 1^x, both having only real arguments.
With this substitution, some formulae become simpler and others become more complicated, but, when also considering the cost of the function evaluations, an overall greater simplicity is achieved.
In comparison with the "e" based exponentials, the binary exponential and the unary exponential and their inverses have the advantage that there are no rounding errors caused by argument range reduction, so they are preferable especially when the exponents can be very big or very small, while the "e" based exponentials can work fine for exponents guaranteed to be close to 0.
They have instructions for memcpy/memmove (i.e. rep movs), not for strcpy.
They also have instructions for strlen (i.e. rep scasb), so you could implement strcpy with very few instructions by finding the length and then copying the string.
Executing first strlen, then validating the sizes and then copying with memcpy if possible is actually the recommended way of implementing a replacement for strcpy, including in the parent article.
On modern Intel/AMD CPUs, "rep movs" is usually the optimal way to implement memcpy above some threshold of data size, e.g. on older AMD Zen 3 CPUs the threshold was 2 kB. I have not tested more recent CPUs to see if the threshold has diminished.
On the old AMD Zen 3 there was also a certain size range above 2 kB, at sizes comparable with the L3 cache memory, where their implementation interacted badly with the cache and using "non-temporal" vector register transfers outperformed "rep movs". Despite that performance bug for certain string lengths, using "rep movs" for any size above 2 kB gave good enough performance.
No, it's an instruction for memcpy. You still need to compute the string length first, which means touching every byte individually because you can't use SIMD due to alignment assumptions (or lack thereof) and the potential to touch uninitialized or unmapped memory (when the string crosses a page boundary).