The fundamental cost hierarchy (bit shifting < addition < multiplication < division) holds in virtually every language. Even so, it is worth knowing how large the differences between operations actually are. Furthermore, most of the basic operations can be written in several different flavors. The trials below compare basic operation speed, alternative syntax, and mathematical substitutions.
Comparing verbose notation to assignment notation, 1,000,000 computations per trial. Run the benchmarks: verbose notation, assignment notation
| Computer | Division | Relative Speed | Average | Std. Dev. |
|---|---|---|---|---|
| Computer A | `a = a / b` | — | 0.1791 | 0.0094 |
| | `a /= b` | — | 0.1762 | 0.0122 |

| Computer | Multiplication | Relative Speed | Average | Std. Dev. |
|---|---|---|---|---|
| Computer A | `a = a * b` | — | 0.1645 | 0.0080 |
| | `a *= b` | — | 0.1603 | 0.0071 |

| Computer | Addition | Relative Speed | Average | Std. Dev. |
|---|---|---|---|---|
| Computer A | `a = a + b` | — | 0.1571 | 0.0040 |
| | `a += b` | — | 0.1603 | 0.0092 |
I always assumed that the shorthand assignment would be faster since it has one less variable reference. While there is a slight difference in speed, it is less than the standard deviation, and all trials have the same maximum. This was true across all computers. These trials are 10 times slower than the following ones because the loop calls a function to avoid the boundary issues that would come from dividing or multiplying a number 1,000,000 times. Division is costly because of the decimal remainders calculated on every computation; addition and multiplication similarly get slower for larger numbers. All of these calculations used the same 2-digit numbers to keep operation cost constant and isolate the speed differences between syntaxes. Conceivably, there could be a real difference in speed that is dwarfed by the expense of the function call, but that difference would be small enough to treat as insignificant.
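A minimal sketch of how one of these trials could be timed in ActionScript 3; the `reset()` helper, the operand values, and the `getTimer()` harness are assumptions for illustration, not the author's actual benchmark code:

```actionscript
import flash.utils.getTimer;

var a:Number = 42;
var b:Number = 17;

// Re-seed the operand so 1,000,000 divisions never drift toward
// underflow; this extra function call dominates the trial's cost.
function reset():void {
    a = 42;
}

var start:int = getTimer();
for (var i:int = 0; i < 1000000; i++) {
    reset();
    a = a / b;   // verbose notation; the other trial uses: a /= b;
}
trace((getTimer() - start) / 1000, "seconds");
```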
Comparing the speed of division, multiplication and bit shifting, 10,000,000 computations per trial. Run the benchmarks: operations with integers, operations with numbers, operations using numbers and casting
| Computer | Division | Relative Speed | Average | Std. Dev. |
|---|---|---|---|---|
| Computer A | `a = int(b / c)` | + 12% | 0.1957 | 0.0161 |
| | `a = b / c` (integers) | + 23% | 0.2153 | 0.0096 |
| | `a = b / c` (Numbers) | — | 0.1752 | 0.0064 |
| Computer B | `a = int(b / c)` | + 285% | 0.3005 | 0.0156 |
| | `a = b / c` (integers) | — | 0.078 | 0 |
| | `a = b / c` (Numbers) | + 8% | 0.0843 | 0.0158 |
| Computer C | `a = int(b / c)` | + 234% | 0.314 | 0.0037 |
| | `a = b / c` (integers) | + 334% | 0.4081 | 0.0409 |
| | `a = b / c` (Numbers) | — | 0.094 | 0 |
| Computer D | `a = int(b / c)` | + 225% | 0.103 | 0.0073 |
| | `a = b / c` (integers) | + 385% | 0.154 | 0.0055 |
| | `a = b / c` (Numbers) | — | 0.0317 | 0.0005 |

| Computer | Multiplication | Relative Speed | Average | Std. Dev. |
|---|---|---|---|---|
| Computer A | `a = int(b * c)` | + 275% | 0.1477 | 0.0198 |
| | `a = b * c` (integers) | — | 0.0395 | 0.008 |
| | `a = b * c` (Numbers) | + 12% | 0.0441 | 0.0151 |
| Computer B | `a = int(b * c)` | + 460% | 0.1748 | 0.0118 |
| | `a = b * c` (integers) | — | 0.0312 | 0.0004 |
| | `a = b * c` (Numbers) | + 126% | 0.0831 | 0.0156 |
| Computer C | `a = int(b * c)` | + 210% | 0.2915 | 0.0159 |
| | `a = b * c` (integers) | — | 0.0936 | 0.0005 |
| | `a = b * c` (Numbers) | — | 0.094 | 0 |
| Computer D | `a = int(b * c)` | + 146% | 0.0780 | 0 |
| | `a = b * c` (integers) | — | 0.269 | 0.0090 |
| | `a = b * c` (Numbers) | + 18% | 0.0317 | 0.0004 |

| Computer | Bit Shifting | Relative Speed | Average | Std. Dev. |
|---|---|---|---|---|
| Computer A | `a = int(b) << int(c)` | + 645% | 0.2937 | 0.0308 |
| | `a = b << c` (integers) | — | 0.0395 | 0.008 |
| | `a = b << c` (Numbers) | + 1,860% | 0.6954 | 0.0097 |
Flash holds to the basic speed hierarchy of bit shifting < addition < multiplication < division, and bit shifting, addition, and multiplication are much closer in speed than I anticipated. Bit shifting expects integers, so its poor results with the Number type are expected; these results were consistent across all computers. Computer A performed the worst when using Number types for bit shifting; however, even with casting, all computers were at least 5 times (500%) slower with Numbers. While it is not as fast as using integer types to begin with, casting is also a prime example of how you can save processor time. There is a benchmark example working with arrays that shows casting an object to its current type has essentially no cost.
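The three shifting variants benchmarked above might look like this in ActionScript 3 (the variable names and values here are assumptions for illustration):

```actionscript
var bN:Number = 40;
var cN:Number = 2;
var bI:int = 40;
var cI:int = 2;
var a:int;

a = bN << cN;            // Number operands: converted on every shift, slowest
a = int(bN) << int(cN);  // explicit casts: recovers much of the lost speed
a = bI << cI;            // int operands from the start: fastest
```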
Addition promotion concerns and Iterator speed, 10,000,000 computations per trial. Run the benchmarks: adding, iterating
| Computer | Addition Promotion | Relative Speed | Average | Std. Dev. |
|---|---|---|---|---|
| Computer A | All Numbers | — | 0.0346 | 0.0062 |
| | All Integers | — | 0.0358 | 0.0068 |
| | Mixed | + 227% | 0.1146 | 0.0074 |

| Computer | Iterator | Relative Speed | Average | Std. Dev. |
|---|---|---|---|---|
| Computer A | Unsigned Integer | — | 0.0331 | 0.0054 |
| | Integer | — | 0.0321 | 0.0040 |
| | Number | + 260% | 0.1154 | 0.0078 |
Flash does this thing called ‘promotion’: when it is unsure what type to return, it defaults to the more complex type. This happens when you mix types in operations, which evidently includes the ++ operator. When Flash is unsure of the type, it takes a while to figure out what the object should be cast to, which is why casting is so important. Results were comparable across all computers: on mixed-type addition the percentage increase in time ranged from +200% to +500%, and iterating with a Number ranged from +20% to +260%.
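Promotion is easy to trigger by accident; a small illustration in ActionScript 3 (the names and counts here are assumptions, not the benchmark source):

```actionscript
var i:int = 1;
var n:Number = 2.0;

var a:int = i + 1;          // all ints: no promotion, fast
var b:Number = n + 2.0;     // all Numbers: no promotion, fast
var c:Number = i + n;       // mixed: the int is promoted to Number first, slow

// The same applies to loop counters:
for (var k:int = 0; k < 10000000; k++) { }     // int iterator: fast
for (var m:Number = 0; m < 10000000; m++) { }  // Number iterator: far slower in these trials
```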
Comparing Unary, Postfix, and Verbose statements, 10,000,000 computations per trial. Run the benchmark: All the ways you can add by 1
| Computer | Operation | Relative Speed | Average | Std. Dev. |
|---|---|---|---|---|
| Computer A | `++x` | — | 0.0363 | 0.0075 |
| | `x++` | — | 0.0374 | 0.0078 |
| | `x = x + 1` | — | 0.0427 | 0.0071 |
I had a professor who claimed ++x was faster than x++, and he appears to be correct, but not by enough to significantly matter.
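For reference, the three forms measured above side by side (variable name assumed):

```actionscript
var x:int = 0;

++x;        // prefix increment: fastest by a hair in these trials
x++;        // postfix increment
x = x + 1;  // verbose form: slowest, though the gap is within the noise
```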
Lessons Learned:
- Never divide unless you absolutely need to.
- If you are unsure what type an operation returns, cast.
- Stick to integer types and postfix for your iterators.
- Always type your variables; besides making code easier to read and debug, it is much faster.
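The last lesson is visible directly in code; this pair of declarations is a hypothetical illustration, not from the benchmarks:

```actionscript
var loose = 5;      // untyped: handled dynamically, slower and harder to debug
var tight:int = 5;  // typed: faster, type-checked, and self-documenting
```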
Be sensible: all of these trials are in the 10,000,000-computation range, and these are among the fastest operations any language can perform. Addition, multiplication, and bit shifting would probably need 100,000,000 or 1,000,000,000 computations per trial before a sizable speed difference appeared; relatively speaking, they are so fast that they just are not worth worrying about. Choosing the right data types and operations will speed up your programs, but these operations live in the subset of the fastest code. Unless you are doing an incredible volume of mathematical operations, your real target, the slowest-running code, is something else.
Continue to Data Structure Speed
This post is part of a series:
Introduction
Methodology
Operation Speed <– You are here
Data Structure Speed
Graphics Speed
Results
