
How about computing any Fibonacci number in constant time? This can be done using the eigendecomposition trick for computing powers of matrices: http://minireference.com/static/excerpts/fibonacci_in_consta...

Caveat: this will fail for large N due to the limited precision of floats... it works fine on paper though.

[update] caveat 2: assumes a model of computation where ϕ^N can be computed in constant time, which only works on paper.
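A minimal Python sketch of the float-based closed form (function names are mine), illustrating both the trick and the precision caveat:

```python
import math

def fib_binet(n):
    """Closed-form (Binet) Fibonacci using floats: F(n) = round(phi**n / sqrt(5))."""
    phi = (1 + math.sqrt(5)) / 2
    return round(phi ** n / math.sqrt(5))

def fib_exact(n):
    """Exact iterative reference implementation."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# Agrees for small n, but drifts once F(n) outgrows double precision:
print(fib_binet(10), fib_exact(10))      # both 55
print(fib_binet(100) == fib_exact(100))  # False: float error has crept in
```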



Regarding the caveat, computing ϕ^N is really equivalent to computing a matrix power M^N. Recall that ϕ=(1+√5) (I dropped the 1/2 for convenience.) Suppose we represent numbers like x+y√5 as [x y]. Then multiplication by ϕ corresponds to multiplication by the matrix

ϕ_mult = [[1 5] [1 1]]

This is because ϕ(x+y√5)=(x+5y)+(x+y)√5, and

[[1 5] [1 1]] [x y] = [x+5y x+y]

Now, to compute ϕ^N, we just have to compute [x y] = ϕ_mult^N[1 0] (since [1 0] is the representation of 1=1+0√5). Then ϕ^N=x+y√5. This is all exact, no floating point arithmetic!
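The exact computation above can be sketched in Python with integer-only 2x2 matrix powers (helper names like `phi_power` are mine, not from the comment):

```python
def mat_mult(A, B):
    """Multiply two 2x2 integer matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_pow(M, n):
    """Square-and-multiply: M^n in O(log n) matrix multiplications."""
    R = [[1, 0], [0, 1]]  # identity
    while n:
        if n & 1:
            R = mat_mult(R, M)
        M = mat_mult(M, M)
        n >>= 1
    return R

PHI_MULT = [[1, 5], [1, 1]]  # multiplication by 1 + sqrt(5) in the basis {1, sqrt(5)}

def phi_power(n):
    """Return (x, y) with (1 + sqrt(5))**n == x + y*sqrt(5), exactly."""
    M = mat_pow(PHI_MULT, n)
    return M[0][0], M[1][0]  # first column = M applied to [1 0]

print(phi_power(2))  # (6, 2): (1 + sqrt(5))^2 = 6 + 2*sqrt(5)
```

Only integer arithmetic throughout, so the result is exact for any n.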

It's actually just an example of an algebraic number field [1] -- the idea is that "algebraic numbers" (i.e., roots of polynomials) can be equivalently thought of as vectors and matrices. The simplest example, which everyone is familiar with, is the complex numbers, where i=√(-1). In this case, [x y] represents x+y*i, and multiplication by i corresponds to the matrix i_mult=[[0 -1] [1 0]] (so that i_mult [x y] = [-y x], matching i(x+y*i) = -y+x*i, with the same column-vector convention as above). Then i_mult^2 = -[[1 0] [0 1]], in other words, multiplication by -1!

[1] http://en.wikipedia.org/wiki/Algebraic_number_field
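The complex-number case can be checked in a few lines of Python (helper names are mine):

```python
def cmul_matrix(x, y):
    """2x2 matrix representing multiplication by x + y*i on column vectors [a b]."""
    return [[x, -y], [y, x]]

def mat_mult(A, B):
    """Multiply two 2x2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

i_mult = cmul_matrix(0, 1)  # multiplication by i
# i * i = -1: squaring the "i" matrix gives the matrix for multiplication by -1.
assert mat_mult(i_mult, i_mult) == cmul_matrix(-1, 0)
```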


Raising numbers to large powers is still O(log n), though. Even if it's a single term when written out as an equation.
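For concreteness, the standard square-and-multiply loop behind the O(log n) bound (a generic sketch, not specific to ϕ):

```python
def ipow(base, exp):
    """Raise base to a non-negative integer power using repeated squaring.

    Performs O(log exp) multiplications instead of exp - 1.
    """
    result = 1
    while exp:
        if exp & 1:       # current binary digit of exp is 1
            result *= base
        base *= base      # square for the next binary digit
        exp >>= 1
    return result
```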


Thanks for pointing this out. I updated the comment. It seems O(log n) is as low as it will go, but I still find it a neat (theoretical) trick.


Where 'large' is about 80.



