Ask A Scientist

Mathematics Archive


Functional Equation



Question: Is there any way to approximate a solution f(x) of
f(f(x)) = g(x)?  I.e., if f(f(x)) = x^4, then f(x) = x^2?  I would like any
method for approximating a solution to f(f(x)) = e^x.
------------------------------------------------
Hey, this is a really interesting problem!  First off, there is no
unique solution to the problem as stated.  For example, even for the identity
under composition, g(x) = x, ANY f(x) that just transposes x values will work.
For example, -x or 1/x will do just as well as f(x) = x.  f(x) = x is unique,
however, under the requirement that f be monotonically increasing (which only
makes sense because g is monotonically increasing).  However, even monotonicity
is not enough to determine f uniquely if g increases too fast, and that is the
case for g(x) = exp(x).  I suspect that you need to require all derivatives to
be positive, or maybe something even stronger (any clues out there?).  This
non-uniqueness is
a little surprising, since there are "obvious" solutions in many cases.  For 
example, for g(x) = x^p, f(x) = x^sqrt(p) is the obvious solution.  For g(x) = 
exp(exp(x)),  f(x) = exp(x) is the obvious solution.  So, is there an obvious 
solution for g(x) = exp(x)?  There must be in some sense, and maybe the
condition on derivatives is the most sensible.

But I can construct a solution in an essentially arbitrary manner.  Note that
the solution must (for
large x) rise faster than any power, and yet slower than any exponential of a 
(positive) power of x.  So this f(x) is somewhere between power-law and 
exponential growth for large x.  For x close to 0, exp(x) is approximately 1 + 
x.  For g(x) = 1+x, the (obvious) solution is f(x) = 1/2 + x.  So for x in the 
range 0 to 1/2, define f(x) = 1/2 + x.  This then defines f(x) for all x!  To 
see that this is so, consider f(0) = 1/2.  Then f(f(0)) = exp(0) = 1 is 
satisfied because f(1/2) = 1.  Then f(1) = f(f(1/2)) = sqrt(e) is immediately 
defined, and so is f(y) in the interval: y in [1/2, 1] => f(y) = f(f(y - 1/2)) 
= exp(y - 1/2).  Then: y in [1, sqrt(e)] => f(y) = f(f(log(y) + 1/2)) = 
y*sqrt(e), so f(sqrt(e)) = e.  Then: y in [sqrt(e), e] => f(y) = 
f(f(y/sqrt(e))) = exp(y/sqrt(e)) so f(e) = e^sqrt(e).  Etc.  The result is two 
alternating sequences of intervals in which f is defined by (ever more 
complicated) combinations of elementary functions.  In the first set of 
intervals (successive exponentiation of [0, 1/2]), f(x) follows the sequence: 
x + 1/2,  sqrt(e)*x,  x^sqrt(e),  exp(log(x)^sqrt(e)), ..., or in general
(defining exp^(n)(x) to be exp(exp(...(exp(x))...)) with n exps, and
log^(n)(x) analogously), f(x) = exp^(n)((log^(n)(x))^sqrt(e)).  This
effectively bounds the function of interest from below by powers and faster 
rising functions.  In the second set of intervals (successive exponentiation 
of [1/2, 1]), f(x) follows the sequence: e^(x - 1/2),
e^(x/sqrt(e)), e^(x^(1/sqrt(e))), e^(e^(log(x)^(1/sqrt(e)))), or in general
exp^(n+1)((log^(n)(x))^(1/sqrt(e))), effectively bounding the function of
interest from above by exponentials of powers and more slowly rising
functions.  (Note that these general expressions work for negative n, realizing
that exp^(-n)(x) = log^(n)(x).)  I.e., I would guess that for large x,
exp^(n)((log^(n)(x))^sqrt(e)) < f(x) < exp^(n+1)((log^(n)(x))^(1/sqrt(e))).
In pseudo-code, f(x) can be evaluated as follows:

   shiftedx = x; nxr = 0; neg = 0;
   if (x < 0) {neg = 1; shiftedx = exp(x);}
   while (shiftedx > 1) {shiftedx = log(shiftedx); nxr++;}
   if (shiftedx < 0.5) {fshift = shiftedx + 0.5;}
   else {fshift = exp(shiftedx - 0.5);}
   if (neg == 1) {res = log(fshift);} else {res = fshift;}
   for (i = 0; i < nxr; i++) {res = exp(res);}
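The piecewise construction above can also be rendered as a short runnable
program.  Here is a hypothetical Python sketch (the function name half_exp is
my own), assuming only the base definition f(x) = x + 1/2 on [0, 1/2] and the
extension rule f(exp(z)) = exp(f(z)):

```python
import math

def half_exp(x):
    """One solution f of f(f(x)) = exp(x), built from f(x) = x + 1/2
    on [0, 1/2] and extended everywhere via f(exp(z)) = exp(f(z))."""
    if x < 0:
        # For negative arguments, use f(x) = log(f(exp(x))).
        return math.log(half_exp(math.exp(x)))
    # Take logs until the argument lands in the base interval [0, 1].
    n = 0
    while x > 1:
        x = math.log(x)
        n += 1
    # Base definition on [0, 1].
    res = x + 0.5 if x < 0.5 else math.exp(x - 0.5)
    # Undo the logs: f(exp^n(y)) = exp^n(f(y)).
    for _ in range(n):
        res = math.exp(res)
    return res

# The defining equation f(f(x)) = exp(x) holds at any point:
for x in (-2.0, 0.3, 1.0, 2.5):
    assert abs(half_exp(half_exp(x)) - math.exp(x)) < 1e-9
```

As argued above, this f grows faster than any power of x but more slowly than
any exponential of a positive power of x.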

I looked this up, and it turns out that the idea of "fractional iteration" is 
quite an old one, and has been solved somewhat satisfactorily.  I could not 
find any explicit solution for g(x) = exp(x), however, so I will comment on 
that later.  The general method of solution (when it can be done) is to try to 
find a solution to "Abel's equation": phi(g(x)) = phi(x) + 1.  Then the 
fractional iterate f(x) = g^k(x) (k = 1/2 in our example, but it could be 
anything) is given by:  f(x) = phi^-1 ( phi(x) + k).  The solution given above 
for g(x) = exp(x) corresponds to taking phi(x) = 1 + x in the interval x = 0 
to 1, and applying Abel's equation to find phi(x) everywhere else.  This turns 
out to be an excellent approximation to the true (infinitely differentiable) 
phi(x).  In fact, we can satisfy n derivatives of Abel's equation at x = 0 by 
expressing phi as an (n+2)th-order polynomial between 0 and 1, and this appears
to converge to a power series:  phi(x) approx = 1 + 0.9159456 x + 0.2493525 
x^2 -0.11046 x^3  + ... Unfortunately this power series seems to have only a 
finite radius of convergence, but it does include the endpoints 0 and 1.  The 
straight line 1+x is very close to this function.
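The Abel-equation recipe can be sketched in the same way.  This is a minimal
Python sketch (the names abel_phi, abel_phi_inv, and frac_exp are mine),
assuming the crude approximation phi(x) = 1 + x on [0, 1] and extending both
directions with phi(exp(x)) = phi(x) + 1:

```python
import math

def abel_phi(x):
    # Approximate Abel function for g = exp: phi(x) = 1 + x on [0, 1],
    # extended both ways with phi(exp(x)) = phi(x) + 1.
    n = 0
    while x > 1:
        x = math.log(x)
        n += 1
    while x < 0:
        x = math.exp(x)
        n -= 1
    return 1.0 + x + n

def abel_phi_inv(y):
    # Inverse of abel_phi, using phi^-1(y) = log(phi^-1(y + 1)) for y < 1.
    if y < 1:
        return math.log(abel_phi_inv(y + 1.0))
    n = 0
    while y > 2:
        y -= 1.0
        n += 1
    x = y - 1.0          # phi^-1 on [1, 2] is just y - 1
    for _ in range(n):
        x = math.exp(x)
    return x

def frac_exp(x, k):
    # k-th fractional iterate of exp: k = 1 gives exp, k = 1/2 the half-iterate.
    return abel_phi_inv(abel_phi(x) + k)

f = lambda x: frac_exp(x, 0.5)
assert abs(f(f(2.0)) - math.exp(2.0)) < 1e-9
assert abs(frac_exp(1.3, 1.0) - math.exp(1.3)) < 1e-9
```

Replacing the line 1 + x on [0, 1] with the polynomial approximation quoted
above (1 + 0.9159456 x + 0.2493525 x^2 - 0.11046 x^3 + ...) would make the
resulting f correspondingly smoother at the interval endpoints.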
Arthur Smith
=========================================================




NEWTON is an electronic community for Science, Math, and Computer Science K-12 Educators.
Argonne National Laboratory, Division of Educational Programs, Harold Myron, Ph.D., Division Director.