Simple recursive raise to power

Ackshaey Singh
Difficulty: Easy
Time Complexity: O(n)
Space Complexity: O(n)
Tags: Number, Recursion
Firecode

You're warming up for a phone screen and the interviewer asks you to implement exponentiation from scratch, without reaching for Math.pow or the ^ operator. It sounds almost too simple, but this problem is a perfect litmus test for whether you truly understand recursion. Can you identify the base case, the reduction step, and reason about the call stack? Those skills carry directly into harder recursive problems like tree traversals, backtracking, and divide-and-conquer algorithms.

TL;DR

Raise a number to a power recursively by multiplying base by the result of simplePow(base, exponent - 1), with a base case that returns 1 when the exponent reaches 0. This runs in O(n) time and O(n) space where n is the exponent, since each recursive call adds one frame to the stack. The solution handles negative bases correctly because multiplication preserves sign through the recursion.

Why This Problem Matters

Exponentiation is one of the cleanest examples of a recursive definition mapping directly to code. The mathematical identity base^n = base * base^(n-1) with base^0 = 1 translates almost verbatim into a function. If you can implement this cleanly, you demonstrate that you understand the three pillars of recursion: a base case that stops the calls, a reduction step that moves toward it, and trust that the recursive call returns the correct sub-result.

Understanding the Problem

Let's look at what we need to build:

Given: Two integers, base and exponent
Return: base raised to the power of exponent
Constraints:

  • Use recursion with at most linear time and space
  • No ^ operator or Math.pow
  • -10 ≤ base ≤ 10, 0 ≤ exponent ≤ 10

Here are some examples:

simplePow(2, 3)  -> 8
simplePow(3, 3)  -> 27
simplePow(-3, 3) -> -27
simplePow(5, 0)  -> 1
simplePow(0, 5)  -> 0

The last two cases are worth noting. Any number raised to the power of 0 is 1, and 0 raised to any positive power is 0. Both fall out naturally from the recursive structure without special handling.

Edge Cases to Consider

  1. Exponent is 0: Always returns 1 (the base case)
  2. Base is 0: Returns 0 for any positive exponent
  3. Negative base: Works correctly since multiplication handles sign
  4. Large results: 10^10 produces 10,000,000,000, which overflows a 32-bit int and requires a long in Java

Solution Approach

The key insight is that exponentiation has a naturally recursive definition. Think about what 2^4 actually means:

2^4 = 2 * 2^3
    = 2 * (2 * 2^2)
    = 2 * (2 * (2 * 2^1))
    = 2 * (2 * (2 * (2 * 2^0)))
    = 2 * (2 * (2 * (2 * 1)))

Each line peels off one multiplication and delegates the rest to a smaller version of the same problem. That's recursion in its purest form.

The Recursion Tree

Consider how simplePow(2, 4) unfolds. Each recursive call hands a smaller exponent to the next, and return values propagate back up the chain as the stack unwinds.

The calls go down the chain until the exponent hits 0. Then the base case returns 1, and each frame multiplies by base as it unwinds back to the original caller.
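
To see the descent and unwinding concretely, here is a sketch of an instrumented variant that logs each call and each return. TracedPow and its depth parameter are illustrative additions, not part of the actual solution:

```java
public class TracedPow {
  // Instrumented copy of simplePow: logs each call and each return,
  // indented by recursion depth, so the unwinding is visible.
  public static long simplePow(int base, int exponent, int depth) {
    String indent = "  ".repeat(depth);
    System.out.println(indent + "call simplePow(" + base + ", " + exponent + ")");
    if (exponent == 0) {
      System.out.println(indent + "return 1");
      return 1L;
    }
    long result = (long) base * simplePow(base, exponent - 1, depth + 1);
    System.out.println(indent + "return " + result);
    return result;
  }

  public static void main(String[] args) {
    // Descends from exponent 4 to 0, then multiplies on the way back up
    System.out.println("result = " + simplePow(2, 4, 0));
  }
}
```

Running this prints four nested calls, the base-case return of 1, and then the returns 2, 4, 8, 16 as each frame multiplies by the base on the way out.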

Building the Algorithm

Two pieces are all we need:

  1. Base case: When exponent == 0, return 1. This stops the recursion and provides the multiplicative identity that starts the chain of multiplications on the way back up.

  2. Recursive case: Return base * simplePow(base, exponent - 1). This multiplies base by whatever the smaller subproblem returns, shrinking the exponent by 1 each time.

Prefer a different language? Jump to solutions in other languages.

Implementation

public class Solution {
  public long simplePow(int base, int exponent) {
    // Base case: any number raised to the power of 0 is 1
    if (exponent == 0) {
      return 1L;
    }

    // Recursive case: base^n = base * base^(n-1)
    return (long) base * simplePow(base, exponent - 1);
  }
}

The long return type and cast matter here. Because simplePow is declared to return long, the multiplication base * simplePow(base, exponent - 1) is already performed in 64-bit arithmetic; the explicit cast of base makes that intent visible and protects the code if the return type were ever narrowed to int. In an all-int version, 10 * simplePow(10, 9) would silently overflow, since 10^10 exceeds the 32-bit range.
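
To make the overflow risk concrete, compare an all-int chain with a long chain. intPow and longPow are illustrative names; the wrapped value follows from 32-bit two's-complement arithmetic:

```java
public class OverflowDemo {
  // All-int variant: the multiplication chain stays in 32-bit arithmetic
  static int intPow(int base, int exponent) {
    if (exponent == 0) return 1;
    return base * intPow(base, exponent - 1);
  }

  // long variant: widening base makes every multiplication 64-bit
  static long longPow(int base, int exponent) {
    if (exponent == 0) return 1L;
    return (long) base * longPow(base, exponent - 1);
  }

  public static void main(String[] args) {
    System.out.println(intPow(10, 10));   // wraps around: 1410065408
    System.out.println(longPow(10, 10));  // correct: 10000000000
  }
}
```

The int version is fine through 10^9 but wraps on the final multiplication, with no exception thrown — which is exactly why the silent overflow is easy to miss.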

Tracing Through an Example

Let's walk through simplePow(3, 3) step by step:

The call chain descends until the exponent reaches 0; the base case's return value of 1 then flows back up, getting multiplied by 3 at each level: 1, 3, 9, 27.

Call 1: simplePow(3, 3) - exponent is not 0, so return 3 * simplePow(3, 2)
Call 2: simplePow(3, 2) - exponent is not 0, so return 3 * simplePow(3, 1)
Call 3: simplePow(3, 1) - exponent is not 0, so return 3 * simplePow(3, 0)
Call 4: simplePow(3, 0) - exponent IS 0, return 1

Now the stack unwinds:

  • Call 3 returns 3 * 1 = 3
  • Call 2 returns 3 * 3 = 9
  • Call 1 returns 3 * 9 = 27

Handling Negative Bases

Negative bases work without any special logic. The sign simply flips with each multiplication:

For simplePow(-3, 3): the base case returns 1, then we get -3 * 1 = -3, then -3 * -3 = 9, then -3 * 9 = -27. Odd exponents produce negative results and even exponents produce positive results, exactly as expected.

Complexity Analysis

Time Complexity: O(n)

Each recursive call does constant work (one comparison and one multiplication) and decrements the exponent by 1. We make exactly n calls where n is the exponent, so the total work is O(n).

Space Complexity: O(n)

Each recursive call adds a frame to the call stack, and we go n levels deep before hitting the base case. All n frames exist on the stack simultaneously at peak depth, giving O(n) space. An iterative version could achieve O(1) space, and an exponentiation-by-squaring approach could achieve O(log n) for both time and space.

Common Pitfalls

  1. Keeping the arithmetic in int in Java: If the return type and the multiplication both stay in int, large results like 10^10 silently wrap around. Declare the return type long and cast base to long so the entire multiplication chain runs in 64-bit arithmetic.

  2. Wrong base case return value: Returning 0 instead of 1 when exponent == 0. This causes every result to be 0, since everything eventually gets multiplied by the base case value.

  3. Not decrementing the exponent: Writing simplePow(base, exponent) instead of simplePow(base, exponent - 1) in the recursive call. This creates infinite recursion and a stack overflow.

  4. Handling exponent 0 after multiplication: Checking the base case after doing the multiplication rather than before. The base case must be the first thing checked.

Interview Tips

  1. Start by stating the recurrence: "base to the n equals base times base to the n minus 1, and anything to the 0 is 1." This shows you see the recursive structure immediately.

  2. Mention the iterative alternative: After implementing the recursive version, note that a simple loop with an accumulator would use O(1) space. This shows awareness of the tradeoff.

  3. Bring up exponentiation by squaring: If the interviewer asks about optimization, explain that base^n = (base^(n/2))^2 reduces the problem to O(log n). This is often a follow-up question.

  4. Watch for overflow: In Java, proactively mention the long cast. In Python, integers have arbitrary precision so this is not an issue.
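
If the squaring follow-up from tip 3 comes up, a minimal sketch looks like this (fastPow is a hypothetical name; the linear simplePow above is unchanged). Each call halves the exponent via base^n = (base^(n/2))^2, with one extra factor of base when n is odd:

```java
public class FastPow {
  // Exponentiation by squaring: O(log n) calls instead of O(n)
  public static long fastPow(int base, int exponent) {
    if (exponent == 0) {
      return 1L;
    }
    long half = fastPow(base, exponent / 2);   // solve the half-size problem once
    long squared = half * half;                // reuse it: (base^(n/2))^2
    return (exponent % 2 == 0) ? squared : base * squared;
  }

  public static void main(String[] args) {
    System.out.println(fastPow(2, 10));  // 1024
    System.out.println(fastPow(-3, 3));  // -27
  }
}
```

The key difference from the linear version: the half-size result is computed once and squared, rather than making two recursive calls, which is what keeps the call count logarithmic.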

Key Takeaways

  • The mathematical identity base^n = base * base^(n-1) with base^0 = 1 translates directly into a recursive function with a base case and a single recursive call.
  • The call stack grows to depth n before unwinding, giving both O(n) time and O(n) space where n is the exponent value.
  • In Java, declare the return type long and cast base to long before multiplying so the whole chain runs in 64-bit arithmetic, preventing silent overflow.
  • Negative bases require no special handling since multiplication naturally preserves and flips sign through the recursive chain.
  • This linear approach is a stepping stone to the O(log n) exponentiation-by-squaring technique, which halves the exponent at each step instead of decrementing by 1.

Practice Makes Perfect

Once you have this pattern down, try these related problems to strengthen your recursion skills:

  • Nth Fibonacci number with recursion (another classic base-case-plus-reduction problem)
  • Binary search (recursion with halving instead of decrementing)
  • Palindrome tester (recursion on string boundaries)

This problem and hundreds of others are available on Firecode, where over 50,000 engineers have prepared for technical interviews and landed roles at top tech companies. Building a strong recursion foundation here will pay off when you hit harder problems like tree traversals and dynamic programming.

Solutions in Other Languages

Python

class Solution:
    def simple_pow(self, base: int, exponent: int) -> int:
        if exponent == 0:
            return 1
        return base * self.simple_pow(base, exponent - 1)

Python handles arbitrary-precision integers natively, so there is no overflow concern here.

JavaScript

class Solution {
  simplePow(base, exponent) {
    if (exponent === 0) return 1;
    return base * this.simplePow(base, exponent - 1);
  }
}

TypeScript

class Solution {
  simplePow(base: number, exponent: number): number {
    if (exponent === 0) return 1;
    return base * this.simplePow(base, exponent - 1);
  }
}

C++

class Solution {
public:
  long long simplePow(int base, int exponent) {
    if (exponent == 0) {
      return 1LL;
    }
    return static_cast<long long>(base) * simplePow(base, exponent - 1);
  }
};

C++ returns long long and widens base before multiplying, mirroring the Java cast, since 10^10 overflows a 32-bit int.

Go

The Go solution was not available for this problem at the time of writing.

Scala

class Solution {
  def simplePow(base: Int, exponent: Int): Long = {
    if (exponent == 0) return 1L
    base.toLong * simplePow(base, exponent - 1)
  }
}

Scala uses toLong on the base to prevent integer overflow, similar to the Java cast.

Kotlin

class Solution {
  fun simplePow(base: Int, exponent: Int): Long {
    if (exponent == 0) {
      return 1L
    }
    return base.toLong() * simplePow(base, exponent - 1)
  }
}

Swift

class Solution {
    func simplePow(_ base: Int, _ exponent: Int) -> Int64 {
        if exponent == 0 {
            return 1
        }
        return Int64(base) * simplePow(base, exponent - 1)
    }
}

Rust

impl Solution {
    pub fn simple_pow(&self, base: i32, exponent: i32) -> i64 {
        if exponent == 0 {
            return 1;
        }
        base as i64 * self.simple_pow(base, exponent - 1)
    }
}

Ruby

class Solution
  def simple_pow(base, exponent)
    return 1 if exponent == 0
    base * simple_pow(base, exponent - 1)
  end
end

Like Python, Ruby has arbitrary-precision integers so overflow is not a concern.

Dart

class Solution {
  int simplePow(int base, int exponent) {
    if (exponent == 0) {
      return 1;
    }
    return base * simplePow(base, exponent - 1);
  }
}

C#

public class Solution {
    public long simplePow(int baseNum, int exponent) {
        if (exponent == 0) {
            return 1L;
        }
        return (long) baseNum * simplePow(baseNum, exponent - 1);
    }
}

Note that C# uses baseNum as the parameter name since base is a reserved keyword.

PHP

class Solution {
    public function simplePow(int $base, int $exponent): int {
        if ($exponent == 0) {
            return 1;
        }
        return $base * $this->simplePow($base, $exponent - 1);
    }
}

Frequently Asked Questions

Why does the recursive power function use O(n) space?
Each recursive call adds a frame to the call stack. Since we make one call per decrement of the exponent, the stack grows to depth n before the base case returns and frames start popping. This gives O(n) space usage proportional to the exponent value.
How is this different from the O(log n) exponentiation by squaring approach?
The linear approach multiplies the base once per recursive call, reducing the exponent by 1 each time. Exponentiation by squaring halves the exponent at each step by using the identity base^n = (base^(n/2))^2, achieving O(log n) time and space. The linear version is simpler to implement but less efficient for large exponents.
Why do we cast to long in the Java solution?
The base and exponent are int parameters, but repeated multiplication can overflow the 32-bit int range. For example, 10^9 is 1,000,000,000, which fits in an int, but 10^10 does not. Declaring the return type long and casting base to long keeps every multiplication in 64-bit arithmetic, preventing silent overflow across the problem's constraint range.
What happens when the base is 0?
When base is 0 and exponent is positive, the function correctly returns 0 because every recursive step multiplies by 0. When both base and exponent are 0, the function returns 1 because it hits the base case (exponent == 0) before any multiplication occurs. Mathematically 0^0 is debated, but returning 1 is the common convention in computing.
Can this approach handle negative exponents?
Not as written. The problem constrains the exponent to non-negative values (0 to 10). Supporting negative exponents would require returning a floating-point result (1.0 / base^|exponent|) since negative exponents produce fractions. The recursive structure would remain the same, just with an additional check at the top.
Why is recursion a natural fit for exponentiation?
Exponentiation has a clear recursive definition: base^n = base * base^(n-1), with base^0 = 1. This maps directly to a recursive function with a base case and a reduction step. The mathematical definition itself is recursive, making the code a near-literal translation of the formula.
What is the time complexity of this recursive approach?
The time complexity is O(n) where n is the exponent. Each recursive call performs one multiplication and decrements the exponent by 1, so we make exactly n calls before reaching the base case. Each call does O(1) work, giving O(n) total.
How would you convert this recursive solution to an iterative one?
Replace the recursion with a loop that multiplies an accumulator variable by the base, exponent-many times. Initialize result to 1, loop from 0 to exponent, multiply result by base each iteration, and return result. This preserves O(n) time but reduces space to O(1) since no call stack is needed.
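
The conversion described in the answer above can be sketched as follows; simplePowIterative is an illustrative name:

```java
public class IterativePow {
  // Same O(n) multiplications as the recursive version, but an
  // accumulator variable replaces the call stack: O(1) space.
  public static long simplePowIterative(int base, int exponent) {
    long result = 1L;                       // multiplicative identity, as in the base case
    for (int i = 0; i < exponent; i++) {
      result *= base;                       // one multiplication per unit of exponent
    }
    return result;
  }

  public static void main(String[] args) {
    System.out.println(simplePowIterative(3, 3));   // 27
    System.out.println(simplePowIterative(5, 0));   // 1
    System.out.println(simplePowIterative(-3, 3));  // -27
  }
}
```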