# Two Summation

We want to write a function that takes a non-empty array of distinct integers and an integer representing a target sum. If any two numbers in the input array sum up to the target sum, the function should return them in an array, in any order. If no two numbers sum up to the target sum, the function should return an empty array.

# First Approach

My first approach would be to go through the array of integers in a brute force manner.
Suppose we have an array of numbers `[ 1, 2, 3 ]`. We need to figure out all the
*two-element combinations* it can have. If we think about it,
we would probably end up with $(1, 2)$, $(1, 3)$ and $(2, 3)$.

Conceptually, in this approach we work through a reducing set of *combinations*
of two numbers and do some calculation with each pair. If we align this approach as a solution
to our challenge statement, we can write a brute force algorithm with the following ingredients: -

- An **outer loop** which goes through each element up to the $(n-1)$'th
- An **inner loop** which goes through the elements starting from $i + 1$ (the element after the outer loop's current one)
- A **condition** to check the summation

Program Input: Say we have an array `[ -1, 5, -4, 8, 7, 1, 3, 11 ]` and a target sum `14`. Now let's transform the above steps into pseudocode.
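As a sketch, those steps can be written out in JavaScript like this (`twoNumberSum` is just a name I've picked for illustration):

```javascript
// Brute force: check every two-element combination.
function twoNumberSum(array, targetSum) {
  // Outer loop runs up to the second-to-last element.
  for (let i = 0; i < array.length - 1; i++) {
    // Inner loop starts one element ahead of the outer one (i + 1).
    for (let j = i + 1; j < array.length; j++) {
      // Condition: do the two numbers add up to the target sum?
      if (array[i] + array[j] === targetSum) {
        return [array[i], array[j]];
      }
    }
  }
  // No pair matched the target sum.
  return [];
}

twoNumberSum([-1, 5, -4, 8, 7, 1, 3, 11], 14); // [3, 11]
```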

If we were to execute this algorithm, what are the different combinations we go through? Let's write down the iterations and their respective combinations manually.

| Iteration | Checked Combinations |
|---|---|
| 1 | (-1,5), (-1,-4), (-1,8), (-1,7), (-1,1), (-1,3), (-1,11) |
| 2 | (5,-4), (5,8), (5,7), (5,1), (5,3), (5,11) |
| 3 | (-4,8), (-4,7), (-4,1), (-4,3), (-4,11) |
| 4 | (8,7), (8,1), (8,3), (8,11) |
| 5 | (7,1), (7,3), (7,11) |
| 6 | (1,3), (1,11) |
| 7 | (3,11) |

Did you see that? In the worst-case scenario we had to evaluate $28$ pairs, and in the $7$^{th}
iteration we found our matching number pair $(3, 11)$. However, it's not always the same: the count
changes based on the *indices*, and we evaluate fewer combinations if we break out of the loop
after a successful match.

Now let's do a quick analysis of our 1^{st} solution.

We know that the first approach is bad 😕 but can we improve the algorithm and make it a bit faster?
What happens if we first *sort* the array, huh 🤔?

# Second Approach

There's a second way of solving this problem. And it's *slightly* better than the first one.
Initially, in the challenge statement, I didn't mention whether the array is sorted or not. So,
what if we sort the array first in ascending order and then figure out a way to solve this?

Program Input: Say we are given a new array `[ -4, 13, 1, 3, 5, 6, -1, 11 ]` and a target sum of `10`. Let's use these inputs for our 2^{nd} approach.

#### 1^{st} Operation

First, we have to sort the array in ascending order^{*}. The algorithm only works on a sorted array,
so this must be done before we can continue.
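One JavaScript-specific gotcha worth noting here: `Array.prototype.sort` compares elements as strings by default, so a numeric comparator is needed to get a correct ascending order.

```javascript
const array = [-4, 13, 1, 3, 5, 6, -1, 11];

// Without a comparator, sort() compares elements as strings,
// which orders numbers incorrectly (e.g. -1 before -4, 11 before 3).
array.sort((a, b) => a - b);

console.log(array); // [-4, -1, 1, 3, 5, 6, 11, 13]
```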

#### 2^{nd} Operation

Then we can allocate two pointers, one at the *left* end and one at the *right*, walk them
through the elements at most $n - 1$ times, and operate on the two numbers they point at.

This way we can solve the problem more optimally instead of using *two for loops*. With
a reasonable sorting algorithm like *mergesort* or *quicksort* we can sort the array
in $n \log(n)$ time. But remember, we still have to walk through the array up to $n - 1$ times,
which is equivalent to $O(n)$.

#### 3^{rd} Operation (doing the summation)

So far, we know the array must be sorted first, and we need two pointers to compare. The core logic of this approach is to drive the algorithm's state using three predicates. We need to check whether the sum $A + B$: -

- Is **equal** to the target sum?
- Is **less** than the target sum?
- Is **greater** than the target sum?

Let's try to write down the algorithm. Remember that, up to this point, we assume the array is already sorted and the two pointers are allocated. Now it is time to evaluate the above conditions against each pair in every iteration.

Our loop starts from $x = 0$ and $y = 7$. At this point our $x$'s element is $-4$ and
$y$'s element is $13$ (see figure 4).
If we add up those two numbers together, we get a total of $9$ which is
*less* than our target sum $10$. In this case **we move the $x$ pointer to the right side**.
Basically, incrementing $x$'s index by $1$. That way we can guarantee that in next
iteration we would always get a sum $\gt 9$.

Alright, in the last iteration we moved $x$ by $1$ and now we are at $x = 1$ and
$y = 7$ (see figure 5).
Once again, if we sum up $-1$ and $13$ we get a total of $12$. Now, this is *larger*
than our expected target sum. In this case **we move the $y$ pointer to the left side**,
meaning we decrement the $y$ pointer by $1$.

Got the point? We do this iteratively until we match the target sum or until $x$ and $y$ meet at the same index.

Well, would you look at that? We reduced the number of iterations we have to go through! We have made significant progress toward a faster algorithm. Now that we have found our number pair, we can finally return the result and halt the algorithm. Let's write the JavaScript code for this algorithm now.
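Here's a sketch of the two-pointer algorithm in JavaScript; the function name `twoNumberSum` and the copy-before-sort are my own choices:

```javascript
// Two-pointer approach: sort first, then close in from both ends.
function twoNumberSum(array, targetSum) {
  // Copy so the caller's array is untouched; sorting dominates at O(n log n).
  const sorted = [...array].sort((a, b) => a - b);

  let x = 0;                 // left pointer
  let y = sorted.length - 1; // right pointer

  while (x < y) {
    const sum = sorted[x] + sorted[y];
    if (sum === targetSum) {
      return [sorted[x], sorted[y]];
    } else if (sum < targetSum) {
      x++; // sum too small: move the left pointer right
    } else {
      y--; // sum too large: move the right pointer left
    }
  }
  // Pointers met without a match.
  return [];
}

twoNumberSum([-4, 13, 1, 3, 5, 6, -1, 11], 10); // [-1, 11]
```

Tracing it against our inputs reproduces the walkthrough above: $-4 + 13 = 9$ moves $x$ right, $-1 + 13 = 12$ moves $y$ left, and $-1 + 11 = 10$ is the match.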

While this approach is slightly better than the first, we are back to square one. Why? Well, it's the same reason as before: it does not scale well enough for larger arrays. Let me show you the problem.

The algorithm we wrote runs in linearithmic time which tells us its complexity grows proportionally to the array input size with a logarithmic factor. What can we do about it, huh? Can we solve it in linear time?

# Dynamic Approach

Up until now, none of the approaches we have taken is very optimal from a time standpoint. Fortunately, there's one other way of solving this problem in a much cooler way. You might have thought about it already during the previous approach. But first, let's list down the things we already know: -

- We know what our target sum is (let it be $z$).
- We already know one of our addends^{*} (let it be $x$).

So, basically we have two variables at hand before even doing any operations, and we can write
an equation like $\large{x + y = z}$ to represent them (where $y$ is unknown). From there we can
*isolate the unknown* variable: $\normalsize{(x + y) = z}$ $\normalsize{\iff}$
$\normalsize{y = (z - x)}$. For example, if $z = 10$ and $x = 3$, then $y = 10 - 3 = 7$.

Now we can find the unknown variable $y$ without any combinations or two pointers. The only caveat is that we need a way to memorize this calculated value as we go through the array.

## Solution

Using some extra space is okay as long as its complexity grows at a reasonable rate. Now, what do you think?
For our solution, should we use a *hashmap*? What about a *set*?

You'd see that a lot of examples of the two summation problem's dynamic approach on the internet use a hashmap as the auxiliary space. But we really do not need key-value pairs for our solution. Instead, we can simply use a set of numbers to track the inversion results.

### Formal Proof

### Elaboration

We need a loop that goes through each element of the array, starting from index $0$ up to $n - 1$.
We calculate the inverse set within the loop^{*}: we create an empty set $R = \emptyset$, and for each element we
calculate $y = z - A_x$^{*}. Then we place a predicate that checks $A_x$'s
existence in the inverse set $R$, $P(x): (A_x \in R)$, and return $\{ A_x, y \}$ if $P(x)$ is true. Otherwise,
$\neg P(x)$, we union our inverse set with the calculated $y$ value, $R \gets R \cup \{ y \}$, and keep
on looping until $n - 1$.

Switching to New Inputs: For this approach let's use the array `[ -7, -5, -3, -1, 0, 1, 3, 5, 7 ]` and the target sum `-5`.

#### 1^{st} Iteration

As illustrated, in the first iteration we start off with an empty set named $R$. Our loop starts from index $0$, where $i$ is the index variable. In the first iteration the set doesn't have any elements, so we immediately add the calculated value $2 = -5 - (-7)$ to the set $R$ and move on to the next element.

#### 2^{nd} Iteration

In the second iteration, we first check whether the element at index $1$ is an element of $R$. We can see that $-5 \notin R$, so we add our inverse calculation $0 = -5 - (-5)$ to the set $R$ and continue...

#### 3^{rd} Iteration

In the third iteration, again we check whether the element at index $2$ is an element of $R$. We can see that $-3 \notin R$, so we do our inverse calculation $-2 = -5 - (-3)$, add it to the set $R$, and continue.

#### 4^{th} Iteration

Woah! Fourth iteration already? Again we check whether the element at index $3$ is an element of $R$. We can see that $-1 \notin R$, so we do our inverse calculation $-4 = -5 - (-1)$, add it to the set $R$, and continue.

#### 5^{th} Iteration

We are in the fifth iteration! And would you look at that, we just found $0$ in our set $R$. This means our inverse got a match! Now we can return the two elements as $\{ A_x, -5 - A_x \}$ where $A_x$ is $0$.

Woohoo! Now that we have an idea of how it works, let's write the pseudocode.
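A sketch of this set-based approach in JavaScript (again, `twoNumberSum` is an illustrative name I've chosen):

```javascript
// Linear-time approach: remember each element's complement (z - x) in a Set.
function twoNumberSum(array, targetSum) {
  const inverses = new Set(); // R = ∅

  for (const element of array) {
    // If some earlier element's complement equals this one, we have a pair.
    if (inverses.has(element)) {
      return [element, targetSum - element];
    }
    // Otherwise remember the value that would complete this element.
    inverses.add(targetSum - element);
  }
  // No element ever matched a stored complement.
  return [];
}

twoNumberSum([-7, -5, -3, -1, 0, 1, 3, 5, 7], -5); // [0, -5]
```

Running it on our inputs mirrors the trace above: the set grows as $\{2\}$, $\{2, 0\}$, $\{2, 0, -2\}$, $\{2, 0, -2, -4\}$, and the fifth element $0$ is found in the set.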

## Time & Space Complexity

In this approach we rely solely on a simple dynamic programming technique: memorizing previously computed values. With it we were able to solve the problem in $O(n)$ time and $O(n)$ space. This is the optimal way of solving this problem.

# Summary

Overall, I think that even though two summation is a very easy challenge, we can learn a lot from it, such as how simple algebraic equations can help us solve complex problems more elegantly.

Until next time. Thanks for reading!

# Well, now what?

Connect with me on LinkedIn for a chat.