We want to write a function that takes a non-empty array of distinct integers and an integer representing a target sum. If any two numbers in the input array sum up to the target sum, the function should return them in an array, in any order. If no two numbers sum up to the target sum, the function should return an empty array.
My first approach would be to go through the array of integers in a brute force manner.
Suppose we have an array of numbers [ 1, 2, 3 ]. We need to figure out all the two-element combinations it can have. If we think about it, we would probably end up with something like this: -

(1, 2), (1, 3), (2, 3)
Conceptually, in this approach we work through a shrinking set of two-number combinations and do some calculation with each pair. If we map this idea onto our challenge statement, we can write a brute force algorithm like the following: -
- An outer loop which goes through each element up to the second-to-last one
- An inner loop which goes through every element after the current outer element
- A condition to check whether the summation of the two elements equals the target sum
Program Input — Say we have an array [ -1, 5, -4, 8, 6, 1, 3, 11 ] and a target sum of 14. Now let's transform the above steps into pseudocode.
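The steps above can be sketched as follows, using Python in place of pseudocode (the function name is my own choice):

```python
def two_number_sum_brute_force(array, target_sum):
    # Outer loop: fix the first number of the pair.
    for i in range(len(array) - 1):
        # Inner loop: only look at elements after index i,
        # so each pair is evaluated exactly once.
        for j in range(i + 1, len(array)):
            if array[i] + array[j] == target_sum:
                return [array[i], array[j]]
    # No two numbers add up to the target sum.
    return []
```

With our inputs, `two_number_sum_brute_force([-1, 5, -4, 8, 6, 1, 3, 11], 14)` returns `[8, 6]`.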
If we were to execute this algorithm, what are the different combinations we go through? Let's write down the iterations and their respective combinations manually.
| Iteration | Combinations |
| --- | --- |
| 1 | (-1, 5), (-1, -4), (-1, 8), (-1, 6), (-1, 1), (-1, 3), (-1, 11) |
| 2 | (5, -4), (5, 8), (5, 6), (5, 1), (5, 3), (5, 11) |
| 3 | (-4, 8), (-4, 6), (-4, 1), (-4, 3), (-4, 11) |
| 4 | (8, 6), (8, 1), (8, 3), (8, 11) |
| 5 | (6, 1), (6, 3), (6, 11) |
| 6 | (1, 3), (1, 11) |
| 7 | (3, 11) |
Did you see that? In the worst-case scenario we would have to evaluate all 28 pairs, and here in the 4th iteration we found our matching number pair (8, 6). However, it's not always the same. It changes based on the indices, and we evaluate fewer combinations if we break out of the loop after a successful match.
Now let's do a quick analysis of our 1st solution. The nested loops evaluate up to n(n − 1)/2 pairs, so the brute force approach runs in O(n²) time while using only O(1) extra space.
We know that the first approach is bad 😕 but can we improve the algorithm and make it a bit faster? What happens if we first sort the array, huh 🤔?
There's a second way of solving this problem. And it's slightly better than the first one. Initially, in the challenge statement, I didn't mention whether the array is sorted or not. So, what if we sort the array first in ascending order and then figure out a way to solve this?
Program Input — Say we are given a new array [ -4, 13, 1, 3, 5, 6, -1, 11 ] and a target sum of 10. Let's use these inputs for our 2nd approach.
First, we have to sort the array in ascending order. For the rest of the algorithm to work, this must be done first; only then can we continue.
Then we can allocate two pointers, one at the leftmost element and one at the rightmost, and walk them toward each other, at most n times, operating on the two numbers they point to.
This way we can solve the problem more optimally instead of using two for loops. With a reasonable sorting algorithm like mergesort or quicksort we can sort the array in O(n log n) time. But remember, we still have to walk the two pointers through the array, which takes up to n steps and is therefore O(n).
3rd Operation (doing the summation)
So far, we know the array must be sorted first, and we need two pointers to compare. The core logic of this approach is to drive the algorithm's state using three predicates. We need to check whether the sum of the two pointed-to elements: -
- is equal to the target sum,
- is less than the target sum, or
- is greater than the target sum.
Let's try to write down the algorithm. Remember that, up to this point, we assume we have already sorted the array and allocated the two pointers. Now it is time to evaluate the above conditions against the current pair in every iteration.
Our loop starts with the left pointer at index 0 and the right pointer at index n − 1. At this point left's element is -4 and right's element is 13 (see figure 4). If we add those two numbers together, we get a total of 9, which is less than our target sum 10. In this case we move the left pointer to the right side, basically incrementing left's index by 1. Because the array is sorted, that way we can guarantee that in the next iteration we always get a larger sum.
Alright, in the last iteration we moved left by 1 and now we are at -1 and 13 (see figure 5). Once again, if we sum up -1 and 13 we get a total of 12. Now, this is larger than our expected target sum. In this case we move the right pointer to the left side, that is, we decrement the right pointer by 1.
Got the point? We do this iteratively until we match the target sum, or until left and right meet at the same index. In our example, the very next iteration gives -1 + 11 = 10, which is exactly our target sum.
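A minimal Python sketch of this two-pointer approach (function name is my own):

```python
def two_number_sum_sorted(array, target_sum):
    # Sorting is a prerequisite for the two-pointer walk.
    array = sorted(array)
    left, right = 0, len(array) - 1
    while left < right:
        current_sum = array[left] + array[right]
        if current_sum == target_sum:
            return [array[left], array[right]]
        elif current_sum < target_sum:
            left += 1   # sum too small: move left pointer rightwards
        else:
            right -= 1  # sum too large: move right pointer leftwards
    return []
```

With our inputs, `two_number_sum_sorted([-4, 13, 1, 3, 5, 6, -1, 11], 10)` returns `[-1, 11]`.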
While this approach is slightly better than the first, we are somewhat back to square one. Why? Well, it's the same reason as before: it does not scale well enough for larger arrays. Let me show you the problem.
The algorithm we wrote runs in O(n log n), linearithmic time, which tells us its complexity grows proportionally to the input size with an extra logarithmic factor, dominated by the sort. What can we do about it, huh? Can we solve it in linear time?
Up until now, none of the approaches we have taken is optimal from a time standpoint. Fortunately, there's one other way of solving this problem in a much cooler way. You might have thought about it already while reading the previous approach. But first, let's list down the things we already know: -
- We know what our target sum is (let it be T).
- While traversing the array, we already know one of our addends, the current element (let it be x).
So, basically we have two variables at hand before even doing any operations. We could write an equation like x + y = T to represent it (where y is unknown), and then isolate the unknown variable: y = T − x. Say, for example, T = 10 and x = 3; then the missing addend is y = 10 − 3 = 7.
Now we can find the unknown variable without any combinations or two pointers. The only caveat is that we need a way to remember these calculated values as we go through the array.
Using some extra space is okay as long as its size grows at a reasonable rate. Now what do you think? For our solution, should we use a hashmap? What about a set?
You'll see that a lot of examples of the two number sum problem's dynamic approach on the internet use a hashmap as the auxiliary space. But we really do not need key-value pairs for our solution. Instead, we can simply use a set of numbers to track the inversion results.
We need a loop that goes through each element of the array starting from index 0. We build the inverse set within the loop: we create an empty set S, and for each element x we calculate the inverse y = T − x. Before adding it, we place a predicate that checks existence in the inverse set: if x ∈ S, we return the pair [T − x, x]. Otherwise, we union our inverse set with the calculated value, S = S ∪ {T − x}, and we keep on looping until the end of the array.
Switching to New Inputs — For this approach let's use the array [ -7, -5, -3, -1, 0, 1, 3, 5, 7 ] and a target sum of -7.
As illustrated, in the first iteration we start off with an empty set named S. Our loop starts from index 0, where i is the index variable. In the first iteration the set doesn't have any elements, so we immediately add the calculated value -7 − (-7) = 0 to the set and move on to the next element.
In the second iteration, first we check whether the element at index 1, which is -5, is an element of S. We can see that -5 ∉ S, so we add our inverse calculation -7 − (-5) = -2 to the set and continue...
In the third iteration, again we check whether the element at index 2, which is -3, is an element of S. We can see that -3 ∉ S, so we do our inverse calculation -7 − (-3) = -4, add it to the set and continue.
Woah! Fourth iteration already? Again we check whether the element at index 3, which is -1, is an element of S. We can see that -1 ∉ S, so we do our inverse calculation -7 − (-1) = -6, add it to the set and continue.
We are in the fifth iteration! And would you look at that! We just found the current element 0 in our set S. This means our inverse got a match! Now we can return the two elements as [-7, 0], where -7 is the earlier element whose inverse was 0.
Woohoo! Now that we have an idea of how it works, let's write the pseudocode.
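Here is a sketch of the set-based approach, again in Python rather than pseudocode (function name is my own):

```python
def two_number_sum(array, target_sum):
    # For every element x seen so far, the set stores its
    # "inverse": target_sum - x.
    inverses = set()
    for x in array:
        if x in inverses:
            # Some earlier element y satisfied target_sum - y == x,
            # which means y == target_sum - x.
            return [target_sum - x, x]
        inverses.add(target_sum - x)
    return []
```

With our inputs, `two_number_sum([-7, -5, -3, -1, 0, 1, 3, 5, 7], -7)` returns `[-7, 0]`, matching the walkthrough above.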
Time & Space Complexity
In this approach we solely rely on trading extra space for constant-time lookups, a memoization-like technique. And we were able to solve the problem in O(n) time with O(n) space complexity. This is the optimal way of solving this problem.
Overall, I think that even though two number sum is a very easy challenge, we can learn a lot from it, including how simple algebraic equations can help us solve problems more elegantly.
Until next time. Thanks for reading!
Well, now what?
You can navigate to more writings from here. Connect with me on LinkedIn for a chat.