Two Sum
Problem Description
Given an array of integers nums and an integer target, return indices of the two numbers such that they add up to target. You may assume that each input would have exactly one solution, and you may not use the same element twice.
Examples
Example 1:
Input: nums = [2,7,11,15], target = 9
Output: [0,1]
Explanation: Because nums[0] + nums[1] == 9, we return [0, 1].
Example 2:
Input: nums = [3,2,4], target = 6
Output: [1,2]
Example 3:
Input: nums = [3,3], target = 6
Output: [0,1]
Constraints
2 <= nums.length <= 10^4
-10^9 <= nums[i] <= 10^9
-10^9 <= target <= 10^9
Only one valid answer exists.
Follow-up
Can you come up with an algorithm that is less than O(n^2) time complexity?
Approach to Solve
Use a hash table that maps each number we have seen to its index as we iterate through the array. For each element, compute its complement (target minus the current number) and check whether that complement has already been stored; if it has, the two indices form the answer. This finds the solution in a single pass through the array.
Code Implementation
from typing import List

class Solution:
    def twoSum(self, nums: List[int], target: int) -> List[int]:
        num_map = {}  # Maps each value seen so far to its index
        for i, num in enumerate(nums):
            complement = target - num
            if complement in num_map:
                return [num_map[complement], i]
            num_map[num] = i
        return []  # No solution found (not expected given the constraints)
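For example, assuming the Solution class above is saved in the same file or module, the three examples from the problem statement can be checked like this:

if __name__ == "__main__":
    solver = Solution()
    print(solver.twoSum([2, 7, 11, 15], 9))  # [0, 1]
    print(solver.twoSum([3, 2, 4], 6))       # [1, 2]
    print(solver.twoSum([3, 3], 6))          # [0, 1]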
Explanation
The Two Sum problem is solved efficiently using a hash table approach. Here's a detailed explanation of how the algorithm works:
1. We start by initializing an empty hash table (num_map in the code).
2. We then iterate through the input array nums once, from left to right. For each number, we perform the following steps (traced concretely in the sketch after this list):
a. Calculate the complement by subtracting the current number from the target: complement = target - num
b. Check whether this complement exists in the hash table. If it does, we have found two numbers that add up to the target and return their indices.
c. If the complement is not in the hash table, add the current number and its index to the hash table.
3. If we complete the iteration without finding a solution, we return an empty array (or raise an exception, depending on the problem requirements).
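To make the single pass concrete, the standalone sketch below (an illustration, not part of the solution class) prints the state of the hash table at each step for nums = [2, 7, 11, 15] and target = 9:

nums, target = [2, 7, 11, 15], 9
num_map = {}
for i, num in enumerate(nums):
    complement = target - num
    if complement in num_map:
        print(f"i={i}: complement {complement} found at index {num_map[complement]} -> return {[num_map[complement], i]}")
        break
    num_map[num] = i
    print(f"i={i}: complement {complement} not seen yet, map is now {num_map}")

Running this shows that index 0 stores the value 2, and at index 1 the complement 2 is found, giving the answer [0, 1].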
This approach is efficient because:
- We only need to traverse the array once, giving us a time complexity of O(n).
- Hash table operations (insertion and lookup) have an average time complexity of O(1).
- We use extra space for the hash table, which is at most O(n) in the worst case, when nearly every element is stored before a match is found.
The trade-off here is using more space (for the hash table) to achieve better time complexity. Instead of checking every pair of numbers (which would be O(n^2)), we're using the hash table to do constant-time lookups, effectively reducing our time complexity to O(n).
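For reference, here is a minimal sketch of that O(n^2) brute-force baseline (the helper name two_sum_brute_force is just for illustration):

from typing import List

def two_sum_brute_force(nums: List[int], target: int) -> List[int]:
    # Check every pair (i, j) with i < j: O(n^2) comparisons in the worst case.
    for i in range(len(nums)):
        for j in range(i + 1, len(nums)):
            if nums[i] + nums[j] == target:
                return [i, j]
    return []  # No solution found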
It's worth noting that this method assumes that there is exactly one solution. If multiple solutions could exist, or if we needed to handle cases with no solution differently, we might need to modify the approach slightly.
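As an illustration, if the problem were relaxed to allow multiple valid pairs, a variant along these lines (a sketch for that relaxed version, not part of the original problem) could collect every pair of indices instead of returning the first match:

from typing import List

def two_sum_all_pairs(nums: List[int], target: int) -> List[List[int]]:
    # Map each value to all indices where it has occurred,
    # and report every (i, j) pair with i < j that sums to target.
    seen = {}    # value -> list of indices seen so far
    pairs = []
    for j, num in enumerate(nums):
        complement = target - num
        for i in seen.get(complement, []):
            pairs.append([i, j])
        seen.setdefault(num, []).append(j)
    return pairs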
The same logic carries over directly to other languages such as Java and C++, with only minor syntax differences based on each language's data structures and conventions.
Complexity
- Time Complexity: O(n)
- Space Complexity: O(n)