Nuha Aburamadan | Interactive Algorithm Visualizer
Left panel controls the algorithm. Right panel shows you what's happening.
Step 1: Choose an algorithm from the mode buttons — start with FFD.
Step 2: Pick a preset graph or type your own item sizes in the box.
Step 3: Click Build Steps — this generates the full animation.
Step 4: Click ▶ one step at a time, or ▶▶ to play automatically.
Step 5: Watch the Step Explanation box below — it tells you in plain English exactly what just happened and why.
Step 6: Watch the Pseudocode panel — the active line highlights as each step runs.
The ◀ button goes backwards. The ↺ button resets everything. The speed slider controls how fast the animation plays.
The right panel tabs are: Learn (here), Visualizer (animation), Compare (all 4 algorithms side by side), NP-Complete (the proof), Complexity (charts).
Imagine you're moving house. You have boxes of different sizes and you want to pack everything into the fewest moving trucks possible. Each truck has a fixed capacity. You want to waste as little truck space as possible. That's Bin Packing.
Formally: you have n items, each with a size between 0 and 1. You have bins, each with capacity 1. Your goal is to pack all items into the fewest bins possible.
Example: Items = [0.4, 0.6, 0.3, 0.7]. Capacity = 1.0. Can they fit in 2 bins?
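A small instance like this can be settled by exhaustive checking — a minimal sketch in Python (the name `fits_in_bins` is mine, not part of the visualizer):

```python
from itertools import product

def fits_in_bins(items, k, capacity=1.0, eps=1e-9):
    """Try every assignment of items to k bins; True if any is valid."""
    for assign in product(range(k), repeat=len(items)):
        loads = [0.0] * k
        for item, b in zip(items, assign):
            loads[b] += item
        if all(load <= capacity + eps for load in loads):
            return True
    return False

print(fits_in_bins([0.4, 0.6, 0.3, 0.7], 2))  # True: {0.4, 0.6} and {0.3, 0.7}
```

Note the loop body already hints at the trouble ahead: it enumerates kⁿ assignments.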
Why is this hard? With 10 items and 4 bins there are 4¹⁰ = over a million ways to assign items. With 20 items and 10 bins it's 10²⁰. You can't try every combination — the search space grows exponentially. This is why Bin Packing is NP-complete (explained in the NP-Complete tab).
➜ Try it: Visualizer tab → Balanced preset → Build Steps → watch items pack into bins.
First-Fit Decreasing (FFD) — the recommended approach:
Sort items from largest to smallest. Then for each item, scan the bins from left to right and place it in the first bin that has enough space. If nothing fits, open a new bin.
Why sorting first? Large items are hardest to place. Placing them first lets them claim space cleanly. Small items then fill the leftover gaps perfectly.
Guarantee: uses at most (11/9)·OPT + 6/9 bins — never more than ~22% above optimal.
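The recipe above translates almost directly into code — a sketch, not the visualizer's actual implementation (the epsilon guards exact fits under floating point):

```python
def first_fit_decreasing(items, capacity=1.0, eps=1e-9):
    """FFD: sort largest-first, then place each item in the first bin it fits."""
    bins = []     # remaining capacity of each open bin
    packing = []  # items placed in each bin
    for item in sorted(items, reverse=True):   # sort dominates: O(n log n)
        for i, rem in enumerate(bins):
            if item <= rem + eps:              # first bin with enough space
                bins[i] -= item
                packing[i].append(item)
                break
        else:                                  # nothing fits: open a new bin
            bins.append(capacity - item)
            packing.append([item])
    return packing

print(first_fit_decreasing([0.4, 0.6, 0.3, 0.7]))  # [[0.7, 0.3], [0.6, 0.4]]
```

Two bins — the optimum for this instance.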
First-Fit (FF) — simpler but worse:
Same as FFD but without sorting. Items are placed in their original order. Because large items might arrive late, early small items scatter across bins leaving gaps too small for large items — forcing extra bins to open.
Best-Fit (BF) — tightest packing:
For each item, scan all bins and place it in the bin with the least remaining space that still fits. This leaves larger gaps available for larger items later. Without sorting, its worst-case guarantee matches First-Fit's, though it often packs slightly tighter in practice.
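Best-Fit suits an ordered structure nicely: keep remaining capacities sorted and binary-search for the tightest bin that fits. A sketch (`best_fit` is my own name; exact fits with floats may need a small epsilon in practice):

```python
from bisect import bisect_left, insort

def best_fit(items, capacity=1.0):
    """Best-Fit using a sorted list of remaining capacities: O(n log n)."""
    rems = []  # remaining capacities, kept sorted ascending
    for item in items:
        i = bisect_left(rems, item)        # smallest remaining >= item
        if i < len(rems):
            insort(rems, rems.pop(i) - item)  # tightest fitting bin, updated
        else:
            insort(rems, capacity - item)     # no bin fits: open a new one
    return len(rems)

print(best_fit([0.7, 0.5, 0.4, 0.3, 0.2, 0.1]))  # 3 bins
```

The sorted-list trick is why Best-Fit can run in O(n log n) even though the naive scan is O(n²).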
Brute Force — exact but slow:
Try every possible way to assign items to bins. Return the assignment that uses the fewest bins. Always finds the true optimal — but exponential time O(kⁿ). Only practical for about 8 items maximum.
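A sketch of the brute-force search — try k = 1, 2, … bins and return the first feasible count, which is the optimum (names are mine):

```python
from itertools import product

def brute_force_bins(items, capacity=1.0, eps=1e-9):
    """Exhaustive search: the smallest feasible k is the exact optimum."""
    n = len(items)
    for k in range(1, n + 1):                       # n bins always suffice
        for assign in product(range(k), repeat=n):  # k**n assignments to test
            loads = [0.0] * k
            for size, b in zip(items, assign):
                loads[b] += size
            if all(load <= capacity + eps for load in loads):
                return k                            # first feasible k wins
    return n

print(brute_force_bins([0.4, 0.6, 0.3, 0.7]))  # 2
```

Fine for the 4-item example; hopeless beyond a handful of items, exactly as the O(kⁿ) bound predicts.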
➜ Try it: Switch between modes in the left panel and run the same preset. Watch how the bin count changes.
Items: 0.7, 0.5, 0.4, 0.3, 0.2, 0.1. Capacity = 1.0. Already sorted largest-first.
Notice how step 4 placed 0.3 back into Bin 1 filling it perfectly. This only worked because 0.7 was placed first — if 0.3 had gone in first, the gap would've been wrong.
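The walkthrough can be reproduced with a short trace function — a sketch using the naive first-fit scan (the clamping just hides floating-point noise in the printout):

```python
def ffd_trace(items, capacity=1.0, eps=1e-9):
    """Print an FFD placement trace, one line per step."""
    bins = []  # remaining capacity per open bin
    for step, item in enumerate(sorted(items, reverse=True), start=1):
        for i, rem in enumerate(bins):
            if item <= rem + eps:
                bins[i] = max(rem - item, 0.0)  # clamp float noise
                print(f"Step {step}: place {item} in Bin {i + 1}, remaining {bins[i]:.2f}")
                break
        else:
            bins.append(capacity - item)
            print(f"Step {step}: open Bin {len(bins)} for {item}, remaining {bins[-1]:.2f}")
    return bins

ffd_trace([0.7, 0.5, 0.4, 0.3, 0.2, 0.1])  # ends with 3 bins
```

Step 4 of the printout is the 0.3 → Bin 1 placement described above.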
Consider items: 0.51, 0.51, 0.51, 0.49, 0.49, 0.49. Capacity = 1.0. OPT = 3 bins (pair each 0.51 with a 0.49).
First-Fit (original order 0.51, 0.51, 0.51, 0.49, 0.49, 0.49): each 0.51 opens its own bin, then each 0.49 tops one off exactly — 3 bins, which happens to be optimal.
Now try: 0.49, 0.49, 0.49, 0.51, 0.51, 0.51 (smallest first): the first two 0.49s pair up in Bin 1 (total 0.98), leaving a 0.02 gap nothing can use. No two 0.51s can share a bin, so First-Fit ends with 4 bins.
The problem is First-Fit's result depends on the input order. In the worst case it uses (17/10)·OPT bins. FFD's sort step removes this dependence — no matter what order items arrive in, FFD always processes them largest-first, giving a consistent (11/9)·OPT guarantee.
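The order dependence is easy to demonstrate directly — a naive `first_fit` sketch (the epsilon guards exact fits like 0.51 + 0.49 under floating point):

```python
def first_fit(items, capacity=1.0, eps=1e-9):
    """Naive First-Fit: each item goes in the first open bin it fits."""
    bins = []  # remaining capacity per bin
    for item in items:
        for i, rem in enumerate(bins):
            if item <= rem + eps:
                bins[i] -= item
                break
        else:
            bins.append(capacity - item)
    return len(bins)

sizes = [0.49, 0.49, 0.49, 0.51, 0.51, 0.51]
print(first_fit([0.51, 0.51, 0.51, 0.49, 0.49, 0.49]))  # 3 bins
print(first_fit(sizes))                                 # 4 bins — same items!
print(first_fit(sorted(sizes, reverse=True)))           # FFD order: 3 bins
```

Same multiset, different arrival orders, different bin counts — sorting largest-first removes the dependence.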
➜ Try it: Compare tab → load Awkward preset → run all 4. FFD uses 4 bins, First-Fit and Best-Fit use 6. The 0.55s and 0.45s pair perfectly when sorted — but in original order they each go into separate bins leaving 0.45 gaps that 0.55s can't fit into.
P = problems solvable fast (polynomial time — like O(n), O(n²)). Sorting, shortest path, searching — all in P.
NP = problems where solutions are checkable fast even if you can't find them fast. Bin Packing is in NP — if someone gives you a packing, you can check in O(n) whether it's valid. But finding the best packing takes exponential time.
NP-complete = the hardest problems in NP. They're all secretly connected — if you crack one, you crack all of them. Bin Packing, Sudoku, Travelling Salesman, Tetris (optimal) — all NP-complete.
What this means for us: Since Bin Packing is NP-complete, we use approximation algorithms — fast algorithms that are provably close to optimal. FFD is the classic example: runs in O(n log n) and guarantees at most (11/9)·OPT bins.
➜ Go deeper: Switch to the NP-Complete tab for the full proof that Bin Packing is NP-complete.
The approximation ratio tells you how close a heuristic's answer is to the true optimal.
FFD's guarantee: FFD uses at most (11/9)·OPT + 6/9 bins.
What this means in practice: if the optimal solution uses 9 bins, FFD uses at most 11. That's never more than ~22% above optimal. For large OPT the +6/9 term becomes negligible.
This bound is tight — there exist specific inputs where FFD uses essentially 11/9 of optimal. The 11/9 ratio was first proven by Johnson (1973); the additive constant was later tightened to 6/9 (Dósa, 2007).
The approximation ratio is displayed live in the stats panel as Bins Used / OPT. Watch it as you run different presets.
Example trace — FFD on the walkthrough items (0.7, 0.5, 0.4, 0.3, 0.2, 0.1), capacity 1.0:

| # | Action | Item | Bin | Remaining |
|---|---|---|---|---|
| 1 | Open + place | 0.7 | Bin 1 | 0.3 |
| 2 | Open + place | 0.5 | Bin 2 | 0.5 |
| 3 | Place | 0.4 | Bin 2 | 0.1 |
| 4 | Place | 0.3 | Bin 1 | 0.0 |
| 5 | Open + place | 0.2 | Bin 3 | 0.8 |
| 6 | Place | 0.1 | Bin 2 | 0.0 |
P = problems solvable fast (polynomial time). Sorting, shortest path — all in P.
NP = problems where a proposed solution can be checked fast, even if it can't be found fast. Bin Packing is in NP — if someone hands you a packing, you can verify it in O(n).
NP-complete = the hardest problems in NP. They're all secretly connected — if you solve one efficiently, you solve ALL of NP. Bin Packing, Sudoku, Tetris, Travelling Salesman — all NP-complete.
To prove a problem X is NP-complete you need exactly two things:
① Show X is in NP — describe a certificate and show it can be verified in polynomial time.
② Show X is NP-hard — take a known NP-complete problem and reduce it to X in polynomial time.
The certificate (proposed solution): an assignment of every item to a bin. For example: Item 1 → Bin 2, Item 2 → Bin 1, Item 3 → Bin 1, ...
The verifier checks four things, all in polynomial time:
① Every item assigned: scan the certificate, confirm every item has exactly one bin label. Cost: O(n).
② No bin exceeds capacity: for each bin, sum the sizes of all assigned items, verify total ≤ 1. Cost: O(n).
③ All sizes valid: confirm every item size sᵢ ∈ (0,1]. Cost: O(n).
④ Bins used ≤ k: count distinct bin labels, verify count ≤ k. Cost: O(n).
Total verification cost: O(n) — polynomial. ✓
The verifier never tries to find a good packing — it only checks that the given one is valid. Finding is hard. Checking is easy. That's NP.
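The four checks can be sketched as a single verifier (function name and certificate format are illustrative — a list giving each item's bin label):

```python
def verify(sizes, certificate, k, eps=1e-9):
    """Polynomial-time check that a proposed packing is valid."""
    if len(certificate) != len(sizes):               # ① every item has one label
        return False
    if not all(0 < s <= 1 + eps for s in sizes):     # ③ all sizes in (0, 1]
        return False
    loads = {}
    for s, b in zip(sizes, certificate):
        loads[b] = loads.get(b, 0.0) + s
    if any(load > 1 + eps for load in loads.values()):  # ② no bin over capacity
        return False
    return len(loads) <= k                           # ④ bins used <= k

print(verify([0.6, 0.4, 0.6, 0.4], [1, 1, 2, 2], k=2))  # True
```

One linear pass per check — the verifier never searches for a packing, it only inspects the one it was handed.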
The Partition problem (known NP-complete): Given a set S = {a₁, a₂, …, aₙ} of positive integers with total sum 2T, can S be split into two subsets each summing to exactly T?
In plain English: You have a list of numbers. Their total is even. Can you split them into two groups with equal sums?
Example: S = {3, 3, 2, 2}, total = 10, T = 5. Split into {3,2} and {3,2} — both sum to 5. YES.
The construction — how we disguise Partition as Bin Packing:
In plain English: Shrink all numbers so they fit between 0 and 1 (divide by T). Then ask if they fit in 2 bins of size 1. That's it.
Formally:
① Compute T = (Σ aᵢ) / 2 — half the total sum.
② Normalize: create item sizes sᵢ = aᵢ / T for each aᵢ ∈ S. Now every sᵢ ∈ (0, 1] — if some aᵢ > T, Partition is trivially NO, so we may assume every aᵢ ≤ T.
③ Set k = 2 bins of capacity 1.
④ Ask: can all n normalized items fit in 2 bins?
This construction runs in O(n) — just one division per element. Polynomial. ✓
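Steps ①–④ amount to a few lines — a sketch (assumes sum(S) is even and every aᵢ ≤ T, as above; the function name is mine):

```python
def partition_to_bin_packing(S):
    """The reduction: Partition instance S -> Bin Packing instance (sizes, k)."""
    T = sum(S) // 2             # ① half the total sum
    sizes = [a / T for a in S]  # ② normalize: each size lands in (0, 1]
    k = 2                       # ③ two bins of capacity 1
    return sizes, k             # ④ ask: do all items fit in 2 bins?

print(partition_to_bin_packing([3, 3, 2, 2]))  # ([0.6, 0.6, 0.4, 0.4], 2)
```

One division per element — visibly O(n).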
In plain English: If the numbers can be split into two equal groups, then the shrunk versions fit perfectly into 2 bins — because each group adds up to exactly 1.0 after dividing by T.
Formally:
Assume Partition = YES. So ∃ subset A ⊆ S where Σ(aᵢ for i∈A) = T and B = S\A where Σ(aᵢ for i∈B) = T.
After normalizing:
Σ(sᵢ for i∈A) = Σ(aᵢ/T for i∈A) = (1/T) · Σ(aᵢ for i∈A) = T/T = 1
Σ(sᵢ for i∈B) = T/T = 1
Place all items from A into Bin 1 → total = 1.0 ≤ capacity 1. ✓
Place all items from B into Bin 2 → total = 1.0 ≤ capacity 1. ✓
All n items packed in k = 2 bins. Bin Packing = YES. ✓
In plain English: If the items fit in 2 bins, both bins must be exactly full (because the total of all items is exactly 2.0, and two bins each holding at most 1.0 can only both reach 1.0 if they're both exactly full). Multiply back by T to get the partition.
Formally:
Assume Bin Packing = YES. Let A = items in Bin 1, B = items in Bin 2.
By capacity constraint:
Σ(sᵢ for i∈A) ≤ 1 and Σ(sᵢ for i∈B) ≤ 1
Total of all normalized items:
Σ all sᵢ = Σ(aᵢ/T) = (1/T) · 2T = 2
Since both sums are ≤ 1 and must total exactly 2, each must equal exactly 1:
Σ(sᵢ for i∈A) = 1 and Σ(sᵢ for i∈B) = 1
Denormalize (multiply by T):
Σ(aᵢ for i∈A) = 1 × T = T ✓
Σ(aᵢ for i∈B) = 1 × T = T ✓
Two subsets of S each summing to T. Partition = YES. ✓
Formal Conclusion: Partition is NP-complete (Karp, 1972). We constructed a polynomial-time reduction Partition ≤ₚ Bin Packing in O(n) time, with the same YES/NO answer. Since Partition is NP-complete and Bin Packing ∈ NP (Step 1), Bin Packing is NP-complete. ∎
S = {3, 3, 2, 2}
Sum = 10, T = 5
Question: can S split into two subsets each summing to 5?
Answer: YES → {3,2} and {3,2}
Items = {0.6, 0.6, 0.4, 0.4}
k = 2 bins, capacity = 1.0
Question: do all items fit in 2 bins?
Answer: YES → Bin1=[0.6+0.4] Bin2=[0.6+0.4]
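The round trip — reduce, pack, denormalize — can be sketched end to end (brute-force packing, so only for tiny S; names and return format are mine):

```python
from itertools import product

def recover_partition(S):
    """Reduce S to a 2-bin instance, solve it exhaustively, and read the
    partition back off the bins. Returns None when no equal split exists."""
    T = sum(S) // 2
    sizes = [a / T for a in S]              # the reduction: divide by T
    for assign in product([0, 1], repeat=len(S)):
        loads = [0.0, 0.0]
        for s, b in zip(sizes, assign):
            loads[b] += s
        if max(loads) <= 1.0 + 1e-9:        # both bins within capacity
            return ([a for a, b in zip(S, assign) if b == 0],
                    [a for a, b in zip(S, assign) if b == 1])
    return None

print(recover_partition([3, 3, 2, 2]))  # ([3, 2], [3, 2]) — both sum to T = 5
```

A YES packing denormalizes straight into a YES partition, exactly as Step 3 argues.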
It proves that Bin Packing is at least as hard as Partition. Since Partition has no known polynomial-time solution, neither does Bin Packing. This justifies using FFD as an approximation rather than searching for an exact algorithm.
The reduction must run in polynomial time — otherwise it's cheating. Our construction does two things: compute T (O(n)) and divide each element by T (O(n)). Total: O(n). Well within polynomial. ✓
| Algorithm | Time Complexity | Space | Optimal? | Approximation | Why |
|---|---|---|---|---|---|
| FFD | O(n log n) | O(n) | ≈ OPT | (11/9)·OPT + 6/9 | Sort costs O(n log n). First-Fit scan is O(n²) naive but O(n log n) with ordered bins. Sort dominates. |
| First-Fit | O(n²) | O(n) | No | (17/10)·OPT | Each item scans all open bins — O(n) per item × n items = O(n²). No sort needed. |
| Best-Fit | O(n²) | O(n) | No | ~(17/10)·OPT | Scans ALL bins per item (not just first fit) — O(n) per item × n items = O(n²). Tighter locally but same asymptotic cost. |
| Brute Force | O(kⁿ) | O(n) | Always ✓ | 1.00× (exact) | k bin choices for each of n items = kⁿ assignments to try. For n=20, k=10: 10²⁰ operations. Exponential — infeasible. |
| Case | Time | When it happens | Why |
|---|---|---|---|
| Best Case | O(n log n) | All items identical size, or all items fit in one bin | Sort still costs O(n log n). Each item placed in first bin immediately — O(1) per placement. Total dominated by sort. |
| Average Case | O(n log n) | Random item sizes, sparse bins | Sort O(n log n). Average bin scan is short — most items find a fit quickly. With balanced BST for bin tracking: O(n log n) overall. |
| Worst Case | O(n²) | Each item scans all existing bins before fitting or opening new one | Naive implementation: O(n) bin scan × n items = O(n²). Optimized with ordered data structure reduces to O(n log n). |
Step 1 — Sort: Sorting n items costs O(n log n). This is the dominant cost.
Step 2 — Placement: For each of the n items, find the first fitting bin. With a naive scan: O(n) per item = O(n²) total. With a balanced BST tracking bin capacities: O(log n) per item = O(n log n) total.
Total: O(n log n) with optimized implementation.
Each of the n items can go into any of k bins. The number of possible assignments is k × k × k × … (n times) = kⁿ.
For n=10, k=4: 4¹⁰ = 1,048,576 assignments.
For n=20, k=10: 10²⁰ = 100,000,000,000,000,000,000 assignments.
At 10⁹ operations/second that's 10¹¹ seconds — over 3,000 years. This is why NP-complete problems need approximations.
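A quick sanity check of those numbers (assuming a 365-day year):

```python
n, k = 20, 10
assignments = k ** n             # 10**20 possible assignments
seconds = assignments / 10 ** 9  # at one billion checks per second
years = seconds / (365 * 24 * 3600)
print(f"{assignments:e} assignments -> {years:,.0f} years")  # ~3,171 years
```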
The 11/9 ratio was first proven by Johnson (1973); the additive constant was later tightened to 6/9 (Dósa, 2007). If OPT = optimal bins needed, FFD uses at most (11/9)·OPT + 6/9 bins.
For large OPT the +6/9 becomes negligible: essentially FFD is within ~22% of optimal.
This bound is tight — specific inputs exist where FFD uses exactly 11/9 of OPT.
All four algorithms only need to store:
• The n item sizes — O(n)
• Which bin each item is in — O(n)
• The remaining capacity per bin — O(k) ≤ O(n)
FFD also needs sorted order but that's still O(n). No algorithm needs more than linear space.