🎛 Controls
Algorithm Mode
FFD: Sort items largest-first, then place each in the first bin with enough space. Guaranteed at most (11/9)·OPT + 6/9 bins.
Preset Examples
Custom Items (sizes 0–1, space-separated)
Bin Capacity
Playback
Speed
Live Statistics: Bins Used · Items Placed · Operations · Avg Waste · Approx. Ratio (Bins / OPT)
PSEUDOCODE FFD
💡 Tip: Switch to the Visualizer tab to see step explanations next to the animation. Click any row in the history table to jump to that step.
📊 Interactive Bin Packing Explorer

🛠 How to Use This Tool

Left panel controls the algorithm. Right panel shows you what's happening.

Step 1: Choose an algorithm from the mode buttons — start with FFD.
Step 2: Pick a preset example or type your own item sizes in the box.
Step 3: Click Build Steps — this generates the full animation.
Step 4: Click ▶ to advance one step at a time, or ▶▶ to play automatically.
Step 5: Watch the Step Explanation box below — it tells you in plain English exactly what just happened and why.
Step 6: Watch the Pseudocode panel — the active line highlights as each step runs.

The step-back button goes backwards; the reset button resets everything. The speed slider controls how fast the animation plays.

The right panel tabs are: Learn (here), Visualizer (animation), Compare (all 4 algorithms side by side), NP-Complete (the proof), Complexity (charts).

💡 Best approach: Read sections ①–④ here first. Then click the Visualizer tab, load the Balanced preset, click Build Steps and step through slowly. Come back here if anything is confusing.

① What is Bin Packing?

Imagine you're moving house. You have boxes of different sizes and you want to pack everything into the fewest moving trucks possible. Each truck has a fixed capacity. You want to waste as little truck space as possible. That's Bin Packing.

Formally: you have n items, each with a size between 0 and 1. You have bins, each with capacity 1. Your goal is to pack all items into the fewest bins possible.

Example: Items = [0.4, 0.6, 0.3, 0.7]. Capacity = 1.0. Can they fit in 2 bins?

Bin 1: 0.7 + 0.3 = 1.0 ✓ (perfectly full)
Bin 2: 0.6 + 0.4 = 1.0 ✓ (perfectly full)
Yes! 2 bins, no waste at all.

Why is this hard? With 10 items and 4 bins there are 4¹⁰ ≈ 1.05 million ways to assign items; with 20 items it's over a trillion. You can't try every combination — it takes exponential time. This is why Bin Packing is NP-complete (explained in the NP-Complete tab).

💡 Real life uses: Loading shipping containers (minimize containers), cutting fabric rolls (minimize waste), cloud server allocation (fit workloads into fewest servers), compiler memory allocation.

➜ Try it: Visualizer tab → Balanced preset → Build Steps → watch items pack into bins.

② The Four Algorithms

First-Fit Decreasing (FFD) — the recommended approach:
Sort items from largest to smallest. Then for each item, scan the bins from left to right and place it in the first bin that has enough space. If nothing fits, open a new bin.
Why sorting first? Large items are hardest to place. Placing them first lets them claim space cleanly. Small items then fill the leftover gaps perfectly.
Guarantee: uses at most (11/9)·OPT + 6/9 bins — never more than ~22% above optimal.
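The two FFD steps (sort, then first-fit) can be sketched in a few lines of Python. This is an illustrative sketch, not the tool's actual code; the function name and the small float tolerance are my own choices:

```python
def ffd(sizes, capacity=1.0):
    """First-Fit Decreasing: sort largest-first, then first-fit each item."""
    bins = []  # remaining free space per open bin
    for size in sorted(sizes, reverse=True):
        for i, free in enumerate(bins):
            if size <= free + 1e-9:   # tolerance for float arithmetic
                bins[i] = free - size
                break
        else:                         # no open bin fits: open a new one
            bins.append(capacity - size)
    return len(bins)

print(ffd([0.4, 0.6, 0.3, 0.7]))  # 2 bins: {0.7, 0.3} and {0.6, 0.4}
```

The `for … else` idiom handles "if nothing fits, open a new bin" directly: the `else` branch runs only when the scan finished without a `break`.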

First-Fit (FF) — simpler but worse:
Same as FFD but without sorting. Items are placed in their original order. Because large items might arrive late, early small items scatter across bins leaving gaps too small for large items — forcing extra bins to open.

Best-Fit (BF) — tightest packing:
For each item, scan all bins and place it in the bin with the least remaining space that still fits. This leaves larger gaps available for larger items later. Often performs similarly to FFD but without sorting.
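Best-Fit differs from First-Fit only in which bin it picks. A minimal sketch under the same assumptions as above (hypothetical name, float tolerance mine):

```python
def best_fit(sizes, capacity=1.0):
    """Best-Fit: place each item in the tightest bin that still fits."""
    bins = []  # remaining free space per open bin
    for size in sizes:
        fitting = [i for i, free in enumerate(bins) if size <= free + 1e-9]
        if fitting:
            i = min(fitting, key=lambda j: bins[j])  # least leftover space
            bins[i] -= size
        else:
            bins.append(capacity - size)  # nothing fits: open a new bin
    return len(bins)

print(best_fit([0.4, 0.6, 0.3, 0.7]))  # 2
```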

Brute Force — exact but slow:
Try every possible way to assign items to bins. Return the assignment that uses the fewest bins. Always finds the true optimal — but exponential time O(kⁿ). Only practical for about 8 items maximum.
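An exact search can be sketched with itertools: try k = 1, 2, … bins and return the first k with a feasible assignment, which is guaranteed to be the minimum. A sketch only — the tool's own brute force may prune differently:

```python
from itertools import product

def brute_force_opt(sizes, capacity=1.0):
    """Exact OPT: try every assignment of n items to k bins, k = 1..n."""
    n = len(sizes)
    for k in range(1, n + 1):
        for assignment in product(range(k), repeat=n):  # k**n assignments
            loads = [0.0] * k
            for size, b in zip(sizes, assignment):
                loads[b] += size
            if all(load <= capacity + 1e-9 for load in loads):
                return k  # first feasible k is the minimum
    return n

print(brute_force_opt([0.4, 0.6, 0.3, 0.7]))  # 2
```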

💡 Key insight: Since Bin Packing is NP-complete, we can't solve it exactly in polynomial time. FFD is the practical solution — fast (O(n log n)) and provably close to optimal.

➜ Try it: Switch between modes in the left panel and run the same preset. Watch how the bin count changes.

③ Worked Example — FFD Step by Step

Items: 0.7, 0.5, 0.4, 0.3, 0.2, 0.1. Capacity = 1.0. Already sorted largest-first.

Step 1: Place 0.7 → Bin 1 [used: 0.7, free: 0.3]
Step 2: Place 0.5 → Bin 1 free=0.3 < 0.5 ✗ → open Bin 2 [used: 0.5, free: 0.5]
Step 3: Place 0.4 → Bin 1 free=0.3 < 0.4 ✗ → Bin 2 free=0.5 ≥ 0.4 ✓ → Bin 2 [used: 0.9]
Step 4: Place 0.3 → Bin 1 free=0.3 ≥ 0.3 ✓ → Bin 1 [used: 1.0 FULL ✓]
Step 5: Place 0.2 → Bin 2 free=0.1 < 0.2 ✗ → open Bin 3 [used: 0.2]
Step 6: Place 0.1 → Bin 2 free=0.1 ≥ 0.1 ✓ → Bin 2 [used: 1.0 FULL ✓]
Result: 3 bins. OPT = 3. FFD found the optimal solution! ✓

Notice how step 4 placed 0.3 back into Bin 1 filling it perfectly. This only worked because 0.7 was placed first — if 0.3 had gone in first, the gap would've been wrong.
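You can reproduce this exact trace with a variant of FFD that records bin contents. A sketch with a hypothetical name, not the tool's API:

```python
def ffd_bins(sizes, capacity=1.0):
    """FFD returning the actual contents of each bin, for tracing."""
    bins = []  # each bin is a list of item sizes
    for size in sorted(sizes, reverse=True):
        for b in bins:
            if sum(b) + size <= capacity + 1e-9:  # first bin with room
                b.append(size)
                break
        else:
            bins.append([size])  # open a new bin
    return bins

print(ffd_bins([0.7, 0.5, 0.4, 0.3, 0.2, 0.1]))
# [[0.7, 0.3], [0.5, 0.4, 0.1], [0.2]]
```

The output matches the six steps above bin for bin: 0.3 lands back in Bin 1, 0.1 tops off Bin 2, and 0.2 sits alone in Bin 3.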

🎯 Verify yourself: Visualizer tab → FFD mode → Build Steps → step through slowly. Each step above matches exactly one click.

④ Why Sorting First Makes FFD Better Than First-Fit

Consider items: 0.51, 0.51, 0.51, 0.49, 0.49, 0.49. Capacity = 1.0. OPT = 3 bins (pair each 0.51 with a 0.49).

First-Fit (original order):

0.51 → Bin 1. 0.51 → Bin 1 free=0.49? No (0.49<0.51) → Bin 2. 0.51 → Bin 3.
0.49 → Bin 1 (0.51+0.49=1.0 ✓). 0.49 → Bin 2. 0.49 → Bin 3.
Result: 3 bins ✓ (happens to be optimal here)

Now try: 0.51, 0.49, 0.51, 0.49, 0.51, 0.49 (alternating order):

0.51 → Bin 1. 0.49 → Bin 1 (used: 1.0 FULL). 0.51 → Bin 2.
0.49 → Bin 2 (used: 1.0 FULL). 0.51 → Bin 3. 0.49 → Bin 3 (FULL).
Result: 3 bins ✓ — also optimal, but only by luck of alternating order.

The problem is First-Fit's result depends on the input order. In the worst case it uses (17/10)·OPT bins. FFD's sort step removes this dependence — no matter what order items arrive in, FFD always processes them largest-first, giving a consistent (11/9)·OPT guarantee.
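The order dependence is easy to demonstrate with 0.55/0.45 items. The actual Awkward preset's order is not shown here, so the small-items-first order below is a hypothetical worst-ish case:

```python
def first_fit(sizes, capacity=1.0):
    """First-Fit with no sort: the result depends on input order."""
    bins = []  # remaining free space per open bin
    for size in sizes:
        for i, free in enumerate(bins):
            if size <= free + 1e-9:
                bins[i] = free - size
                break
        else:
            bins.append(capacity - size)
    return len(bins)

items = [0.45, 0.45, 0.45, 0.55, 0.55, 0.55]   # small items arrive first
print(first_fit(items))                         # 4 bins
print(first_fit(sorted(items, reverse=True)))   # 3 bins -- this is FFD
```

The two early 0.45s pair up into a 0.90 bin, leaving a 0.10 gap no 0.55 can use; sorting first pairs each 0.55 with a 0.45 and hits OPT = 3.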

💡 The core insight: Large items are constraints. Small items are flexible. Place constraints first, then fill gaps with flexible pieces. This is the greedy insight behind FFD.

➜ Try it: Compare tab → load Awkward preset → run all 4. FFD uses 4 bins, First-Fit and Best-Fit use 6. The 0.55s and 0.45s pair perfectly when sorted — but in original order they each go into separate bins leaving 0.45 gaps that 0.55s can't fit into.

⑤ NP-Completeness in Plain English

P = problems solvable fast (polynomial time — like O(n), O(n²)). Sorting, shortest path, searching — all in P.

NP = problems where solutions are checkable fast even if you can't find them fast. Bin Packing is in NP — if someone gives you a packing, you can check in O(n) whether it's valid. But finding the best packing takes exponential time.

NP-complete = the hardest problems in NP. They're all secretly connected — if you crack one, you crack all of them. Bin Packing, Sudoku, Travelling Salesman, Tetris (optimal) — all NP-complete.

What this means for us: Since Bin Packing is NP-complete, we use approximation algorithms — fast algorithms that are provably close to optimal. FFD is the classic example: runs in O(n log n) and guarantees at most (11/9)·OPT bins.

💡 NP does NOT mean "impossible." It means no polynomial-time exact algorithm is known. We can still solve small instances exactly (brute force) and large instances approximately (FFD).

➜ Go deeper: Switch to the NP-Complete tab for the full proof that Bin Packing is NP-complete.

⑥ The Approximation Ratio — (11/9)·OPT

The approximation ratio tells you how close a heuristic's answer is to the true optimal.

FFD's guarantee: FFD uses at most (11/9)·OPT + 6/9 bins.

What this means in practice: if the optimal solution uses 9 bins, FFD uses at most 11. That's never more than ~22% above optimal. For large OPT the +6/9 term becomes negligible.
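Since bin counts are integers, the guarantee turns into a concrete ceiling: floor((11·OPT + 6) / 9). A quick check in exact integer arithmetic (the function name is mine):

```python
def ffd_worst_case(opt):
    """Largest bin count FFD can produce: floor((11*OPT + 6) / 9)."""
    return (11 * opt + 6) // 9  # integer division keeps this exact

for opt in (3, 9, 90):
    print(opt, "->", ffd_worst_case(opt))  # 3 -> 4, 9 -> 11, 90 -> 110
```

For OPT = 9 the ceiling is 11 bins, as claimed; for OPT = 90 it is 110, where the +6/9 has essentially vanished.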

This bound is tight — there exist inputs where FFD needs exactly (11/9)·OPT + 6/9 bins. The 11/9 ratio goes back to Johnson (1973); Dósa (2007) proved the additive constant 6/9 tight.

The approximation ratio is displayed live in the stats panel as Bins Used / OPT. Watch it as you run different presets.

💡 Why this matters: The ratio is a mathematical guarantee — not just "usually good" but provably never worse than 22% above optimal, no matter what input you give it.

⑦ Common Mistakes — Don't Fall for These

❌ "FFD always gives the optimal solution" Wrong. FFD is an approximation — it guarantees at most (11/9)·OPT bins, but can exceed optimal. It just never exceeds it by more than ~22%.
❌ "First-Fit and FFD always give the same result" Wrong. FFD almost always uses fewer bins because sorting first lets large items claim space before small items create awkward gaps.
❌ "NP-complete means impossible to solve" Wrong. NP-complete means no polynomial-time exact algorithm is known. We can solve small instances with brute force and large instances approximately with FFD.
❌ "Bin Packing is hard because bins are small" Wrong. The hardness comes from the combinatorial explosion of possible assignments — kⁿ combinations for n items and k bins — not the bin size.
❌ "Best-Fit is always better than First-Fit" Not always. Best-Fit packs tighter locally but can sometimes create gaps that are hard to fill later. On sorted input (FFD-style) it performs similarly to FFD.

⑧ Glossary — Every Term Used in This Tool

Bin capacity: Maximum total size of items that fit in one bin. Normalized to 1.0 in this tool.
Item size: A number between 0 and 1 representing how much of a bin's capacity the item takes up.
OPT: The optimal (minimum possible) number of bins for a given set of items. The best any algorithm could do.
First-Fit (FF): Greedy: place each item in the first bin with enough space. No sorting. Simple but less efficient.
FFD: First-Fit Decreasing. Sort items largest-first, then apply First-Fit. Uses at most (11/9)·OPT + 6/9 bins.
Best-Fit: Place each item in the bin with the least remaining space that still fits. Tighter local packing.
Brute Force: Try every possible assignment. Finds exact OPT but takes O(kⁿ) exponential time. Only practical for n ≤ 8.
Approximation ratio: How close a heuristic is to optimal. FFD uses at most (11/9)·OPT + 6/9 bins — a provably tight bound.
NP-complete: In NP AND at least as hard as every problem in NP. No polynomial-time exact algorithm known.
Partition problem: Can a set of integers be split into two equal-sum subsets? Used as the source problem in the NP-completeness proof.
Polynomial reduction: Transform one problem into another in polynomial time to prove NP-hardness.
Waste: Unused capacity across all bins. Lower waste = better packing. Displayed as average % per bin.
Greedy algorithm: Makes the locally best decision at each step without looking ahead. FFD and First-Fit are both greedy.
Certificate: A proposed solution — in Bin Packing, an assignment of items to bins. Used in the NP membership proof.
🎓 Step Explanation
Click Build Steps, then ▶, to see a plain-English explanation of each step — with actual item sizes and bin numbers.
Items to Pack
Load a preset or build steps to see items.
Bins
No bins yet — click Build Steps then ▶
Waste heatmap legend: low waste → medium → high waste (colored empty space at the top of each bin).
Step History (click any row to jump)
# Action Item Bin Remaining
Click ⚡ Run All 4 Algorithms & Compare in the left panel to see results here.

🧠 What Does NP-Complete Mean?

P = problems solvable fast (polynomial time). Sorting, shortest path — all in P.

NP = problems where a proposed solution can be checked fast, even if it can't be found fast. Bin Packing is in NP — if someone hands you a packing, you can verify it in O(n).

NP-complete = the hardest problems in NP. They're all secretly connected — if you solve one efficiently, you solve ALL of NP. Bin Packing, Sudoku, Tetris, Travelling Salesman — all NP-complete.

To prove a problem X is NP-complete you need exactly two things:

Show X is in NP — describe a certificate and show it can be verified in polynomial time.
Show X is NP-hard — take a known NP-complete problem and reduce it to X in polynomial time.

💡 NP does NOT mean impossible. It means no polynomial-time exact algorithm is known. We can still approximate with FFD — and that's exactly why approximation algorithms matter.
Step 1

Bin Packing ∈ NP — It Can Be Verified Fast

The certificate (proposed solution): an assignment of every item to a bin. For example: Item 1 → Bin 2, Item 2 → Bin 1, Item 3 → Bin 1, ...

The verifier checks four things, all in polynomial time:

① Every item assigned: scan the certificate, confirm every item has exactly one bin label. Cost: O(n).
② No bin exceeds capacity: for each bin, sum the sizes of all assigned items, verify total ≤ 1. Cost: O(n).
③ All sizes valid: confirm every item size sᵢ ∈ (0,1]. Cost: O(n).
④ Bins used ≤ k: count distinct bin labels, verify count ≤ k. Cost: O(n).

Total verification cost: O(n) — polynomial. ✓

The verifier never tries to find a good packing — it only checks that the given one is valid. Finding is hard. Checking is easy. That's NP.
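The four checks translate directly into an O(n) verifier. This is a sketch with hypothetical names, not the tool's internals:

```python
def verify_packing(sizes, assignment, k, capacity=1.0):
    """Check a certificate (assignment[i] = bin label of item i) in O(n)."""
    if len(assignment) != len(sizes):              # (1) every item assigned
        return False
    if not all(0 < s <= capacity for s in sizes):  # (3) all sizes valid
        return False
    loads = {}
    for s, b in zip(sizes, assignment):            # sum sizes per bin label
        loads[b] = loads.get(b, 0.0) + s
    if any(load > capacity + 1e-9 for load in loads.values()):
        return False                               # (2) no bin over capacity
    return len(loads) <= k                         # (4) bins used <= k

print(verify_packing([0.4, 0.6, 0.3, 0.7], [2, 2, 1, 1], k=2))  # True
```

Note the verifier never searches: one pass over the certificate decides YES or NO.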

Step 2

NP-Hard — Reduce from Partition

The Partition problem (known NP-complete): Given a set S = {a₁, a₂, …, aₙ} of positive integers with total sum 2T, can S be split into two subsets each summing to exactly T?

In plain English: You have a list of numbers. Their total is even. Can you split them into two groups with equal sums?
Example: S = {3, 3, 2, 2}, total = 10, T = 5. Split into {3,2} and {3,2} — both sum to 5. YES.
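For small instances you can answer Partition directly by enumerating subsets — exponential, but fine for a demo (the helper name is mine):

```python
from itertools import combinations

def has_partition(nums):
    """Can nums be split into two subsets of equal sum? (brute force)"""
    total = sum(nums)
    if total % 2:          # odd total: impossible
        return False
    half = total // 2
    return any(sum(combo) == half
               for r in range(len(nums) + 1)
               for combo in combinations(nums, r))

print(has_partition([3, 3, 2, 2]))  # True: {3, 2} vs {3, 2}
print(has_partition([3, 3, 1]))     # False: total 7 is odd
```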

The construction — how we disguise Partition as Bin Packing:

In plain English: Shrink all numbers so they fit between 0 and 1 (divide by T). Then ask if they fit in 2 bins of size 1. That's it.

Formally:
① Compute T = (Σ aᵢ) / 2 — half the total sum.
② Normalize: create item sizes sᵢ = aᵢ / T for each aᵢ ∈ S. Now every sᵢ ∈ (0, 1].
③ Set k = 2 bins of capacity 1.
④ Ask: can all n normalized items fit in 2 bins?

This construction runs in O(n) — just one division per element. Polynomial. ✓

Input: S = {3, 3, 2, 2}   sum = 10   T = 10/2 = 5
Normalize (÷T): 3/5=0.6   3/5=0.6   2/5=0.4   2/5=0.4
Bin Packing: items={0.6, 0.6, 0.4, 0.4}, k=2, capacity=1
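The whole reduction really is a handful of lines — a sketch, with the function name mine:

```python
def partition_to_bin_packing(S):
    """Map a Partition instance S to a Bin Packing instance in O(n)."""
    total = sum(S)
    if total % 2:                 # odd total: Partition is trivially NO
        return None
    T = total // 2
    items = [a / T for a in S]    # normalize: sizes land in (0, 1] when a <= T
    return items, 2, 1.0          # (items, k bins, capacity)

print(partition_to_bin_packing([3, 3, 2, 2]))
# ([0.6, 0.6, 0.4, 0.4], 2, 1.0)
```

If some element exceeds T, its normalized size exceeds 1 and the item fits in no bin — consistent, since such an instance is a NO for Partition too.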
Step 3

Correctness — Forward Direction (Partition YES → Bin Packing YES)

In plain English: If the numbers can be split into two equal groups, then the shrunk versions fit perfectly into 2 bins — because each group adds up to exactly 1.0 after dividing by T.

Formally:
Assume Partition = YES. So ∃ subset A ⊆ S where Σ(aᵢ for i∈A) = T and B = S\A where Σ(aᵢ for i∈B) = T.

After normalizing:
Σ(sᵢ for i∈A) = Σ(aᵢ/T for i∈A) = (1/T) · Σ(aᵢ for i∈A) = T/T = 1
Σ(sᵢ for i∈B) = T/T = 1

Place all items from A into Bin 1 → total = 1.0 ≤ capacity 1. ✓
Place all items from B into Bin 2 → total = 1.0 ≤ capacity 1. ✓
All n items packed in k = 2 bins. Bin Packing = YES. ✓

A={3,2}=5=T → after ÷T: {0.6,0.4} → Bin1: 0.6+0.4=1.0 ✓
B={3,2}=5=T → after ÷T: {0.6,0.4} → Bin2: 0.6+0.4=1.0 ✓
Step 4

Correctness — Backward Direction (Bin Packing YES → Partition YES)

In plain English: If the items fit in 2 bins, both bins must be exactly full: the normalized items total exactly 2.0, and two bins each hold at most 1.0, so each must hold exactly 1.0. Multiply back by T to recover the partition.

Formally:
Assume Bin Packing = YES. Let A = items in Bin 1, B = items in Bin 2.

By capacity constraint:
Σ(sᵢ for i∈A) ≤ 1   and   Σ(sᵢ for i∈B) ≤ 1

Total of all normalized items:
Σ all sᵢ = Σ(aᵢ/T) = (1/T) · 2T = 2

Since both sums are ≤ 1 and must total exactly 2, each must equal exactly 1:
Σ(sᵢ for i∈A) = 1   and   Σ(sᵢ for i∈B) = 1

Denormalize (multiply by T):
Σ(aᵢ for i∈A) = 1 × T = T   ✓
Σ(aᵢ for i∈B) = 1 × T = T   ✓

Two subsets of S each summing to T. Partition = YES. ✓

Bin1=[0.6+0.4=1.0] × T=5 → Group1 sum = 5 = T ✓
Bin2=[0.6+0.4=1.0] × T=5 → Group2 sum = 5 = T ✓
→ Partition = YES ✓

Formal Conclusion: Partition is NP-complete (Karp, 1972). We constructed a polynomial-time reduction Partition ≤ₚ Bin Packing in O(n) time, with the same YES/NO answer. Since Partition is NP-complete and Bin Packing ∈ NP (Step 1), Bin Packing is NP-complete. ∎

Reduction Diagram — Concrete Example
Partition instance: S = {3, 3, 2, 2}  |  sum = 10  |  T = 5
↓ normalize: divide each by T=5
Bin Packing instance: items = {0.6, 0.6, 0.4, 0.4}  |  k=2 bins  |  capacity=1
↓ solve Bin Packing
Bin 1: [0.6 + 0.4 = 1.0 ✓]   Bin 2: [0.6 + 0.4 = 1.0 ✓] → YES — fits in 2 bins
↓ denormalize: multiply each by T=5
Subset A: {3, 2} = 5 = T ✓   Subset B: {3, 2} = 5 = T ✓ → Partition = YES ✓
Conclusion: Partition ≤ₚ Bin Packing in O(n). Since Partition is NP-complete (Karp, 1972) and Bin Packing ∈ NP, Bin Packing is NP-complete. ∎
Interactive Reduction — Try It Live

Partition Instance

S = {3, 3, 2, 2}
Sum = 10, T = 5
Question: can S split into two subsets each summing to 5?

Answer: YES → {3,2} and {3,2}

Equivalent Bin Packing

Items = {0.6, 0.6, 0.4, 0.4}
k = 2 bins, capacity = 1.0
Question: do all items fit in 2 bins?

Answer: YES → Bin1=[0.6+0.4] Bin2=[0.6+0.4]

Why This Proof Matters

It proves that Bin Packing is at least as hard as Partition. Since Partition has no known polynomial-time solution, neither does Bin Packing. This justifies using FFD as an approximation rather than searching for an exact algorithm.

The Polynomial Time Requirement

The reduction must run in polynomial time — otherwise it's cheating. Our construction does two things: compute T (O(n)) and divide each element by T (O(n)). Total: O(n). Well within polynomial. ✓

Time & Space Complexity — All Four Algorithms
Algorithm | Time | Space | Optimal? | Approximation | Why
FFD | O(n log n) | O(n) | ≈ OPT | (11/9)·OPT + 6/9 | Sort costs O(n log n). The First-Fit scan is O(n²) naive but O(n log n) with ordered bins; the sort dominates.
First-Fit | O(n²) | O(n) | No | (17/10)·OPT | Each item scans all open bins: O(n) per item × n items = O(n²). No sort needed.
Best-Fit | O(n²) | O(n) | No | ~(17/10)·OPT | Scans ALL bins per item (not just until the first fit): O(n) per item × n items = O(n²). Tighter locally, same asymptotic cost.
Brute Force | O(kⁿ) | O(n) | Always ✓ | 1.00× (exact) | k bin choices for each of n items = kⁿ assignments to try. For n=20, k=10: 10²⁰ operations. Exponential — infeasible.
📌 What does (11/9)·OPT mean? The · means multiply. OPT = the true minimum bins any algorithm could possibly use. So (11/9)·OPT ≈ 1.22 × OPT. If the perfect solution needs 9 bins, FFD uses at most 11. Never more than ~22% above optimal — guaranteed.
Best / Average / Worst Case — FFD
Case | Time | When it happens | Why
Best | O(n log n) | All items identical size, or all items fit in one bin | Sort still costs O(n log n); each item is placed in the first bin immediately, O(1) per placement, so the sort dominates.
Average | O(n log n) | Random item sizes, sparse bins | Sort O(n log n); the average bin scan is short because most items find a fit quickly. With a balanced BST for bin tracking: O(n log n) overall.
Worst | O(n²) | Each item scans all existing bins before fitting or opening a new one | Naive implementation: O(n) bin scan × n items = O(n²). An ordered data structure reduces this to O(n log n).
📌 Why does the main table say O(n log n) but worst case says O(n²)? The O(n log n) assumes an optimized implementation using a balanced binary search tree to find fitting bins in O(log n). The O(n²) worst case is for the naive implementation that scans all bins one by one — which is what this tool actually runs. Both are correct depending on the implementation.
Growth Curve — Operations vs Number of Items
Chart legend: FFD O(n log n) · First-Fit / Best-Fit O(n²) · Brute Force O(2ⁿ). Slider: Max n = 15.
💡 How to read this chart: The Y-axis shows the number of operations the algorithm performs — higher means slower. The X-axis is the number of items n. Drag the slider to increase n and watch brute force explode while FFD stays nearly flat. At n=20, brute force already performs over a million operations; at n=30 with k=10 bins (10³⁰ assignments) it would take longer than the age of the universe.
Complexity Derivations — How We Get These Numbers

FFD — Why O(n log n)

Step 1 — Sort: Sorting n items costs O(n log n). This is the dominant cost.
Step 2 — Placement: For each of the n items, find the first fitting bin. With a naive scan: O(n) per item = O(n²) total. With a balanced BST tracking bin capacities: O(log n) per item = O(n log n) total.
Total: O(n log n) with optimized implementation.

Brute Force — Why O(kⁿ)

Each of the n items can go into any of k bins. The number of possible assignments is k × k × k × … (n times) = kⁿ.
For n=10, k=4: 4¹⁰ = 1,048,576 assignments.
For n=20, k=10: 10²⁰ = 100,000,000,000,000,000,000 assignments.
At 10⁹ operations/second that's 10¹¹ seconds — roughly 3,200 years. This is why NP-complete problems need approximations.
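That arithmetic is worth redoing once yourself:

```python
ops = 10**20              # n=20 items, k=10 bins: 10**20 assignments
rate = 10**9              # a fast machine: a billion checks per second
seconds = ops / rate      # 1e11 seconds
years = seconds / 3.15e7  # ~3.15e7 seconds in a year
print(f"{years:,.0f} years")  # 3,175 years
```

And at n=30 the count is 10³⁰, pushing the wait past 10¹³ years — far beyond the age of the universe.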

The (11/9)·OPT Guarantee

If OPT = optimal bins needed, FFD uses at most (11/9)·OPT + 6/9 bins. The 11/9 ratio was proven by Johnson (1973); Dósa (2007) showed the additive constant 6/9 is tight.
For large OPT the +6/9 becomes negligible: essentially FFD is within ~22% of optimal.
This bound is tight — specific inputs exist where FFD uses exactly 11/9 of OPT.

Space Complexity — All O(n)

All four algorithms only need to store:
• The n item sizes — O(n)
• Which bin each item is in — O(n)
• The remaining capacity per bin — O(k) ≤ O(n)
FFD also needs sorted order but that's still O(n). No algorithm needs more than linear space.