Think your First Pass Yield looks solid?
It might be wrong.
Why Most Yield Calculations Are Wrong
Yield is one of the most closely watched metrics in electronics manufacturing, and also one of the most misunderstood. Ask a room of engineers and managers how they calculate it and you'll get different answers, sometimes from people working on the same team.
The terminology overlaps, the methods vary, and the result is that a lot of companies are making decisions based on a yield figure that isn't telling them the whole story.
There are two yield metrics you must understand to get an accurate picture of your production. Each answers a different question, and confusing them leads directly to misleading conclusions.
This article will walk through both methods clearly, including the concept of “first seen in process” that catches a lot of experienced manufacturers off guard the first time they come across it.
If you'd prefer to see this explained visually before reading on, the video below covers the core concepts in full.

Unit vs Test Report Yield: What They Actually Tell You
There are two yield metrics you need to consider when analyzing test data: Unit Level Yield and Test Report Yield. They differ in both how they are calculated and what they tell you.
Test Report Yield: Station Performance
The simplest yield to calculate is the test report yield. This takes all passing test reports and divides them by all test reports. It doesn't take the unit's serial number into account, which makes it a straightforward measure of how a test station is performing at any given moment.
If you want to understand throughput, identify a struggling station, or compare performance across shifts, test report yield gives you a reliable signal.
- Passed reports / all reports
- Good for measuring a station’s performance
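As a minimal sketch, test report yield is just a ratio over pass/fail records (the dictionary fields here are illustrative, not a real system's schema):

```python
# Test report yield: passed reports / all reports.
# Serial numbers are ignored entirely; every report counts once.
reports = [
    {"station": "ICT-1", "result": "PASS"},
    {"station": "ICT-1", "result": "FAIL"},
    {"station": "ICT-1", "result": "PASS"},
    {"station": "ICT-1", "result": "PASS"},
]

passed = sum(1 for r in reports if r["result"] == "PASS")
test_report_yield = passed / len(reports)
print(f"Test report yield: {test_report_yield:.0%}")  # prints "Test report yield: 75%"
```

Note that a unit retested three times contributes three reports here, which is exactly why this number measures station activity rather than product performance.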
Unit Level Yield: Product Performance
Unit level yield takes the serial number into account, making it the most important yield metric when you want to understand what is actually happening to your products.
Rather than counting test reports, it tracks each unit across its entire journey through testing - not just whether a test passed, but how many attempts it took to get there.
- Based on the unit’s serial number
- Differentiates yield across test runs
- Reflects actual product performance, not just station activity
If you haven’t encountered unit level yield in your own systems before, the next section explains it in detail.
Understanding Unit Level Yield Metrics
Because unit level yield tracks by serial number, it can differentiate between the runs a unit goes through. This is what makes First Pass Yield (FPY), Second Pass Yield (SPY), Third Pass Yield (TPY) and Last Pass Yield (LPY) meaningful as distinct metrics.

First Pass Yield tells you how many units passed on their very first attempt. Second Pass Yield tells you how many passed within two attempts, and so on. Then there's Last Pass Yield, which tells you how many units eventually passed at any point in the testing process.
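In code, the rule for a single unit can be sketched with a hypothetical helper (assuming each unit's runs are ordered chronologically as pass/fail booleans):

```python
def passed_within(runs: list[bool], n: int) -> bool:
    """True if the unit passed on or before its n-th test run.

    runs: chronological pass/fail results for ONE serial number.
    """
    return any(runs[:n])

runs_sn001 = [False, False, False, True]  # fails three times, passes on run 4
assert passed_within(runs_sn001, 1) is False  # does not count toward FPY
assert passed_within(runs_sn001, 4) is True   # does count toward LPY
```

Each N-pass yield is then simply the share of units for which `passed_within(runs, n)` is true.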
The Most Common Mistake in Unit Yield Calculation
The most common mistake when calculating unit level yield is applying a time filter incorrectly.
Many teams simply include all tests within a selected time period. This feels intuitive but it produces misleading results.
To calculate unit level yield correctly, you must:
- Only include units first seen in the process within the time filter
- Include all subsequent runs for those units - even outside the time window
If you don’t:
- FPY might appear higher than reality
- Retests are underestimated
- Problems are detected too late
How to Calculate Unit Level Yield
Unit level yield is often calculated incorrectly because it’s counterintuitive. The best way to understand how these metrics build is to walk through a real example step by step.
Step-by-Step Example
In the picture below SN001 fails three times before passing on its fourth run. LPY is 100% because it eventually passed, but FPY, SPY and TPY are all 0% because no unit has yet passed within one, two or three attempts.

In the picture below a new unit is added, SN002, and it passes on its third run. TPY moves to 50% because one of two units passed within three attempts. FPY and SPY remain at 0%. LPY stays at 100%.

A third unit is introduced, SN003, which passes on its second run. One of three units has now passed within two attempts, so SPY becomes 33.3%. TPY moves to 66.7%. LPY remains 100%.

The next unit, SN004, passes on its very first run. FPY is now 25%, since one of four units passed the first time. We now have two units which passed within two attempts, so SPY moves to 50%, TPY to 75%, and LPY holds at 100%.
When calculating TPY, for example, you include all units which passed in either their first, second or third run. Because each metric is cumulative, FPY can never exceed SPY, and SPY can never exceed TPY. If your numbers violate that ordering, something is wrong with the calculation.

Unit number five, SN005, fails twice, then passes. All five units have eventually passed, so LPY remains 100%. FPY moves to 20% (1/5), SPY to 40% (2/5), TPY to 80% (4/5).

The last unit in this example, SN006, has been tested once and failed. It does not pass within this time period. That single unresolved unit immediately pulls LPY down to 83.3%. Five of six units have eventually passed, but the sixth hasn't yet.

That last step is worth pausing on. One unit, one failed run, and Last Pass Yield is no longer the clean 100% that might have made everything look fine.
Without unit level yield, SN006 would appear as nothing more than a failed test report among many so it’s easy to overlook and hard to trace.
When your calculation follows the serial number through the process, it identifies SN006 as an unresolved unit that hasn’t yet reached a passing state - not just a failed test.
That’s the difference between knowing something failed and knowing something is unfinished.
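The whole walkthrough can be reproduced in a few lines. This is an illustrative sketch (the serial numbers and run results are taken from the example above) that counts how many units pass within n runs:

```python
# Chronological pass/fail runs per serial number, from the example above.
units = {
    "SN001": [False, False, False, True],  # passes on run 4
    "SN002": [False, False, True],         # passes on run 3
    "SN003": [False, True],                # passes on run 2
    "SN004": [True],                       # first-time pass
    "SN005": [False, False, True],         # passes on run 3
    "SN006": [False],                      # still unresolved
}

def pass_yield(units, n=None):
    """Share of units that passed within their first n runs (all runs if n is None)."""
    return sum(any(runs[:n]) for runs in units.values()) / len(units)

fpy, spy, tpy = pass_yield(units, 1), pass_yield(units, 2), pass_yield(units, 3)
lpy = pass_yield(units)
print(f"FPY {fpy:.1%}  SPY {spy:.1%}  TPY {tpy:.1%}  LPY {lpy:.1%}")
# prints "FPY 16.7%  SPY 33.3%  TPY 66.7%  LPY 83.3%"
```

Note how SN006's single failed run shows up only in LPY: no amount of passing by the other units hides the one that never reached a passing state.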
Why Time Filtering Leads to Incorrect Yield
When applying a time filter, the intuitive assumption is that it captures all test reports within that time period. For test report yield, that assumption holds true.
But for unit level yield, it does not.
This is where a lot of manufacturers quietly get their numbers wrong, and where the difference between test report yield and unit level yield becomes most apparent.
The Mistake: Including All Tests in the Time Window
Consider what you actually need to calculate a meaningful unit yield figure. Take Third Pass Yield as an example. It measures how many units passed within three runs, which means you need to know the first run for each unit and all subsequent runs that follow.
If you simply include all test reports within a time window, you might be looking at the second or third run of a unit whose first run happened before that window opened. The consequences are:
- You may include partial histories
- You may exclude critical earlier runs
- Your yield calculation becomes distorted
The Correct Approach: First Seen in Process
To calculate unit level yield correctly, you must base your time filter on when a unit first enters the process, not on when individual tests occur.
This means:
- Identify units whose first run falls within the selected period
- Include all data for those units, even outside the time window
Systems designed for proper yield analysis handle this by tracking units from their first occurrence and linking all subsequent test runs.
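As a sketch of the filtering rule (field names like `sn` and `time` are hypothetical, not any particular system's schema): the time filter is applied to each unit's first run, and the later yield calculation then keeps every run for the selected units.

```python
from datetime import datetime

# One row per test report; field names are illustrative.
reports = [
    {"sn": "SN010", "time": datetime(2024, 5, 28), "passed": False},  # first run before June
    {"sn": "SN010", "time": datetime(2024, 6, 2),  "passed": True},
    {"sn": "SN011", "time": datetime(2024, 6, 3),  "passed": True},   # first run inside June
]

window_start, window_end = datetime(2024, 6, 1), datetime(2024, 7, 1)

# Step 1: find each unit's FIRST run, and keep only units first seen in the window.
first_seen = {}
for r in sorted(reports, key=lambda r: r["time"]):
    first_seen.setdefault(r["sn"], r["time"])
selected = {sn for sn, t in first_seen.items() if window_start <= t < window_end}

# Step 2: include ALL runs for the selected units - even runs outside the window.
relevant = [r for r in reports if r["sn"] in selected]

print(sorted(selected))  # prints "['SN011']"
```

A naive filter on report timestamps would have counted SN010's June run as a first-time pass; filtering on first-seen correctly excludes SN010 from June's yield, because it entered the process in May.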

Unit Level Yield: The Foundation for Root Cause Analysis
Measuring yield without taking serial numbers into account gives you station performance. That is genuinely useful, but it won't tell you what's actually happening to your products.
A unit that fails and is retested three times before passing won't damage your test report yield in the way it damages your business. The time it consumes, the operator resource, the risk that something is being masked by repeated attempts - none of that surfaces in a report-level number.
Unit level yield, calculated correctly, brings it into view. For test engineers, this is the foundation of meaningful root cause analysis. For quality managers and production leads, it's the difference between a dashboard that looks good and one that drives improvement.
Most manufacturers who revisit how they calculate yield discover that their numbers don’t match reality.
Getting yield right starts with getting the data right. When your data reflects the full journey of each unit, it becomes the foundation for accurate yield metrics, effective root cause analysis, and meaningful insight into what’s actually happening in production.