Taguchi Loss Function

Saying the words “Genichi Taguchi” to a hard-core “western statistician” may get you some dirty looks. In fact, some of these crazy statisticians may want to strike you just for saying his name. Why the hate, you may ask? Good question.

Let me give you my take on it. Genichi Taguchi is a Japanese engineer and (close your eyes western stats geeks) statistician.

Hard core statistics nerds (most of whom never actually make it to the gemba) will tell you how Mr. Taguchi’s methods break all kinds of rules. They will spout out words like orthogonality, confounding, and all kinds of other gibberish in hopes you will turn your back on Taguchi methods.

Ignore them!

Fear not, friends… those statistics friends of ours should be ignored and allowed to study their monitors all day long, leaving everyone else alone. I am here to tell you that Taguchi methods rock, and I have used them many, many times successfully. My favorite Taguchi DOE is the L18. I will blog more about this in the future.

Tonight I want to introduce a key concept Taguchi teaches known as the “Loss Function.” It is at the core of all Taguchi methods and must be understood.

Traditional Bell Curve

Let’s start with our traditional “bell curve” approach to defects. Typically we see people draw in upper and lower specification limits (customer requirements). We then see a bell curve drawn in between these specification limits. If a data point falls outside a spec limit we have a defect. If all the data points are between the spec limits there are no defects. Simple as that, right?

Sort of.

Brain Surgeon Final Exams

Say you need to have brain surgery (I pray this never happens by the way). With something so serious it’s safe to assume you would want a top notch surgeon, right? Of course you would. But guess how surgeons get the right to slice into your head? They take exams, bunches of them, in medical school.

Imagine two nice fellows, Bob and Ted, are going through brain surgeon school together. Now imagine they are sitting for their FINAL exam. If they pass this exam they have the right to slice your head open.  After much study Bob and Ted take their exams.

Bob scores a 61% and Ted scores a 59%. Bob is celebrating and sharpening his scalpel as he “passed” the exam. Ted, poor guy, flunked and is looking into this new methodology called MVT as he hears it is replacing Six Sigma… this medical school stuff just isn’t working out for old Ted.

Is There Really a Difference?

But is there really much difference between Bob’s knowledge and Ted’s knowledge? Not likely. Instead, what probably happened is Bob guessed right a few more times than Ted and earned the right to be a brain surgeon. Since his test score was “between” the specification limits, he passed. And since Ted’s score was outside the spec limit, he is sent packing.

Enter the Loss Function

Genichi Taguchi realized this and hated it. So, he decided to turn that bell curve on its head – literally.

Taguchi said that having specification limits was all well and good. But what he wanted people focusing on was the “target” value. He stated that the further we drift away from the target value, the more it costs the company. We want to aim for the target while doing all we can to reduce variation. The spec limits don’t get much of our focus; we aim to nail that target and keep reducing variation until that is our reality.
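The contrast can be sketched in a few lines of code. Below is a minimal illustration of the quadratic loss function, L(x) = k(x − T)², next to the traditional “goal-post” view (zero loss inside the spec limits, full defect cost outside). The target, spec limits, and k value are made-up numbers for illustration, not anything from Taguchi’s own examples.

```python
# Taguchi's quadratic loss function for the "nominal is best" case:
# L(x) = k * (x - T)^2, where T is the target and k is a cost coefficient.
# All numbers below are illustrative assumptions.

def taguchi_loss(x, target, k=1.0):
    """Loss grows with the squared distance from the target value."""
    return k * (x - target) ** 2

def goal_post_loss(x, lsl, usl, defect_cost=1.0):
    """Traditional view: zero loss between the spec limits, full cost outside."""
    return 0.0 if lsl <= x <= usl else defect_cost

# Target of 10.0 with spec limits at +/- 1.0
target, lsl, usl = 10.0, 9.0, 11.0

for x in [10.0, 10.5, 10.99, 11.01]:
    print(f"x={x:5.2f}  goal-post={goal_post_loss(x, lsl, usl):.2f}  "
          f"taguchi={taguchi_loss(x, target):.4f}")
```

Note what happens at the spec limit: the goal-post loss jumps from 0 to the full defect cost between x = 10.99 and x = 11.01, while the Taguchi loss for those two values is nearly identical. That discontinuity is exactly the Bob-and-Ted problem.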

This is not contradictory to what Six Sigma teaches. Six Sigma also aims to reduce variation while centering our process on the target. But if a Six Sigma practitioner ever slips into thinking that simply staying between the specification limits is the goal, tell them the story of Bob the brain surgeon with the very expensive malpractice premium.
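To put numbers on the Bob-and-Ted story: with a pass mark of 60% and a target score of 100%, the goal-post view gives opposite verdicts for two nearly identical scores, while the loss-function view shows their “loss” is almost the same. The pass mark, target, and k = 1 are my own assumptions for the sake of the example.

```python
# Hypothetical Bob (61%) and Ted (59%) exam scores under both views.
# Pass mark of 60%, target of 100%, and k = 1 are assumed for illustration.

def taguchi_loss(score, target=100.0, k=1.0):
    """Quadratic loss: distance from a perfect score, squared."""
    return k * (score - target) ** 2

def pass_fail(score, pass_mark=60.0):
    """Goal-post view: a score is simply in spec or out of spec."""
    return "pass" if score >= pass_mark else "fail"

bob, ted = 61.0, 59.0
print(pass_fail(bob), pass_fail(ted))        # prints: pass fail
print(taguchi_loss(bob), taguchi_loss(ted))  # prints: 1521.0 1681.0
```

A two-point gap in score produces opposite verdicts under the goal-post view, but losses of 1521 versus 1681 (about a 10% difference) under the quadratic view, which matches our intuition that Bob and Ted know roughly the same amount.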

I will write more on Taguchi methods in the future. There are some really slick ideas I want to share.

Subscribe to LSS Academy

If you enjoyed this article please consider subscribing to our full feed RSS. You can also subscribe by email and have new articles sent directly to your inbox.



  1. Anonymous

    April 16, 2007 - 8:01 am

    good stuff. look forward to future taguchi discussion!

  2. Jon Miller

    April 16, 2007 - 9:42 am

    Rock on.

  3. Ron Pereira

    April 16, 2007 - 10:23 am

    Thanks Jon. Right back at ya brutha!

  4. Maciej

    April 16, 2007 - 12:40 pm

    Interesting.. I hadn’t heard of this guy before. What is so statistician unfriendly about this approach though?

  5. Ron Pereira

    April 16, 2007 - 12:52 pm

Great question Maciej. Their main beef is with his DOEs. A Taguchi DOE is like an enhanced fractional factorial design. Since a Taguchi DOE is not completely orthogonal (balanced), some say they can be misleading. I will write more about Taguchi DOEs soon, as there is some truth to their concern if you choose the wrong Taguchi DOE. But my favorite, the L18, is bulletproof and probably my all-time favorite DOE design out there.

  6. walt Eschmann

    July 6, 2008 - 1:09 pm

Need information on the Taguchi loss function for one-sided models. The formula I have is L(x) = k*x^2 for smaller is better and L(x) = k*(1/x^2) for larger is better. Is x the value from the target, or is x the value of x-t? For example, x = .48 and t = .50. Should k be multiplied by x and then squared? This is for applications such as chemicals.

    thanks in advance for any help.