Problem Setup

  Blog    |     January 30, 2026

The Hidden Production Line problem involves inferring the hidden states of a production line (e.g., "Good" or "Bad") based on observable outputs (e.g., "Defective" or "Non-defective" items). This is a classic Hidden Markov Model (HMM) problem, solved using the Viterbi algorithm to find the most likely sequence of hidden states.

  • Hidden States: ( S = { \text{Good (G)}, \text{Bad (B)} } )
  • Observations: ( O = { \text{Defective (D)}, \text{Non-defective (N)} } )
  • Initial Probabilities:
    ( \pi_{\text{G}} = 0.6 ), ( \pi_{\text{B}} = 0.4 )
  • Transition Probabilities:
    • ( P(\text{G} \to \text{G}) = 0.7 ), ( P(\text{G} \to \text{B}) = 0.3 )
    • ( P(\text{B} \to \text{G}) = 0.4 ), ( P(\text{B} \to \text{B}) = 0.6 )
  • Emission Probabilities:
    • ( P(\text{D} \mid \text{G}) = 0.1 ), ( P(\text{N} \mid \text{G}) = 0.9 )
    • ( P(\text{D} \mid \text{B}) = 0.8 ), ( P(\text{N} \mid \text{B}) = 0.2 )
  • Observation Sequence: ( \text{O} = [\text{D}, \text{N}, \text{D}] )
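For concreteness, the model above can be written down directly in code. The sketch below uses plain Python dictionaries (the variable names are illustrative choices, not part of any library), matching the parameters listed in the setup:

```python
# Hidden Production Line HMM parameters (names are illustrative).
states = ["G", "B"]                  # Good, Bad
obs_seq = ["D", "N", "D"]            # Defective, Non-defective, Defective

init = {"G": 0.6, "B": 0.4}          # pi: initial probabilities
trans = {"G": {"G": 0.7, "B": 0.3},  # P(next state | current state)
         "B": {"G": 0.4, "B": 0.6}}
emit = {"G": {"D": 0.1, "N": 0.9},   # P(observation | state)
        "B": {"D": 0.8, "N": 0.2}}

# Sanity check: each row is a probability distribution.
assert abs(sum(init.values()) - 1.0) < 1e-12
assert all(abs(sum(trans[s].values()) - 1.0) < 1e-12 for s in states)
assert all(abs(sum(emit[s].values()) - 1.0) < 1e-12 for s in states)
```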

Solution: Viterbi Algorithm

We compute the most likely state sequence step-by-step.

Step 1: Initialize (( t = 0 ), observation = D)

  • State G:
    ( \delta_{\text{G}}(0) = \pi_{\text{G}} \times P(\text{D} \mid \text{G}) = 0.6 \times 0.1 = 0.06 )
  • State B:
    ( \delta_{\text{B}}(0) = \pi_{\text{B}} \times P(\text{D} \mid \text{B}) = 0.4 \times 0.8 = 0.32 )
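The initialization is a single elementwise product. A minimal sketch (standalone, repeating the relevant parameters from the setup):

```python
# Viterbi initialization: delta_s(0) = pi_s * P(obs_0 | s), with obs_0 = "D".
init = {"G": 0.6, "B": 0.4}
emit = {"G": {"D": 0.1, "N": 0.9}, "B": {"D": 0.8, "N": 0.2}}

delta0 = {s: init[s] * emit[s]["D"] for s in ("G", "B")}
# delta0 is {"G": 0.06, "B": 0.32} up to floating-point rounding
```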

Step 2: ( t = 1 ) (observation = N)

  • State G:
    • Path from G: ( 0.06 \times 0.7 = 0.042 )
    • Path from B: ( 0.32 \times 0.4 = 0.128 )
    • Max: ( 0.128 )
      ( \delta_{\text{G}}(1) = 0.128 \times P(\text{N} \mid \text{G}) = 0.128 \times 0.9 = 0.1152 )
      Backpointer: B (since path from B gave max)
  • State B:
    • Path from G: ( 0.06 \times 0.3 = 0.018 )
    • Path from B: ( 0.32 \times 0.6 = 0.192 )
    • Max: ( 0.192 )
      ( \delta_{\text{B}}(1) = 0.192 \times P(\text{N} \mid \text{B}) = 0.192 \times 0.2 = 0.0384 )
      Backpointer: B (since path from B gave max)

Step 3: ( t = 2 ) (observation = D)

  • State G:
    • Path from G: ( 0.1152 \times 0.7 = 0.08064 )
    • Path from B: ( 0.0384 \times 0.4 = 0.01536 )
    • Max: ( 0.08064 )
      ( \delta_{\text{G}}(2) = 0.08064 \times P(\text{D} \mid \text{G}) = 0.08064 \times 0.1 = 0.008064 )
      Backpointer: G (since path from G gave max)
  • State B:
    • Path from G: ( 0.1152 \times 0.3 = 0.03456 )
    • Path from B: ( 0.0384 \times 0.6 = 0.02304 )
    • Max: ( 0.03456 )
      ( \delta_{\text{B}}(2) = 0.03456 \times P(\text{D} \mid \text{B}) = 0.03456 \times 0.8 = 0.027648 )
      Backpointer: G (since path from G gave max)
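Steps 2 and 3 apply the same recursion: for each state ( s ), take the best incoming path ( \max_{s'} \delta_{s'}(t-1) P(s' \to s) ), then multiply by the emission probability of the current observation. A hedged sketch of that single update (self-contained; the function name is my own):

```python
trans = {"G": {"G": 0.7, "B": 0.3}, "B": {"G": 0.4, "B": 0.6}}
emit = {"G": {"D": 0.1, "N": 0.9}, "B": {"D": 0.8, "N": 0.2}}

def viterbi_step(delta_prev, obs):
    """One Viterbi recursion: returns (delta_t, backpointers)."""
    delta, back = {}, {}
    for s in ("G", "B"):
        # Best previous state feeding into s.
        prev = max(("G", "B"), key=lambda p: delta_prev[p] * trans[p][s])
        delta[s] = delta_prev[prev] * trans[prev][s] * emit[s][obs]
        back[s] = prev
    return delta, back

delta1, back1 = viterbi_step({"G": 0.06, "B": 0.32}, "N")  # Step 2
delta2, back2 = viterbi_step(delta1, "D")                  # Step 3
# delta1 ~ {"G": 0.1152, "B": 0.0384}; both backpointers are "B"
# delta2 ~ {"G": 0.008064, "B": 0.027648}; both backpointers are "G"
```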

Step 4: Backtracking

  • Final max probability: ( \max(0.008064, 0.027648) = 0.027648 ) → State B at ( t = 2 ).
  • Backtrack:
    • ( t = 2 ): B (from G at ( t = 1 ))
    • ( t = 1 ): G (from B at ( t = 0 ))
    • ( t = 0 ): B
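The full pass, including backtracking, can be sketched as one short function (a self-contained illustration of the steps above, not a production implementation; the names are my own):

```python
states = ["G", "B"]
init = {"G": 0.6, "B": 0.4}
trans = {"G": {"G": 0.7, "B": 0.3}, "B": {"G": 0.4, "B": 0.6}}
emit = {"G": {"D": 0.1, "N": 0.9}, "B": {"D": 0.8, "N": 0.2}}

def viterbi(obs_seq):
    # Initialization: delta_s(0) = pi_s * P(obs_0 | s).
    delta = [{s: init[s] * emit[s][obs_seq[0]] for s in states}]
    back = []
    # Recursion over the remaining observations.
    for obs in obs_seq[1:]:
        d, b = {}, {}
        for s in states:
            prev = max(states, key=lambda p: delta[-1][p] * trans[p][s])
            d[s] = delta[-1][prev] * trans[prev][s] * emit[s][obs]
            b[s] = prev
        delta.append(d)
        back.append(b)
    # Backtrack from the most probable final state.
    last = max(states, key=lambda s: delta[-1][s])
    path = [last]
    for b in reversed(back):
        path.append(b[path[-1]])
    path.reverse()
    return path, delta[-1][last]

path, prob = viterbi(["D", "N", "D"])
# path == ["B", "G", "B"], prob ~ 0.027648, matching the hand computation
```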

Result

The most likely sequence of hidden states is [B, G, B].
This means:

  • Start: Production line is Bad (produces Defective item).
  • Transition: Switches to Good (produces Non-defective item).
  • End: Returns to Bad (produces Defective item).

Key Insight

The Viterbi algorithm decodes the hidden state sequence by maximizing the joint probability of observations and states. Dynamic programming keeps only the best path into each state at each time step, so the cost is ( O(N^2 T) ) for ( N ) states and ( T ) observations, rather than the ( O(N^T) ) of enumerating every state sequence. This approach is widely used in systems with latent states, such as fault detection in machinery, speech recognition, and bioinformatics.
