Implementing Gradient Descent in Python — Applied in Quality Control to Minimize “Defect Rate”

Induraj
3 min read · Feb 22, 2023


Gradient descent can be used in quality control to optimize manufacturing processes, such as adjusting machine settings to minimize defects or tuning a production line to maximize yield.

The general approach for implementing gradient descent in quality control is the same as in other applications of gradient descent. The first step is to define a cost function that measures the quality of the output (e.g., the number of defects or the yield). Then gradient descent iteratively adjusts the parameters of the manufacturing process in the direction that reduces the cost function, as the toy example below illustrates.
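To make that loop concrete before turning to the full scenario, here is a toy sketch of gradient descent minimizing a one-parameter cost J(w) = (w - 3)^2. The numbers are made up and not tied to any production process:

# Toy cost J(w) = (w - 3)**2 with gradient dJ/dw = 2*(w - 3)
w = 0.0        # initial guess for the parameter
alpha = 0.1    # learning rate
for _ in range(100):
    grad = 2 * (w - 3)    # gradient at the current w
    w = w - alpha * grad  # step against the gradient
print(w)  # approaches the minimizer w = 3

Each iteration moves the parameter a small step opposite the gradient, which is exactly what the production example below does in several dimensions at once.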

Defining the problem:

For example, suppose a company produces electronic components, and a quality control process measures the defect rate of those components. The defect rate is affected by factors such as temperature, humidity, and the speed of the production line. The goal is to adjust these factors so that the defect rate is minimized.

The cost function for this scenario could be the defect rate itself, and the parameters to adjust could be the temperature, the humidity, and the speed of the production line. Gradient descent can then find the values of these parameters that minimize the defect rate. If the defect rate is modeled as a linear function of the parameters (fitted, for example, by regression analysis on historical production data), the gradient of the cost function with respect to each parameter has a simple closed form, sketched below.
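Concretely, for the mean-squared-error cost used in the implementation that follows, the gradient is a single matrix expression. A minimal sketch, assuming a feature matrix X that already includes an intercept column and a target vector y:

import numpy as np

def mse_gradient(X, theta, y):
    # Gradient of J(theta) = 1/(2m) * sum((X @ theta - y)**2)
    m = y.shape[0]
    return X.T @ (X @ theta - y) / m

This is the same update term that appears inside the gradient descent loop in the full implementation.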

Once the optimal values of the parameters are found using gradient descent, the manufacturing process can be adjusted accordingly to reduce defects and improve quality.

Implementation:

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Load the data
data = pd.read_csv('production_data.csv')

# Normalize a feature array and return the mean and std so that
# new data can be scaled with the same statistics later
def normalize(X):
    mu, sigma = np.mean(X, axis=0), np.std(X, axis=0)
    return (X - mu) / sigma, mu, sigma

X, X_mu, X_sigma = normalize(data[['Temperature', 'Humidity', 'Speed']].values)
y, y_mu, y_sigma = normalize(data['Defect Rate'].values.reshape(-1, 1))

# Add intercept term to X
X = np.hstack((np.ones((X.shape[0], 1)), X))

# Set hyperparameters
alpha = 0.1
num_iterations = 1000

# Initialize the parameters
theta = np.zeros((X.shape[1], 1))

# Define the cost function (mean squared error)
def compute_cost(X, y, theta):
    m = y.shape[0]
    J = 1 / (2 * m) * np.sum((X.dot(theta) - y) ** 2)
    return J

# Define the gradient descent function
def gradient_descent(X, y, theta, alpha, num_iterations):
    m = y.shape[0]
    J_history = []
    for i in range(num_iterations):
        # Vectorized update: theta -= alpha * (1/m) * X.T @ (X @ theta - y)
        theta = theta - alpha / m * X.T.dot(X.dot(theta) - y)
        J_history.append(compute_cost(X, y, theta))
    return theta, J_history

# Run gradient descent to minimize the cost
theta, J_history = gradient_descent(X, y, theta, alpha, num_iterations)

# Plot the cost over time
plt.plot(J_history)
plt.xlabel('Iterations')
plt.ylabel('Cost')
plt.title('Gradient Descent Convergence')
plt.show()

# Plot the data and the fitted regression plane
# (axes are in normalized units, since X and y were standardized)
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(X[:, 1], X[:, 2], y)
x_surf, y_surf = np.meshgrid(np.linspace(-3, 3, 10), np.linspace(-3, 3, 10))
z_surf = theta[0] + theta[1] * x_surf + theta[2] * y_surf
ax.plot_surface(x_surf, y_surf, z_surf, alpha=0.5)
ax.set_xlabel('Temperature (normalized)')
ax.set_ylabel('Humidity (normalized)')
ax.set_zlabel('Defect Rate (normalized)')
plt.title('Gradient Descent Result')
plt.show()

# Predict the defect rate for a new production scenario
# (scale the inputs with the training statistics, then undo the target scaling)
new_data = np.array([[25, 60, 500]])
new_data_norm = (new_data - X_mu) / X_sigma
new_data_norm = np.hstack((np.ones((new_data_norm.shape[0], 1)), new_data_norm))
predicted_defect_rate = new_data_norm.dot(theta) * y_sigma + y_mu
print('Predicted defect rate: {:.2f}'.format(predicted_defect_rate[0][0]))
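Note that production_data.csv, along with its Temperature, Humidity, Speed, and Defect Rate columns, is assumed here rather than provided. To run the script end to end without real measurements, a synthetic dataset can be generated along these lines (the coefficients are purely illustrative):

import numpy as np
import pandas as pd

# Hypothetical synthetic data: defect rate rises with temperature and speed,
# falls with humidity, plus random noise
rng = np.random.default_rng(42)
n = 500
temperature = rng.uniform(20, 35, n)
humidity = rng.uniform(30, 80, n)
speed = rng.uniform(300, 700, n)
defect_rate = (0.05 * temperature - 0.02 * humidity + 0.004 * speed
               + rng.normal(0, 0.3, n))
pd.DataFrame({'Temperature': temperature, 'Humidity': humidity,
              'Speed': speed, 'Defect Rate': defect_rate}).to_csv(
    'production_data.csv', index=False)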

Other related articles:

Implementing gradient descent in Python (click here)

Implementation of particle swarm optimization (click here)

Different variants of gradient descent (click here)
