Implementing Particle Swarm Optimization — Part 2

Induraj
4 min read · Feb 23, 2023


Implementing gradient descent in Python (click here)

Implementation of Ant Colony Optimization using Python to solve the Traveling Salesman Problem (click here)

Implementing Particle Swarm Optimization — Part 1

Particle swarm optimization (PSO) is a population-based optimization algorithm inspired by the social behavior of birds flocking or fish schooling. In PSO, a group of candidate solutions (particles) moves through the search space in search of the optimal solution. Each particle has a position and velocity, which are updated iteratively based on its own experience and that of the group, in order to move toward the best solution. By simulating this social behavior and iterative updating, PSO can efficiently explore the search space and find good solutions to optimization problems.
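Concretely, each particle i keeps a position x_i and a velocity v_i, and at every iteration applies the standard update rules (the same form used in the code below), where w is the inertia weight, c1 and c2 are acceleration coefficients, and r1, r2 are random numbers drawn uniformly from [0, 1]:

v_i = w * v_i + c1 * r1 * (pbest_i - x_i) + c2 * r2 * (gbest - x_i)
x_i = x_i + v_i

Here pbest_i is the best position particle i has visited so far, and gbest is the best position found by the whole swarm.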

Advantages:

  • It is a simple optimization algorithm that is easy to understand and implement.
  • It does not require gradient information, making it suitable for non-differentiable and discontinuous objective functions.
  • It can search the entire solution space and can find multiple solutions in a single run.
  • It has the ability to escape from local optima due to its stochastic nature.
  • It can be used in a wide range of applications, including optimization problems in engineering, finance, and healthcare.

Disadvantages:

  • It is not guaranteed to find the global optimum and can converge to a local optimum.
  • The performance of PSO depends heavily on the selection of the algorithm parameters, which can be time-consuming and require a lot of experimentation.
  • PSO can suffer from premature convergence, which occurs when the particles converge too quickly to a suboptimal solution (a common mitigation is sketched after this list).
  • It can be computationally expensive, especially for high-dimensional problems with a large number of particles.
  • It can be sensitive to noise in the objective function, which can lead to inaccurate results.
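One common remedy for premature convergence is to decay the inertia weight w over the iterations, favoring exploration early and exploitation late. The snippet below is a minimal sketch of that variant, with w_start and w_end as illustrative values; it is not part of the implementation that follows.

# A minimal sketch of linear inertia-weight decay (illustrative values)
max_iterations = 100
w_start, w_end = 0.9, 0.4
for i in range(max_iterations):
    w = w_start - (w_start - w_end) * i / (max_iterations - 1)
    # use this w in the velocity update of the main PSO loop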

Implementation in Python:

This code minimizes the function f(x, y) = x^2 + y^2 using particle swarm optimization. The search space is defined by the bounds [-5, 5] in both dimensions. The algorithm starts with num_particles = 20 particles randomly distributed in the search space and runs for max_iterations = 100 iterations. Each particle's velocity and position are updated based on its personal best position and the swarm's global best.

import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D

# Define the function to minimize
def function(x, y):
    return x**2 + y**2

# Set the bounds of the search space
bounds = [-5, 5]

# Set the parameters of the algorithm
num_particles = 20
max_iterations = 100
c1 = 2
c2 = 2
w = 0.7

# Initialize the particles
particles_position = np.random.uniform(low=bounds[0], high=bounds[1], size=(num_particles, 2))
particles_velocity = np.zeros((num_particles, 2))
particles_best_position = particles_position.copy()
particles_best_value = np.array([function(x, y) for x, y in particles_position])

# Initialize the global best position
global_best_index = np.argmin(particles_best_value)
global_best_position = particles_best_position[global_best_index]
global_best_value = particles_best_value[global_best_index]

# Run the algorithm
for i in range(max_iterations):
    # Update the particles' velocity and position
    r1 = np.random.uniform(size=(num_particles, 2))
    r2 = np.random.uniform(size=(num_particles, 2))
    particles_velocity = w * particles_velocity \
        + c1 * r1 * (particles_best_position - particles_position) \
        + c2 * r2 * (global_best_position - particles_position)
    particles_position = particles_position + particles_velocity

    # Apply the bounds
    particles_position = np.clip(particles_position, bounds[0], bounds[1])

    # Update each particle's best position and value
    new_particles_best_value = np.array([function(x, y) for x, y in particles_position])
    mask = new_particles_best_value < particles_best_value
    particles_best_position[mask] = particles_position[mask]
    particles_best_value[mask] = new_particles_best_value[mask]

    # Update the global best position and value
    global_best_index = np.argmin(particles_best_value)
    global_best_position = particles_best_position[global_best_index]
    global_best_value = particles_best_value[global_best_index]

    # Print the best value in the current iteration
    print('Iteration {}: Best value = {:.4f}'.format(i + 1, global_best_value))

# Plot the function and the global best position
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')

x = np.linspace(bounds[0], bounds[1], 100)
y = np.linspace(bounds[0], bounds[1], 100)
X, Y = np.meshgrid(x, y)
Z = function(X, Y)

ax.plot_surface(X, Y, Z, cmap='coolwarm', alpha=0.8)
ax.scatter(global_best_position[0], global_best_position[1], global_best_value, color='red', s=100)

ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_zlabel('Function value')
plt.show()
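To see the escape from local optima mentioned earlier, you can swap a multimodal objective into the same loop. The sketch below uses the 2-D Rastrigin function, which has many local minima around a global minimum of 0 at (0, 0); everything else in the code above stays unchanged.

# Illustrative alternative objective: the 2-D Rastrigin function
def function(x, y):
    return 20 + x**2 - 10 * np.cos(2 * np.pi * x) + y**2 - 10 * np.cos(2 * np.pi * y)

With c1 = c2 = 2 and w = 0.7, the swarm may stall in a local minimum on this function, which makes it a convenient test bed for the parameter sensitivity described above.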
