Particle Swarm Optimization (PSO) from scratch. Simplest explanation in Python
Let’s start with making a function which we’ll be optimizing using PSO.
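The article doesn't reproduce the function itself here, so as a stand-in let's use a simple multimodal one-dimensional function (the exact choice is my assumption; any function with several minima on the search interval will do):

```python
import numpy as np

def function(x):
    """A multimodal 1-D function with several local minima on [-10, 3]."""
    return x * np.sin(x)

x = np.linspace(-10, 3, 300)  #the curve we'll plot the particles against later
y = function(x)
```

Any replacement works as long as `function` accepts a NumPy array and returns an array of the same shape, since the PSO code below evaluates the whole population at once.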
Now let’s discuss the algorithm itself.
particle → population → swarm
Now let’s get back to coding and implement in Python what we’ve discussed.

Initially, let’s define our hyperparameters. As in many other metaheuristic algorithms, these values usually have to be tuned along the way, and there is no universal set that works everywhere. But let’s stick with these:

import numpy as np
import numpy.random as rnd
import matplotlib.pyplot as plt

POP_SIZE = 10 #population size
MAX_ITER = 30 #number of optimization iterations
w = 0.2 #inertia weight
c1 = 1 #personal (cognitive) acceleration factor
c2 = 2 #social acceleration factor

Now let’s create a function that generates a random population:

def populate(size):
    x1, x2 = -10, 3 #x1, x2 = left and right boundaries of our X axis
    pop = rnd.uniform(x1, x2, size) #size = number of particles in the population
    return pop

If we visualize it, we’ll get something like this:

x1 = populate(50)
y1 = function(x1)

plt.plot(x, y, lw=3, label='Func to optimize') #x, y = the function curve computed earlier
plt.plot(x1, y1, marker='o', ls='', label='Particles')
plt.xlabel('x')
plt.ylabel('y')
plt.legend()
plt.grid(True)
plt.show()

Image by author.

Here you can see that I randomly initialized a population of 50 particles, some of which are already close to the solution.

Now let’s implement the PSO algorithm itself. I commented every line of the code, but if you have any questions, feel free to ask in the comments below.

"""Particle Swarm Optimization (PSO)"""
particles = populate(POP_SIZE) #generating a set of particles
velocities = np.zeros(np.shape(particles)) #velocities of the particles
gains = -np.array(function(particles)) #calculating function values for the population

best_positions = np.copy(particles) #it's our first iteration, so all positions are one of the best
swarm_best_position = particles[np.argmax(gains)] #x with with the best gain
swarm_best_gain = np.max(gains) #highest gain

l = np.empty((MAX_ITER, POP_SIZE)) #array to gather all pops to visualise afterwards

for i in range(MAX_ITER):

l[i] = np.array(np.copy(particles)) #collecting a pop to visualise

r1 = rnd.uniform(0, 1, POP_SIZE) #defining a random coefficient for private behavior
r2 = rnd.uniform(0, 1, POP_SIZE) #defining a random coefficient for social behavior

velocities = np.array(w * velocities + c1 * r1 * (best_positions - particles) + c2 * r2 * (swarm_best_position - particles)) #calculating velocities

particles+=velocities #updating position by adding the rate

new_gains = -np.array(function(particles)) #calculating latest gains

idx = np.where(new_gains > gains) #getting index of Xs, which have a greater gain now
best_positions[idx] = particles[idx] #updating one of the best positions with the brand new particles
gains[idx] = new_gains[idx] #updating gains

if np.max(new_gains) > swarm_best_gain: #if current maxima is greateer than across all previous iters, than assign
swarm_best_position = particles[np.argmax(new_gains)] #assigning one of the best candidate solution
swarm_best_gain = np.max(new_gains) #assigning one of the best gain

print(f'Iteration {i+1} tGain: {swarm_best_gain}')

After 30 iterations we get this:

PSO (w=0.2, c1=1, c2=2). Image by author.
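The article doesn't show the plotting code behind these figures, but since every population was collected in `l`, a minimal sketch along these lines (the exact plotting code is my assumption, not the author's) can reproduce such a snapshot plot:

```python
import numpy as np
import matplotlib.pyplot as plt
import numpy.random as rnd

#stand-ins so the sketch runs on its own; replace them with the real
#`function` and the `l` array collected by the loop above
def function(x):
    return x * np.sin(x)

l = rnd.uniform(-10, 3, (30, 10)) #placeholder for the 30 collected populations

x = np.linspace(-10, 3, 300)
plt.plot(x, function(x), lw=3, label='Func to optimize')
for it in (0, 9, 29): #an early, a middle and the final iteration
    plt.plot(l[it], function(l[it]), marker='o', ls='', label=f'Iteration {it+1}')
plt.xlabel('x')
plt.ylabel('y')
plt.legend()
plt.grid(True)
plt.savefig('pso_iterations.png') #or plt.show()
```

Animating all 30 rows of `l` frame by frame (e.g. with `matplotlib.animation.FuncAnimation`) gives the moving-swarm effect seen in the figures.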

As you can see, the algorithm fell into a local minimum, which isn’t what we wanted. That’s why we need to tune our hyperparameters and start again. This time I decided to set the inertia weight w=0.8, so the previous velocity now has a greater impact on the current state.

PSO (w=0.9, c1=1, c2=2). Image by author.

And voilà, we reached the global minimum of the function. I strongly encourage you to play around with POP_SIZE, c₁ and c₂. It’ll let you gain a better understanding of the code and the concept behind PSO. If you’re interested, you can complicate the task and optimize some 3D function with a nice visualization.
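For the 3D version (a 2-D input, so the surface lives in 3D), the same loop works almost unchanged; here is a minimal sketch (my own, not from the article) that minimizes the sphere function f(x, y) = x² + y² with vectorized particles:

```python
import numpy as np

rnd = np.random.default_rng(0)

def function3d(p):
    """Sphere function: global minimum 0 at (0, 0); p has shape (n, 2)."""
    return np.sum(p ** 2, axis=1)

POP_SIZE, MAX_ITER = 20, 100
w, c1, c2 = 0.8, 1, 2

particles = rnd.uniform(-10, 10, (POP_SIZE, 2)) #2-D positions
velocities = np.zeros_like(particles)
gains = -function3d(particles) #negated: maximize gain to minimize f

best_positions = particles.copy()
swarm_best_position = particles[np.argmax(gains)].copy()
swarm_best_gain = gains.max()

for i in range(MAX_ITER):
    r1 = rnd.uniform(0, 1, (POP_SIZE, 1)) #one coefficient per particle, shared across dimensions
    r2 = rnd.uniform(0, 1, (POP_SIZE, 1))
    velocities = (w * velocities
                  + c1 * r1 * (best_positions - particles)
                  + c2 * r2 * (swarm_best_position - particles))
    particles += velocities
    new_gains = -function3d(particles)
    idx = new_gains > gains #boolean mask of improved particles
    best_positions[idx] = particles[idx]
    gains[idx] = new_gains[idx]
    if new_gains.max() > swarm_best_gain:
        swarm_best_position = particles[np.argmax(new_gains)].copy()
        swarm_best_gain = new_gains.max()

print(swarm_best_position, -swarm_best_gain) #best position found and its function value
```

The only real changes are the array shapes and reshaping r₁, r₂ to broadcast over the coordinate axis; plotting `function3d` as a surface with the swarm scattered on top makes a nice visualization.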


All my articles on Medium are free and open-access, that’s why I’d really appreciate it if you followed me here!

P.S. I’m extremely passionate about (Geo)Data Science, ML/AI and Climate Change. So if you want to work together on some project, please contact me on LinkedIn.

🛰️Follow for more🛰️
